Dec 05 00:18:35 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 05 00:18:35 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 05 00:18:35 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 00:18:35 localhost kernel: BIOS-provided physical RAM map:
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 05 00:18:35 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
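
The e820 map above fixes the guest's usable RAM. A minimal sketch, assuming log lines in this exact journalctl format, that parses the BIOS-e820 entries and totals the usable ranges; for this map it comes to just under 8 GiB, consistent with the Memory: summary later in the log:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(lines):
        """Sum the 'usable' BIOS-e820 ranges (end addresses are inclusive)."""
        total = 0
        for line in lines:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1
        return total

    # For the three usable ranges above: 0x9fc00 + 0xbfedb000 + 0x140000000
    # = 8,589,388,800 bytes, i.e. just under 8 GiB.
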
Dec 05 00:18:35 localhost kernel: NX (Execute Disable) protection: active
Dec 05 00:18:35 localhost kernel: APIC: Static calls initialized
Dec 05 00:18:35 localhost kernel: SMBIOS 2.8 present.
Dec 05 00:18:35 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 05 00:18:35 localhost kernel: Hypervisor detected: KVM
Dec 05 00:18:35 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 05 00:18:35 localhost kernel: kvm-clock: using sched offset of 3975422000 cycles
Dec 05 00:18:35 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 05 00:18:35 localhost kernel: tsc: Detected 2800.000 MHz processor
Dec 05 00:18:35 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 05 00:18:35 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 05 00:18:35 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 05 00:18:35 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 05 00:18:35 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 05 00:18:35 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 05 00:18:35 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 05 00:18:35 localhost kernel: Using GB pages for direct mapping
Dec 05 00:18:35 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 05 00:18:35 localhost kernel: ACPI: Early table checksum verification disabled
Dec 05 00:18:35 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 05 00:18:35 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 00:18:35 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 00:18:35 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 00:18:35 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 05 00:18:35 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 00:18:35 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 05 00:18:35 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 05 00:18:35 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 05 00:18:35 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 05 00:18:35 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 05 00:18:35 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 05 00:18:35 localhost kernel: No NUMA configuration found
Dec 05 00:18:35 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 05 00:18:35 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 05 00:18:35 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
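
The crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M parameter from the command line picks a reservation size by total-RAM bucket; with roughly 8 GiB this guest falls in the 2G-64G range and gets 256 MB, exactly the 0xaf000000-0xbf000000 window reserved here. A hedged sketch of that selection logic (the real parser is the kernel's parse_crashkernel(); this is illustrative only):

    def pick_crashkernel(spec, ram_bytes):
        """Pick the size for ram_bytes from a crashkernel=range:size,... spec."""
        def size(s):  # '256M' / '1G' -> bytes
            units = {"M": 2**20, "G": 2**30}
            return int(s[:-1]) * units[s[-1]]
        for entry in spec.split(","):
            rng, _, sz = entry.partition(":")
            lo, _, hi = rng.partition("-")
            # a range matches when lo <= ram < hi; an empty hi is open-ended
            if size(lo) <= ram_bytes and (hi == "" or ram_bytes < size(hi)):
                return sz
        return None

    print(pick_crashkernel("1G-2G:192M,2G-64G:256M,64G-:512M", 8 << 30))  # 256M
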
Dec 05 00:18:35 localhost kernel: Zone ranges:
Dec 05 00:18:35 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 05 00:18:35 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 05 00:18:35 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 05 00:18:35 localhost kernel:   Device   empty
Dec 05 00:18:35 localhost kernel: Movable zone start for each node
Dec 05 00:18:35 localhost kernel: Early memory node ranges
Dec 05 00:18:35 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 05 00:18:35 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 05 00:18:35 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 05 00:18:35 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 05 00:18:35 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 05 00:18:35 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 05 00:18:35 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 05 00:18:35 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 05 00:18:35 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 05 00:18:35 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 05 00:18:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 05 00:18:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 05 00:18:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 05 00:18:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 05 00:18:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 05 00:18:35 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 05 00:18:35 localhost kernel: TSC deadline timer available
Dec 05 00:18:35 localhost kernel: CPU topo: Max. logical packages:   8
Dec 05 00:18:35 localhost kernel: CPU topo: Max. logical dies:       8
Dec 05 00:18:35 localhost kernel: CPU topo: Max. dies per package:   1
Dec 05 00:18:35 localhost kernel: CPU topo: Max. threads per core:   1
Dec 05 00:18:35 localhost kernel: CPU topo: Num. cores per package:     1
Dec 05 00:18:35 localhost kernel: CPU topo: Num. threads per package:   1
Dec 05 00:18:35 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 05 00:18:35 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 05 00:18:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 05 00:18:35 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 05 00:18:35 localhost kernel: Booting paravirtualized kernel on KVM
Dec 05 00:18:35 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 05 00:18:35 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 05 00:18:35 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 05 00:18:35 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 05 00:18:35 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
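
The percpu figures above are self-consistent: static (s) + reserved (r) + dynamic (d) equals the per-CPU unit size (u), and eight such units fill exactly one 2 MiB allocation. A quick check:

    s, r, d, u = 225280, 8192, 28672, 262144
    assert s + r + d == u          # 262144 bytes per CPU
    assert u == 64 * 4096          # "Embedded 64 pages/cpu"
    assert 8 * u == 2097152        # 8 CPUs fit one chunk (alloc=1*2097152)
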
Dec 05 00:18:35 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 05 00:18:35 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 00:18:35 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
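
BOOT_IMAGE= is added by GRUB and is not a kernel parameter, hence the warning; the kernel forwards it to user space, and it reappears below in /init's environment. A rough sketch of that split; KNOWN here is a hand-picked subset for this log, not the kernel's real parameter table:

    KNOWN = {"root", "ro", "console", "no_timer_check", "net.ifnames", "crashkernel"}

    def split_cmdline(cmdline):
        """Separate parameters the kernel consumes from ones passed on."""
        kernel, userspace = [], []
        for tok in cmdline.split():
            key = tok.split("=", 1)[0]
            (kernel if key in KNOWN else userspace).append(tok)
        return kernel, userspace
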
Dec 05 00:18:35 localhost kernel: random: crng init done
Dec 05 00:18:35 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 05 00:18:35 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
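
The hash-table lines tie together as entries x 8 bytes per bucket (one pointer on x86-64) = 2^order x 4 KiB pages; for the dentry cache, 1048576 x 8 = 8388608 = 2^11 x 4096. A small check under that assumption:

    PAGE = 4096

    def table_bytes(entries, bucket_bytes=8):
        size = entries * bucket_bytes
        order = (size // PAGE - 1).bit_length()  # exact here; sizes are power-of-two
        return size, order

    print(table_bytes(1048576))  # (8388608, 11) -> dentry cache line above
    print(table_bytes(524288))   # (4194304, 10) -> inode cache line above
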
Dec 05 00:18:35 localhost kernel: Fallback order for Node 0: 0 
Dec 05 00:18:35 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 05 00:18:35 localhost kernel: Policy zone: Normal
Dec 05 00:18:35 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 05 00:18:35 localhost kernel: software IO TLB: area num 8.
Dec 05 00:18:35 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 05 00:18:35 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 05 00:18:35 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 05 00:18:35 localhost kernel: Dynamic Preempt: voluntary
Dec 05 00:18:35 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 05 00:18:35 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 05 00:18:35 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 05 00:18:35 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 05 00:18:35 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 05 00:18:35 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 05 00:18:35 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 05 00:18:35 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 05 00:18:35 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 00:18:35 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 00:18:35 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 05 00:18:35 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 05 00:18:35 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 05 00:18:35 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 05 00:18:35 localhost kernel: Console: colour VGA+ 80x25
Dec 05 00:18:35 localhost kernel: printk: console [ttyS0] enabled
Dec 05 00:18:35 localhost kernel: ACPI: Core revision 20230331
Dec 05 00:18:35 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 05 00:18:35 localhost kernel: x2apic enabled
Dec 05 00:18:35 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 05 00:18:35 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 05 00:18:35 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 05 00:18:35 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 05 00:18:35 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 05 00:18:35 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 05 00:18:35 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 05 00:18:35 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 05 00:18:35 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 05 00:18:35 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 05 00:18:35 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 05 00:18:35 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 05 00:18:35 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 05 00:18:35 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 05 00:18:35 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 05 00:18:35 localhost kernel: x86/bugs: return thunk changed
Dec 05 00:18:35 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 05 00:18:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 05 00:18:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 05 00:18:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 05 00:18:35 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 05 00:18:35 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
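
The xstate arithmetic checks out: the 512-byte legacy FXSAVE area plus the 64-byte XSAVE header put the AVX state at offset 576, and adding its 256 bytes gives the 832-byte compacted context reported above:

    legacy_fxsave, xsave_header, avx_state = 512, 64, 256
    offset = legacy_fxsave + xsave_header   # 576, matches xstate_offset[2]
    assert offset + avx_state == 832        # context size from the log line
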
Dec 05 00:18:35 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 05 00:18:35 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 05 00:18:35 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 05 00:18:35 localhost kernel: landlock: Up and running.
Dec 05 00:18:35 localhost kernel: Yama: becoming mindful.
Dec 05 00:18:35 localhost kernel: SELinux:  Initializing.
Dec 05 00:18:35 localhost kernel: LSM support for eBPF active
Dec 05 00:18:35 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 05 00:18:35 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 05 00:18:35 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 05 00:18:35 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 05 00:18:35 localhost kernel: ... version:                0
Dec 05 00:18:35 localhost kernel: ... bit width:              48
Dec 05 00:18:35 localhost kernel: ... generic registers:      6
Dec 05 00:18:35 localhost kernel: ... value mask:             0000ffffffffffff
Dec 05 00:18:35 localhost kernel: ... max period:             00007fffffffffff
Dec 05 00:18:35 localhost kernel: ... fixed-purpose events:   0
Dec 05 00:18:35 localhost kernel: ... event mask:             000000000000003f
Dec 05 00:18:35 localhost kernel: signal: max sigframe size: 1776
Dec 05 00:18:35 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 05 00:18:35 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 05 00:18:35 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 05 00:18:35 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 05 00:18:35 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 05 00:18:35 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 05 00:18:35 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
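
BogoMIPS follows directly from lpj: the kernel reports lpj / (500000 / HZ), and with lpj=2800000 and HZ=1000 (assumed here; it is the usual RHEL 9 configuration) that is 5600.00 per CPU, so 8 CPUs give the 44800.00 total:

    lpj, HZ, cpus = 2_800_000, 1000, 8
    bogomips = lpj / (500_000 / HZ)     # 5600.0, matches the calibration line
    assert bogomips * cpus == 44800.0   # total reported at SMP bring-up
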
Dec 05 00:18:35 localhost kernel: node 0 deferred pages initialised in 21ms
Dec 05 00:18:35 localhost kernel: Memory: 7763992K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 05 00:18:35 localhost kernel: devtmpfs: initialized
Dec 05 00:18:35 localhost kernel: x86/mm: Memory block size: 128MB
Dec 05 00:18:35 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 05 00:18:35 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 05 00:18:35 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 05 00:18:35 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 05 00:18:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 05 00:18:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 05 00:18:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 05 00:18:35 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 05 00:18:35 localhost kernel: audit: type=2000 audit(1764893913.664:1): state=initialized audit_enabled=0 res=1
Dec 05 00:18:35 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 05 00:18:35 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 05 00:18:35 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 05 00:18:35 localhost kernel: cpuidle: using governor menu
Dec 05 00:18:35 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 05 00:18:35 localhost kernel: PCI: Using configuration type 1 for base access
Dec 05 00:18:35 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 05 00:18:35 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 05 00:18:35 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 05 00:18:35 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 05 00:18:35 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 05 00:18:35 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 05 00:18:35 localhost kernel: Demotion targets for Node 0: null
Dec 05 00:18:35 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 05 00:18:35 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 05 00:18:35 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 05 00:18:35 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 05 00:18:35 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 05 00:18:35 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 05 00:18:35 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 05 00:18:35 localhost kernel: ACPI: Interpreter enabled
Dec 05 00:18:35 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 05 00:18:35 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 05 00:18:35 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 05 00:18:35 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 05 00:18:35 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 05 00:18:35 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 05 00:18:35 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [3] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [4] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [5] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [6] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [7] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [8] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [9] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [10] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [11] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [12] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [13] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [14] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [15] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [16] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [17] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [18] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [19] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [20] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [21] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [22] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [23] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [24] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [25] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [26] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [27] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [28] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [29] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [30] registered
Dec 05 00:18:35 localhost kernel: acpiphp: Slot [31] registered
Dec 05 00:18:35 localhost kernel: PCI host bridge to bus 0000:00
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 05 00:18:35 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 05 00:18:35 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 05 00:18:35 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 05 00:18:35 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 05 00:18:35 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 05 00:18:35 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 05 00:18:35 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 05 00:18:35 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 05 00:18:35 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 05 00:18:35 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 05 00:18:35 localhost kernel: iommu: Default domain type: Translated
Dec 05 00:18:35 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 05 00:18:35 localhost kernel: SCSI subsystem initialized
Dec 05 00:18:35 localhost kernel: ACPI: bus type USB registered
Dec 05 00:18:35 localhost kernel: usbcore: registered new interface driver usbfs
Dec 05 00:18:35 localhost kernel: usbcore: registered new interface driver hub
Dec 05 00:18:35 localhost kernel: usbcore: registered new device driver usb
Dec 05 00:18:35 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 05 00:18:35 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 05 00:18:35 localhost kernel: PTP clock support registered
Dec 05 00:18:35 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 05 00:18:35 localhost kernel: NetLabel: Initializing
Dec 05 00:18:35 localhost kernel: NetLabel:  domain hash size = 128
Dec 05 00:18:35 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 05 00:18:35 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 05 00:18:35 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 05 00:18:35 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 05 00:18:35 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 05 00:18:35 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 05 00:18:35 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 05 00:18:35 localhost kernel: vgaarb: loaded
Dec 05 00:18:35 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 05 00:18:35 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 05 00:18:35 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 05 00:18:35 localhost kernel: pnp: PnP ACPI init
Dec 05 00:18:35 localhost kernel: pnp 00:03: [dma 2]
Dec 05 00:18:35 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 05 00:18:35 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 05 00:18:35 localhost kernel: NET: Registered PF_INET protocol family
Dec 05 00:18:35 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 05 00:18:35 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 05 00:18:35 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 05 00:18:35 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 05 00:18:35 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 05 00:18:35 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 05 00:18:35 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 05 00:18:35 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 05 00:18:35 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 05 00:18:35 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 05 00:18:35 localhost kernel: NET: Registered PF_XDP protocol family
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 05 00:18:35 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 05 00:18:35 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 05 00:18:35 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 05 00:18:35 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80517 usecs
Dec 05 00:18:35 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 05 00:18:35 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 05 00:18:35 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 05 00:18:35 localhost kernel: ACPI: bus type thunderbolt registered
Dec 05 00:18:35 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 05 00:18:35 localhost kernel: Initialise system trusted keyrings
Dec 05 00:18:35 localhost kernel: Key type blacklist registered
Dec 05 00:18:35 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 05 00:18:35 localhost kernel: zbud: loaded
Dec 05 00:18:35 localhost kernel: integrity: Platform Keyring initialized
Dec 05 00:18:35 localhost kernel: integrity: Machine keyring initialized
Dec 05 00:18:35 localhost kernel: Freeing initrd memory: 87804K
Dec 05 00:18:35 localhost kernel: NET: Registered PF_ALG protocol family
Dec 05 00:18:35 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 05 00:18:35 localhost kernel: Key type asymmetric registered
Dec 05 00:18:35 localhost kernel: Asymmetric key parser 'x509' registered
Dec 05 00:18:35 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 05 00:18:35 localhost kernel: io scheduler mq-deadline registered
Dec 05 00:18:35 localhost kernel: io scheduler kyber registered
Dec 05 00:18:35 localhost kernel: io scheduler bfq registered
Dec 05 00:18:35 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 05 00:18:35 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 05 00:18:35 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 05 00:18:35 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 05 00:18:35 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 05 00:18:35 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 05 00:18:35 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 05 00:18:35 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 05 00:18:35 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 05 00:18:35 localhost kernel: Non-volatile memory driver v1.3
Dec 05 00:18:35 localhost kernel: rdac: device handler registered
Dec 05 00:18:35 localhost kernel: hp_sw: device handler registered
Dec 05 00:18:35 localhost kernel: emc: device handler registered
Dec 05 00:18:35 localhost kernel: alua: device handler registered
Dec 05 00:18:35 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 05 00:18:35 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 05 00:18:35 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 05 00:18:35 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 05 00:18:35 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 05 00:18:35 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 05 00:18:35 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 05 00:18:35 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 05 00:18:35 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 05 00:18:35 localhost kernel: hub 1-0:1.0: USB hub found
Dec 05 00:18:35 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 05 00:18:35 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 05 00:18:35 localhost kernel: usbserial: USB Serial support registered for generic
Dec 05 00:18:35 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 05 00:18:35 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 05 00:18:35 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 05 00:18:35 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 05 00:18:35 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 05 00:18:35 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 05 00:18:35 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-05T00:18:34 UTC (1764893914)
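
The RTC line pairs the epoch seconds with the wall-clock time it sets, which also dates the audit records above (audit(1764893913.664:1) is one second earlier). Converting the same value:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764893914, tz=timezone.utc))
    # 2025-12-05 00:18:34+00:00, as logged by rtc_cmos
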
Dec 05 00:18:35 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 05 00:18:35 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 05 00:18:35 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 05 00:18:35 localhost kernel: usbcore: registered new interface driver usbhid
Dec 05 00:18:35 localhost kernel: usbhid: USB HID core driver
Dec 05 00:18:35 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 05 00:18:35 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 05 00:18:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 05 00:18:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 05 00:18:35 localhost kernel: Initializing XFRM netlink socket
Dec 05 00:18:35 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 05 00:18:35 localhost kernel: Segment Routing with IPv6
Dec 05 00:18:35 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 05 00:18:35 localhost kernel: mpls_gso: MPLS GSO support
Dec 05 00:18:35 localhost kernel: IPI shorthand broadcast: enabled
Dec 05 00:18:35 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 05 00:18:35 localhost kernel: AES CTR mode by8 optimization enabled
Dec 05 00:18:35 localhost kernel: sched_clock: Marking stable (1705002320, 158712810)->(2041589540, -177874410)
Dec 05 00:18:35 localhost kernel: registered taskstats version 1
Dec 05 00:18:35 localhost kernel: Loading compiled-in X.509 certificates
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 05 00:18:35 localhost kernel: Demotion targets for Node 0: null
Dec 05 00:18:35 localhost kernel: page_owner is disabled
Dec 05 00:18:35 localhost kernel: Key type .fscrypt registered
Dec 05 00:18:35 localhost kernel: Key type fscrypt-provisioning registered
Dec 05 00:18:35 localhost kernel: Key type big_key registered
Dec 05 00:18:35 localhost kernel: Key type encrypted registered
Dec 05 00:18:35 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 05 00:18:35 localhost kernel: Loading compiled-in module X.509 certificates
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 05 00:18:35 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 05 00:18:35 localhost kernel: ima: No architecture policies found
Dec 05 00:18:35 localhost kernel: evm: Initialising EVM extended attributes:
Dec 05 00:18:35 localhost kernel: evm: security.selinux
Dec 05 00:18:35 localhost kernel: evm: security.SMACK64 (disabled)
Dec 05 00:18:35 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 05 00:18:35 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 05 00:18:35 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 05 00:18:35 localhost kernel: evm: security.apparmor (disabled)
Dec 05 00:18:35 localhost kernel: evm: security.ima
Dec 05 00:18:35 localhost kernel: evm: security.capability
Dec 05 00:18:35 localhost kernel: evm: HMAC attrs: 0x1
Dec 05 00:18:35 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 05 00:18:35 localhost kernel: Running certificate verification RSA selftest
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 05 00:18:35 localhost kernel: Running certificate verification ECDSA selftest
Dec 05 00:18:35 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 05 00:18:35 localhost kernel: clk: Disabling unused clocks
Dec 05 00:18:35 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 05 00:18:35 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 05 00:18:35 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 05 00:18:35 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 05 00:18:35 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 05 00:18:35 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 05 00:18:35 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 05 00:18:35 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 05 00:18:35 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 05 00:18:35 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 05 00:18:35 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 05 00:18:35 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 05 00:18:35 localhost kernel: Run /init as init process
Dec 05 00:18:35 localhost kernel:   with arguments:
Dec 05 00:18:35 localhost kernel:     /init
Dec 05 00:18:35 localhost kernel:   with environment:
Dec 05 00:18:35 localhost kernel:     HOME=/
Dec 05 00:18:35 localhost kernel:     TERM=linux
Dec 05 00:18:35 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 05 00:18:35 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 00:18:35 localhost systemd[1]: Detected virtualization kvm.
Dec 05 00:18:35 localhost systemd[1]: Detected architecture x86-64.
Dec 05 00:18:35 localhost systemd[1]: Running in initrd.
Dec 05 00:18:35 localhost systemd[1]: No hostname configured, using default hostname.
Dec 05 00:18:35 localhost systemd[1]: Hostname set to <localhost>.
Dec 05 00:18:35 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 05 00:18:35 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 05 00:18:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 00:18:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 00:18:35 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 05 00:18:35 localhost systemd[1]: Reached target Local File Systems.
Dec 05 00:18:35 localhost systemd[1]: Reached target Path Units.
Dec 05 00:18:35 localhost systemd[1]: Reached target Slice Units.
Dec 05 00:18:35 localhost systemd[1]: Reached target Swaps.
Dec 05 00:18:35 localhost systemd[1]: Reached target Timer Units.
Dec 05 00:18:35 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 00:18:35 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 05 00:18:35 localhost systemd[1]: Listening on Journal Socket.
Dec 05 00:18:35 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 00:18:35 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 00:18:35 localhost systemd[1]: Reached target Socket Units.
Dec 05 00:18:35 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 00:18:35 localhost systemd[1]: Starting Journal Service...
Dec 05 00:18:35 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 05 00:18:35 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 00:18:35 localhost systemd[1]: Starting Create System Users...
Dec 05 00:18:35 localhost systemd[1]: Starting Setup Virtual Console...
Dec 05 00:18:35 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 00:18:35 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 00:18:35 localhost systemd-journald[307]: Journal started
Dec 05 00:18:35 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/6c9ead2d84954e2b9845f862956e441e) is 8.0M, max 153.6M, 145.6M free.
Dec 05 00:18:35 localhost systemd[1]: Started Journal Service.
Dec 05 00:18:35 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec 05 00:18:35 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec 05 00:18:35 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 05 00:18:35 localhost systemd[1]: Finished Create System Users.
Dec 05 00:18:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 00:18:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 00:18:35 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 00:18:35 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 00:18:35 localhost systemd[1]: Finished Setup Virtual Console.
Dec 05 00:18:35 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 05 00:18:35 localhost systemd[1]: Starting dracut cmdline hook...
Dec 05 00:18:35 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec 05 00:18:35 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 05 00:18:35 localhost systemd[1]: Finished dracut cmdline hook.
Dec 05 00:18:35 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 05 00:18:35 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 05 00:18:35 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 05 00:18:35 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 05 00:18:35 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 05 00:18:35 localhost kernel: RPC: Registered udp transport module.
Dec 05 00:18:35 localhost kernel: RPC: Registered tcp transport module.
Dec 05 00:18:35 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 05 00:18:35 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 05 00:18:35 localhost rpc.statd[442]: Version 2.5.4 starting
Dec 05 00:18:35 localhost rpc.statd[442]: Initializing NSM state
Dec 05 00:18:35 localhost rpc.idmapd[447]: Setting log level to 0
Dec 05 00:18:35 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 05 00:18:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 00:18:35 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 00:18:35 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 00:18:35 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 05 00:18:35 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 05 00:18:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 00:18:36 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 05 00:18:36 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 00:18:36 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 00:18:36 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 00:18:36 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 00:18:36 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 00:18:36 localhost systemd[1]: Reached target Network.
Dec 05 00:18:36 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 05 00:18:36 localhost systemd[1]: Starting dracut initqueue hook...
Dec 05 00:18:36 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 05 00:18:36 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 05 00:18:36 localhost kernel:  vda: vda1
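
The virtio-blk capacity line is plain arithmetic: 167772160 logical blocks x 512 bytes = 85,899,345,920 bytes, which is 85.9 GB decimal and exactly 80.0 GiB binary, matching both figures in the log:

    blocks, block_size = 167_772_160, 512
    size = blocks * block_size
    print(size / 10**9, "GB,", size / 2**30, "GiB")  # 85.89934592 GB, 80.0 GiB
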
Dec 05 00:18:36 localhost kernel: libata version 3.00 loaded.
Dec 05 00:18:36 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 05 00:18:36 localhost kernel: scsi host0: ata_piix
Dec 05 00:18:36 localhost kernel: scsi host1: ata_piix
Dec 05 00:18:36 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 05 00:18:36 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 05 00:18:36 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 05 00:18:36 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 05 00:18:36 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 05 00:18:36 localhost systemd[1]: Reached target Initrd Root Device.
Dec 05 00:18:36 localhost systemd[1]: Reached target System Initialization.
Dec 05 00:18:36 localhost systemd[1]: Reached target Basic System.
Dec 05 00:18:36 localhost kernel: ata1: found unknown device (class 0)
Dec 05 00:18:36 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 05 00:18:36 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 05 00:18:36 localhost systemd-udevd[472]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 00:18:36 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 05 00:18:36 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 05 00:18:36 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 05 00:18:36 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 05 00:18:36 localhost systemd[1]: Finished dracut initqueue hook.
Dec 05 00:18:36 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 00:18:36 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 05 00:18:36 localhost systemd[1]: Reached target Remote File Systems.
Dec 05 00:18:36 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 05 00:18:36 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 05 00:18:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 05 00:18:36 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec 05 00:18:36 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 05 00:18:36 localhost systemd[1]: Mounting /sysroot...
Dec 05 00:18:37 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 05 00:18:37 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 05 00:18:37 localhost kernel: XFS (vda1): Ending clean mount
Dec 05 00:18:37 localhost systemd[1]: Mounted /sysroot.
Dec 05 00:18:37 localhost systemd[1]: Reached target Initrd Root File System.
Dec 05 00:18:37 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 05 00:18:37 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 05 00:18:37 localhost systemd[1]: Reached target Initrd File Systems.
Dec 05 00:18:37 localhost systemd[1]: Reached target Initrd Default Target.
Dec 05 00:18:37 localhost systemd[1]: Starting dracut mount hook...
Dec 05 00:18:37 localhost systemd[1]: Finished dracut mount hook.
Dec 05 00:18:37 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 05 00:18:37 localhost rpc.idmapd[447]: exiting on signal 15
Dec 05 00:18:37 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 05 00:18:37 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 05 00:18:37 localhost systemd[1]: Stopped target Network.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Timer Units.
Dec 05 00:18:37 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 05 00:18:37 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Basic System.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Path Units.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Remote File Systems.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Slice Units.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Socket Units.
Dec 05 00:18:37 localhost systemd[1]: Stopped target System Initialization.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Local File Systems.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Swaps.
Dec 05 00:18:37 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut mount hook.
Dec 05 00:18:37 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 05 00:18:37 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 05 00:18:37 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 05 00:18:37 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 05 00:18:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 05 00:18:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 05 00:18:37 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 05 00:18:37 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 05 00:18:37 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 05 00:18:37 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 05 00:18:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 05 00:18:37 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 05 00:18:37 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Closed udev Control Socket.
Dec 05 00:18:37 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Closed udev Kernel Socket.
Dec 05 00:18:37 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 05 00:18:37 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 05 00:18:37 localhost systemd[1]: Starting Cleanup udev Database...
Dec 05 00:18:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 05 00:18:37 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 05 00:18:37 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Stopped Create System Users.
Dec 05 00:18:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 05 00:18:37 localhost systemd[1]: Finished Cleanup udev Database.
Dec 05 00:18:37 localhost systemd[1]: Reached target Switch Root.
Dec 05 00:18:37 localhost systemd[1]: Starting Switch Root...
Dec 05 00:18:37 localhost systemd[1]: Switching root.
Dec 05 00:18:37 localhost systemd-journald[307]: Journal stopped
Dec 05 00:18:38 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Dec 05 00:18:38 localhost kernel: audit: type=1404 audit(1764893917.483:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability open_perms=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:18:38 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:18:38 localhost kernel: audit: type=1403 audit(1764893917.617:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 05 00:18:38 localhost systemd[1]: Successfully loaded SELinux policy in 137.469ms.
Dec 05 00:18:38 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.493ms.
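
The audit records above (type=1404 and type=1403) show SELinux switching to enforcing and the policy loading as PID 1 takes over the real root; the relabel of /dev, /run and the other early mounts is the usual post-switch-root fixup. Assuming the standard policycoreutils tools are installed, the resulting state can be confirmed on the running system with:

    # current SELinux mode: Enforcing, Permissive, or Disabled
    getenforce
    # fuller summary: loaded policy name, mode from /etc/selinux/config, MLS status
    sestatus
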
Dec 05 00:18:38 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 05 00:18:38 localhost systemd[1]: Detected virtualization kvm.
Dec 05 00:18:38 localhost systemd[1]: Detected architecture x86-64.
Dec 05 00:18:38 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
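
systemd's rc-local generator only creates rc-local.service when /etc/rc.d/rc.local is executable, so the skip above is harmless unless the script is actually used. If it is, making it executable is the whole fix:

    # mark the legacy script executable so the generator picks it up on the next boot
    chmod +x /etc/rc.d/rc.local
    # confirm systemd now knows the unit
    systemctl status rc-local.service
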
Dec 05 00:18:38 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Stopped Switch Root.
Dec 05 00:18:38 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 05 00:18:38 localhost systemd[1]: Created slice Slice /system/getty.
Dec 05 00:18:38 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 05 00:18:38 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 05 00:18:38 localhost systemd[1]: Created slice User and Session Slice.
Dec 05 00:18:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 05 00:18:38 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 05 00:18:38 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 05 00:18:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 05 00:18:38 localhost systemd[1]: Stopped target Switch Root.
Dec 05 00:18:38 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 05 00:18:38 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 05 00:18:38 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 05 00:18:38 localhost systemd[1]: Reached target Path Units.
Dec 05 00:18:38 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 05 00:18:38 localhost systemd[1]: Reached target Slice Units.
Dec 05 00:18:38 localhost systemd[1]: Reached target Swaps.
Dec 05 00:18:38 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 05 00:18:38 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 05 00:18:38 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 05 00:18:38 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 05 00:18:38 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 05 00:18:38 localhost systemd[1]: Listening on udev Control Socket.
Dec 05 00:18:38 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 05 00:18:38 localhost systemd[1]: Mounting Huge Pages File System...
Dec 05 00:18:38 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 05 00:18:38 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 05 00:18:38 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 05 00:18:38 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 00:18:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 05 00:18:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 00:18:38 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 05 00:18:38 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 05 00:18:38 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 05 00:18:38 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 05 00:18:38 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 05 00:18:38 localhost systemd[1]: Stopped Journal Service.
Dec 05 00:18:38 localhost systemd[1]: Starting Journal Service...
Dec 05 00:18:38 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 05 00:18:38 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 05 00:18:38 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 00:18:38 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 05 00:18:38 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 05 00:18:38 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 05 00:18:38 localhost systemd-journald[679]: Journal started
Dec 05 00:18:38 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 05 00:18:37 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 05 00:18:37 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 05 00:18:38 localhost kernel: fuse: init (API version 7.37)
Dec 05 00:18:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 05 00:18:38 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
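
The kernel's note that the root XFS filesystem only supports timestamps until 2038 means it was created without the bigtime feature. On reasonably new xfsprogs the flag can be inspected, and recent releases can upgrade it offline; treat the upgrade switch and the device name below as assumptions to verify against your xfsprogs version:

    # feature dump for the mounted filesystem; look for bigtime=1
    xfs_info /
    # newer xfsprogs can enable bigtime on an unmounted filesystem (verify support first):
    # xfs_admin -O bigtime=1 /dev/vda1
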
Dec 05 00:18:38 localhost systemd[1]: Started Journal Service.
Dec 05 00:18:38 localhost systemd[1]: Mounted Huge Pages File System.
Dec 05 00:18:38 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 05 00:18:38 localhost kernel: ACPI: bus type drm_connector registered
Dec 05 00:18:38 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 05 00:18:38 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 05 00:18:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 05 00:18:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 00:18:38 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 05 00:18:38 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 05 00:18:38 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 05 00:18:38 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 05 00:18:38 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 05 00:18:38 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 05 00:18:38 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 05 00:18:38 localhost systemd[1]: Mounting FUSE Control File System...
Dec 05 00:18:38 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 00:18:38 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 05 00:18:38 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 05 00:18:38 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 05 00:18:38 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 05 00:18:38 localhost systemd[1]: Starting Create System Users...
Dec 05 00:18:38 localhost systemd[1]: Mounted FUSE Control File System.
Dec 05 00:18:38 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 05 00:18:38 localhost systemd-journald[679]: Received client request to flush runtime journal.
Dec 05 00:18:38 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 05 00:18:38 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 05 00:18:38 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 05 00:18:38 localhost systemd[1]: Finished Create System Users.
Dec 05 00:18:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 05 00:18:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 05 00:18:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 05 00:18:38 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 05 00:18:38 localhost systemd[1]: Reached target Local File Systems.
Dec 05 00:18:38 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 05 00:18:38 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 05 00:18:38 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 05 00:18:38 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 05 00:18:38 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 05 00:18:38 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 05 00:18:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 05 00:18:38 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Dec 05 00:18:38 localhost systemd[1]: Finished Automatic Boot Loader Update.
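
bootctl finds no EFI system partition because this Nova guest boots via legacy BIOS rather than UEFI, so the automatic boot loader update is a no-op. Checking which firmware path a machine came up on is one line:

    # the efi directory exists only on UEFI boots
    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
    # bootctl reports the same, plus loader details where applicable
    bootctl status
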
Dec 05 00:18:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 05 00:18:38 localhost systemd[1]: Starting Security Auditing Service...
Dec 05 00:18:38 localhost systemd[1]: Starting RPC Bind...
Dec 05 00:18:38 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 05 00:18:38 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 05 00:18:38 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 05 00:18:38 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 05 00:18:38 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 05 00:18:38 localhost augenrules[709]: /sbin/augenrules: No change
Dec 05 00:18:38 localhost systemd[1]: Started RPC Bind.
Dec 05 00:18:38 localhost augenrules[724]: No rules
Dec 05 00:18:38 localhost augenrules[724]: enabled 1
Dec 05 00:18:38 localhost augenrules[724]: failure 1
Dec 05 00:18:38 localhost augenrules[724]: pid 704
Dec 05 00:18:38 localhost augenrules[724]: rate_limit 0
Dec 05 00:18:38 localhost augenrules[724]: backlog_limit 8192
Dec 05 00:18:38 localhost augenrules[724]: lost 0
Dec 05 00:18:38 localhost augenrules[724]: backlog 2
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time 60000
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time_actual 0
Dec 05 00:18:38 localhost augenrules[724]: enabled 1
Dec 05 00:18:38 localhost augenrules[724]: failure 1
Dec 05 00:18:38 localhost augenrules[724]: pid 704
Dec 05 00:18:38 localhost augenrules[724]: rate_limit 0
Dec 05 00:18:38 localhost augenrules[724]: backlog_limit 8192
Dec 05 00:18:38 localhost augenrules[724]: lost 0
Dec 05 00:18:38 localhost augenrules[724]: backlog 2
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time 60000
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time_actual 0
Dec 05 00:18:38 localhost augenrules[724]: enabled 1
Dec 05 00:18:38 localhost augenrules[724]: failure 1
Dec 05 00:18:38 localhost augenrules[724]: pid 704
Dec 05 00:18:38 localhost augenrules[724]: rate_limit 0
Dec 05 00:18:38 localhost augenrules[724]: backlog_limit 8192
Dec 05 00:18:38 localhost augenrules[724]: lost 0
Dec 05 00:18:38 localhost augenrules[724]: backlog 1
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time 60000
Dec 05 00:18:38 localhost augenrules[724]: backlog_wait_time_actual 0
Dec 05 00:18:38 localhost systemd[1]: Started Security Auditing Service.
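
The augenrules block above is the kernel audit status dump, printed several times as augenrules checks and loads rules: no rules installed, auditd at PID 704, an 8192-entry backlog. Assuming the audit userspace is present, the same fields can be read back at any time:

    # kernel audit status: enabled flag, auditd pid, backlog sizing, lost events
    auditctl -s
    # list the currently loaded rules (here: none)
    auditctl -l
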
Dec 05 00:18:38 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 05 00:18:38 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 05 00:18:38 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 05 00:18:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 05 00:18:38 localhost systemd[1]: Starting Update is Completed...
Dec 05 00:18:38 localhost systemd[1]: Finished Update is Completed.
Dec 05 00:18:38 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Dec 05 00:18:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 05 00:18:38 localhost systemd[1]: Reached target System Initialization.
Dec 05 00:18:38 localhost systemd[1]: Started dnf makecache --timer.
Dec 05 00:18:38 localhost systemd[1]: Started Daily rotation of log files.
Dec 05 00:18:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 05 00:18:38 localhost systemd[1]: Reached target Timer Units.
Dec 05 00:18:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 05 00:18:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 05 00:18:38 localhost systemd[1]: Reached target Socket Units.
Dec 05 00:18:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 05 00:18:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 00:18:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 05 00:18:38 localhost systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
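
udev reports that interface NamePolicy= is disabled on the kernel command line, which is why the NIC keeps the kernel name eth0 later in this log instead of a predictable ens*/enp* name. To see what a given host is doing:

    # look for net.ifnames=0 or biosdevname=0 among the boot parameters
    cat /proc/cmdline
    # udev's properties for the interface, including any ID_NET_NAME_* candidates
    udevadm info /sys/class/net/eth0
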
Dec 05 00:18:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 05 00:18:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 05 00:18:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 05 00:18:38 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 05 00:18:38 localhost systemd[1]: Reached target Basic System.
Dec 05 00:18:38 localhost dbus-broker-lau[765]: Ready
Dec 05 00:18:38 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 05 00:18:38 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 05 00:18:38 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 05 00:18:38 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 05 00:18:38 localhost systemd[1]: Starting NTP client/server...
Dec 05 00:18:38 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 05 00:18:38 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 05 00:18:38 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 05 00:18:38 localhost systemd[1]: Started irqbalance daemon.
Dec 05 00:18:38 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 05 00:18:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 00:18:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 00:18:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 00:18:38 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 05 00:18:38 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 05 00:18:38 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 05 00:18:39 localhost systemd[1]: Starting User Login Management...
Dec 05 00:18:39 localhost chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 00:18:39 localhost chronyd[795]: Loaded 0 symmetric keys
Dec 05 00:18:39 localhost chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec 05 00:18:39 localhost chronyd[795]: Loaded seccomp filter (level 2)
Dec 05 00:18:39 localhost systemd[1]: Started NTP client/server.
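
chronyd starts with no symmetric keys and pulls leap-second data from the right/UTC timezone; source selection happens a few seconds later (the "Selected source" lines further down). Once it is up, synchronization state is easy to inspect:

    # NTP sources with reachability and offsets
    chronyc sources -v
    # stratum, current offset, and leap status of the system clock
    chronyc tracking
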
Dec 05 00:18:39 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 05 00:18:39 localhost systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 00:18:39 localhost systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 00:18:39 localhost systemd-logind[792]: New seat seat0.
Dec 05 00:18:39 localhost systemd[1]: Started User Login Management.
Dec 05 00:18:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 05 00:18:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 05 00:18:39 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 05 00:18:39 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 05 00:18:39 localhost kernel: Console: switching to colour dummy device 80x25
Dec 05 00:18:39 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 05 00:18:39 localhost kernel: [drm] features: -context_init
Dec 05 00:18:39 localhost kernel: [drm] number of scanouts: 1
Dec 05 00:18:39 localhost kernel: [drm] number of cap sets: 0
Dec 05 00:18:39 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 05 00:18:39 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 05 00:18:39 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 05 00:18:39 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 05 00:18:39 localhost kernel: kvm_amd: TSC scaling supported
Dec 05 00:18:39 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 05 00:18:39 localhost kernel: kvm_amd: Nested Paging enabled
Dec 05 00:18:39 localhost kernel: kvm_amd: LBR virtualization supported
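
The kvm_amd lines show this guest itself exposes AMD-V with nested paging, i.e. nested virtualization is available inside the VM. Two quick checks:

    # 1 (or Y) means the kvm_amd module was loaded with nesting enabled
    cat /sys/module/kvm_amd/parameters/nested
    # the svm flag confirms AMD-V is visible to this guest's CPUs
    grep -m1 -o svm /proc/cpuinfo
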
Dec 05 00:18:39 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Dec 05 00:18:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 05 00:18:39 localhost cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 05 Dec 2025 00:18:39 +0000. Up 6.57 seconds.
Dec 05 00:18:39 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 05 00:18:39 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 05 00:18:39 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp_v0ff7zi.mount: Deactivated successfully.
Dec 05 00:18:39 localhost systemd[1]: Starting Hostname Service...
Dec 05 00:18:39 localhost systemd[1]: Started Hostname Service.
Dec 05 00:18:39 np0005546222.novalocal systemd-hostnamed[857]: Hostname set to <np0005546222.novalocal> (static)
Dec 05 00:18:39 np0005546222.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 05 00:18:39 np0005546222.novalocal systemd[1]: Reached target Preparation for Network.
Dec 05 00:18:39 np0005546222.novalocal systemd[1]: Starting Network Manager...
Dec 05 00:18:39 np0005546222.novalocal NetworkManager[861]: <info>  [1764893919.9897] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec 05 00:18:39 np0005546222.novalocal NetworkManager[861]: <info>  [1764893919.9903] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0021] manager[0x55dab0b5b080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0071] hostname: hostname: using hostnamed
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0071] hostname: static hostname changed from (none) to "np0005546222.novalocal"
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0080] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0180] manager[0x55dab0b5b080]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0181] manager[0x55dab0b5b080]: rfkill: WWAN hardware radio set enabled
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0247] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0247] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0248] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0250] manager: Networking is enabled by state file
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0253] settings: Loaded settings plugin: keyfile (internal)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0270] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0311] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
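
NetworkManager names its own remedy for the deprecation warning: convert the legacy ifcfg-rh profiles under /etc/sysconfig/network-scripts to the keyfile format. On releases that ship the subcommand (1.36 and later, as here):

    # rewrite ifcfg-* profiles as keyfiles under /etc/NetworkManager/system-connections
    nmcli connection migrate
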
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0330] dhcp: init: Using DHCP client 'internal'
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0335] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0361] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0376] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0393] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0410] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0417] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Started Network Manager.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0474] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Reached target Network.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0487] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0490] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0493] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0496] device (eth0): carrier: link connected
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0501] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0512] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0522] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0530] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0531] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0536] manager: NetworkManager state is now CONNECTING
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0538] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0551] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0557] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0611] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0625] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0661] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0692] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0696] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Reached target NFS client services.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0706] device (lo): Activation: successful, device activated.
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Reached target Remote File Systems.
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0760] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0762] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0765] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0767] device (eth0): Activation: successful, device activated.
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0772] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 00:18:40 np0005546222.novalocal NetworkManager[861]: <info>  [1764893920.0774] manager: startup complete
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Finished Network Manager Wait Online.
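
eth0 went from unmanaged to activated in well under a second: the internal DHCP client obtained 38.102.83.176 and the 'System eth0' profile became the IPv4 default for routing and DNS. The resulting state can be confirmed with:

    # overall NetworkManager state (expect "connected")
    nmcli general status
    # addresses, gateway and DNS that were applied to eth0
    nmcli device show eth0
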
Dec 05 00:18:40 np0005546222.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 05 Dec 2025 00:18:40 +0000. Up 7.51 seconds.
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |  eth0  | True |        38.102.83.176         | 255.255.255.0 | global | fa:16:3e:86:26:59 |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |  eth0  | True | fe80::f816:3eff:fe86:2659/64 |       .       |  link  | fa:16:3e:86:26:59 |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 05 00:18:40 np0005546222.novalocal cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
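
Route 2 in the IPv4 table above is worth a note: a /32 host route to 169.254.169.254 via 38.102.83.126 is how OpenStack typically publishes its metadata service to instances, delivered as a DHCP classless static route. To see which next hop metadata traffic will actually use:

    # resolve the routing decision for the metadata address
    ip route get 169.254.169.254
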
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: new group: name=cloud-user, GID=1001
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: add 'cloud-user' to group 'adm'
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: add 'cloud-user' to group 'systemd-journal'
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: add 'cloud-user' to shadow group 'adm'
Dec 05 00:18:41 np0005546222.novalocal useradd[991]: add 'cloud-user' to shadow group 'systemd-journal'
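
The useradd records come from cloud-init's user setup. Purely as an illustration (this is not the user-data this instance actually booted with), a cloud-config fragment that yields a user shaped like this looks roughly like:

    #cloud-config
    users:
      - name: cloud-user
        groups: [adm, systemd-journal]
        shell: /bin/bash
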
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Generating public/private rsa key pair.
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key fingerprint is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: SHA256:tUePpzzRHZgyI++cXeT8lDATfBnObToXhC99+x2g1LY root@np0005546222.novalocal
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key's randomart image is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +---[RSA 3072]----+
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |            ...oo|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |             oBo.|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |         ..+o*+*o|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |         .o++OB*=|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |        S o.* X=*|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |          o+oE.*.|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |           ++.  =|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |             .  o|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |                 |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +----[SHA256]-----+
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Generating public/private ecdsa key pair.
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key fingerprint is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: SHA256:zjEmhd+SR45MSS774Hti793KOBQQmEvPoyBkW8m9HI8 root@np0005546222.novalocal
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key's randomart image is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +---[ECDSA 256]---+
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |  . oo...        |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: | o ++o.+ .       |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |o o..+B.= .      |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |... .E+O.*       |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: | . . .+.S.+      |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |    .. B.=       |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |      ..+        |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |      o.o+ .     |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |     ..=+.+..    |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +----[SHA256]-----+
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Generating public/private ed25519 key pair.
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key fingerprint is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: SHA256:McTQA/aXxjNH1RPfUIGoMMKutUrrMSACe0Q1FA8WXqk root@np0005546222.novalocal
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: The key's randomart image is:
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +--[ED25519 256]--+
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |  .oXo=*.   o.+=+|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: | . o O.=+. + . o+|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |. . o.o *.O .   +|
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |.o  Eo   * +     |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |+.. o . S        |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |o..o .           |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |  .oo            |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |   oo            |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: |  ..             |
Dec 05 00:18:41 np0005546222.novalocal cloud-init[925]: +----[SHA256]-----+
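
cloud-init generated fresh RSA, ECDSA and ED25519 host keys and printed their SHA256 fingerprints (repeated in the console block further down). They can be re-derived at any time to verify a first connection to this host:

    # print the SHA256 fingerprint of each generated host key
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
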
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Reached target Network is Online.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting System Logging Service...
Dec 05 00:18:41 np0005546222.novalocal sm-notify[1007]: Version 2.5.4 starting
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting Permit User Sessions...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Finished Permit User Sessions.
Dec 05 00:18:41 np0005546222.novalocal sshd[1009]: Server listening on 0.0.0.0 port 22.
Dec 05 00:18:41 np0005546222.novalocal sshd[1009]: Server listening on :: port 22.
Dec 05 00:18:41 np0005546222.novalocal rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Dec 05 00:18:41 np0005546222.novalocal rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started Command Scheduler.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started Getty on tty1.
Dec 05 00:18:41 np0005546222.novalocal crond[1012]: (CRON) STARTUP (1.5.7)
Dec 05 00:18:41 np0005546222.novalocal crond[1012]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 05 00:18:41 np0005546222.novalocal crond[1012]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 58% if used.)
Dec 05 00:18:41 np0005546222.novalocal crond[1012]: (CRON) INFO (running with inotify support)
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Reached target Login Prompts.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Started System Logging Service.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Reached target Multi-User System.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 05 00:18:41 np0005546222.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 05 00:18:41 np0005546222.novalocal rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 00:18:42 np0005546222.novalocal kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Dec 05 00:18:42 np0005546222.novalocal kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
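
kdumpctl found no crash initramfs for the running kernel and kicks off a dracut rebuild (the long "Executing: /usr/bin/dracut ..." line below); the memory it loads into was reserved at boot via the crashkernel= parameter. Once the rebuild completes:

    # confirm the crash kernel is loaded and kdump is operational
    kdumpctl status
    # bytes of RAM reserved for the crash kernel
    cat /sys/kernel/kexec_crash_size
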
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1147]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 05 Dec 2025 00:18:42 +0000. Up 9.30 seconds.
Dec 05 00:18:42 np0005546222.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1194]: Unable to negotiate with 38.102.83.114 port 35610: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 05 00:18:42 np0005546222.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1206]: Connection reset by 38.102.83.114 port 35616 [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1222]: Unable to negotiate with 38.102.83.114 port 35624: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1227]: Unable to negotiate with 38.102.83.114 port 35626: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1174]: Connection closed by 38.102.83.114 port 35606 [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1261]: Unable to negotiate with 38.102.83.114 port 35654: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1272]: Unable to negotiate with 38.102.83.114 port 35656: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 05 00:18:42 np0005546222.novalocal dracut[1286]: dracut-057-102.git20250818.el9
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1234]: Connection closed by 38.102.83.114 port 35642 [preauth]
Dec 05 00:18:42 np0005546222.novalocal sshd-session[1245]: Connection closed by 38.102.83.114 port 35650 [preauth]
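
The burst of "no matching host key type found" messages is a single client (38.102.83.114) probing one host key algorithm per connection; legacy offers such as ssh-rsa and ssh-dss are rejected under RHEL 9's default crypto policy, which drops SHA-1 signatures and DSA outright. What the local OpenSSH will actually negotiate can be listed directly:

    # key algorithms this OpenSSH build supports at all
    ssh -Q key
    # effective server-side list after crypto-policy filtering (run as root)
    sshd -T | grep -i hostkeyalgorithms
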
Dec 05 00:18:42 np0005546222.novalocal dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1328]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 05 Dec 2025 00:18:42 +0000. Up 9.67 seconds.
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1357]: #############################################################
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1359]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1361]: 256 SHA256:zjEmhd+SR45MSS774Hti793KOBQQmEvPoyBkW8m9HI8 root@np0005546222.novalocal (ECDSA)
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1363]: 256 SHA256:McTQA/aXxjNH1RPfUIGoMMKutUrrMSACe0Q1FA8WXqk root@np0005546222.novalocal (ED25519)
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1368]: 3072 SHA256:tUePpzzRHZgyI++cXeT8lDATfBnObToXhC99+x2g1LY root@np0005546222.novalocal (RSA)
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1369]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1370]: #############################################################
Dec 05 00:18:42 np0005546222.novalocal cloud-init[1328]: Cloud-init v. 24.4-7.el9 finished at Fri, 05 Dec 2025 00:18:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.86 seconds
Dec 05 00:18:42 np0005546222.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 05 00:18:42 np0005546222.novalocal systemd[1]: Reached target Cloud-init target.
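
cloud-init finished all four stages in under ten seconds using the ConfigDrive datasource on /dev/sr0 (the ISO 9660 volume noted earlier). Assuming the standard cloud-init CLI is available, two commands summarize the run:

    # final status across the init-local, init, config and final stages
    cloud-init status --long
    # per-stage timing breakdown of this boot
    cloud-init analyze show
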
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: memstrack is not available
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 05 00:18:43 np0005546222.novalocal dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: memstrack is not available
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: *** Including module: systemd ***
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: *** Including module: fips ***
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: *** Including module: systemd-initrd ***
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: *** Including module: i18n ***
Dec 05 00:18:44 np0005546222.novalocal dracut[1288]: *** Including module: drm ***
Dec 05 00:18:45 np0005546222.novalocal chronyd[795]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec 05 00:18:45 np0005546222.novalocal chronyd[795]: System clock TAI offset set to 37 seconds
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]: *** Including module: prefixdevname ***
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]: *** Including module: kernel-modules ***
Dec 05 00:18:45 np0005546222.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]: *** Including module: kernel-modules-extra ***
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 05 00:18:45 np0005546222.novalocal dracut[1288]: *** Including module: qemu ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: fstab-sys ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: rootfs-block ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: terminfo ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: udev-rules ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: Skipping udev rule: 91-permissions.rules
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: virtiofs ***
Dec 05 00:18:46 np0005546222.novalocal dracut[1288]: *** Including module: dracut-systemd ***
Dec 05 00:18:47 np0005546222.novalocal chronyd[795]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]: *** Including module: usrmount ***
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]: *** Including module: base ***
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]: *** Including module: fs-lib ***
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]: *** Including module: kdumpbase ***
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:   microcode_ctl module: mangling fw_dir
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel" is ignored
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 05 00:18:47 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]: *** Including module: openssl ***
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]: *** Including module: shutdown ***
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]: *** Including module: squash ***
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]: *** Including modules done ***
Dec 05 00:18:48 np0005546222.novalocal dracut[1288]: *** Installing kernel module dependencies ***
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 30 affinity is now unmanaged
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 05 00:18:48 np0005546222.novalocal irqbalance[786]: IRQ 29 affinity is now unmanaged
Dec 05 00:18:49 np0005546222.novalocal dracut[1288]: *** Installing kernel module dependencies done ***
Dec 05 00:18:49 np0005546222.novalocal dracut[1288]: *** Resolving executable dependencies ***
Dec 05 00:18:50 np0005546222.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: *** Resolving executable dependencies done ***
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: *** Generating early-microcode cpio image ***
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: *** Store current command line parameters ***
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: Stored kernel commandline:
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Dec 05 00:18:51 np0005546222.novalocal dracut[1288]: *** Install squash loader ***
Dec 05 00:18:52 np0005546222.novalocal dracut[1288]: *** Squashing the files inside the initramfs ***
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: *** Squashing the files inside the initramfs done ***
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: *** Hardlinking files ***
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Mode:           real
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Files:          50
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Linked:         0 files
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Compared:       0 xattrs
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Compared:       0 files
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Saved:          0 B
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: Duration:       0.000927 seconds
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: *** Hardlinking files done ***
Dec 05 00:18:53 np0005546222.novalocal dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 05 00:18:54 np0005546222.novalocal kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Dec 05 00:18:54 np0005546222.novalocal kdumpctl[1020]: kdump: Starting kdump: [OK]
Dec 05 00:18:54 np0005546222.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 05 00:18:54 np0005546222.novalocal systemd[1]: Startup finished in 2.087s (kernel) + 2.537s (initrd) + 16.769s (userspace) = 21.395s.
Dec 05 00:18:56 np0005546222.novalocal sshd-session[4297]: Accepted publickey for zuul from 38.102.83.114 port 34268 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 05 00:18:56 np0005546222.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 05 00:18:56 np0005546222.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 05 00:18:56 np0005546222.novalocal systemd-logind[792]: New session 1 of user zuul.
Dec 05 00:18:56 np0005546222.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 05 00:18:56 np0005546222.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Queued start job for default target Main User Target.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Created slice User Application Slice.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Reached target Paths.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Reached target Timers.
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec 05 00:18:56 np0005546222.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Reached target Sockets.
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Reached target Basic System.
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Reached target Main User Target.
Dec 05 00:18:57 np0005546222.novalocal systemd[4301]: Startup finished in 114ms.
Dec 05 00:18:57 np0005546222.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 05 00:18:57 np0005546222.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 05 00:18:57 np0005546222.novalocal sshd-session[4297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:18:57 np0005546222.novalocal python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:19:00 np0005546222.novalocal python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:19:06 np0005546222.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:19:07 np0005546222.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 05 00:19:08 np0005546222.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrke2nNI7LZZcrV3DsdyMSIR4c6KfqG/Om714fquYAFbh1UWyn5oMHsKsvzgsrOIcaGmNYOpXmgcRacQSrOFDvb+xChD7lS8+fQBxFPtuZUoI1pMyWDDnKHr1t9ZABIurzBz2x+fMUQ7vpMBANf3KQwUowFL1piEDrThsoTDWM/RqfkQcwYjWuqLci1YcDlCCOf+xEgOQbX0YjS0LMpk7WURouYANIkIIoKnbXtkKfylX2rW/ZPtpCFKLORsNs5QCXSITbkfr8npItpelyDo0Wu3HbLKsR6tip36RnB+aso4Dm9OnAPxtQ17bNtiLUQHuqLiYMirkjizswukaynpECYtGzccW+QGEcmnZ6TmG9uxVqFhGZmt1c6WQWbFDmOPISeov6LQc6Tgg9OllMZT1bpQQFDE9jjVJxaIhQkam2w7eawimiQa17Rl6EScnamNQFx2E9m5UNxdqy4OY95y4Cy5w/qaJbtcESxO91Qch5DA6el+D1ayPiyj1A31ugLkM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:09 np0005546222.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:09 np0005546222.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:10 np0005546222.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 00:19:10 np0005546222.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764893949.5651255-207-167469590867975/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3a75b7ae28fe48ff8d276b97cada9e67_id_rsa follow=False checksum=a05cea211c85eb88e6672f4e1f9d0017264e88e8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:10 np0005546222.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:11 np0005546222.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764893950.5522387-240-132846170932382/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3a75b7ae28fe48ff8d276b97cada9e67_id_rsa.pub follow=False checksum=f80602560ff0022bd9c0fa6a603c5552a0d66a17 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:12 np0005546222.novalocal python3[4973]: ansible-ping Invoked with data=pong
Dec 05 00:19:13 np0005546222.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:19:14 np0005546222.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 05 00:19:15 np0005546222.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:15 np0005546222.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:16 np0005546222.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:16 np0005546222.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:16 np0005546222.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:16 np0005546222.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:18 np0005546222.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptptbdrmrociocyvabxbmcvhgwyxppcb ; /usr/bin/python3'
Dec 05 00:19:18 np0005546222.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:18 np0005546222.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:18 np0005546222.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:18 np0005546222.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diwmotmrmdolyqsbbxedjiacozcyjvdv ; /usr/bin/python3'
Dec 05 00:19:18 np0005546222.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:18 np0005546222.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:18 np0005546222.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:19 np0005546222.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiscfrsdjhwxqtmthshgytaoceuzgojt ; /usr/bin/python3'
Dec 05 00:19:19 np0005546222.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:19 np0005546222.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764893958.4652362-21-24959342436566/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:19 np0005546222.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:19 np0005546222.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:20 np0005546222.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:20 np0005546222.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:20 np0005546222.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:20 np0005546222.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:21 np0005546222.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:21 np0005546222.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:21 np0005546222.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:22 np0005546222.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:22 np0005546222.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:22 np0005546222.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:22 np0005546222.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:23 np0005546222.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:23 np0005546222.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:23 np0005546222.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:23 np0005546222.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:24 np0005546222.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:24 np0005546222.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:24 np0005546222.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:25 np0005546222.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:25 np0005546222.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:25 np0005546222.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:25 np0005546222.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:26 np0005546222.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:26 np0005546222.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:26 np0005546222.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:19:32 np0005546222.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwaxikwuninsqnzdvzebppwhjcyeggon ; /usr/bin/python3'
Dec 05 00:19:32 np0005546222.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:32 np0005546222.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 00:19:32 np0005546222.novalocal systemd[1]: Starting Time & Date Service...
Dec 05 00:19:32 np0005546222.novalocal systemd[1]: Started Time & Date Service.
Dec 05 00:19:32 np0005546222.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec 05 00:19:32 np0005546222.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:32 np0005546222.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrdarwbmalwlddbktxsnvkqlzsejvazb ; /usr/bin/python3'
Dec 05 00:19:32 np0005546222.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:32 np0005546222.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:33 np0005546222.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:33 np0005546222.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:33 np0005546222.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764893973.175383-153-226596726729127/source _original_basename=tmpwuu27w03 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:34 np0005546222.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:34 np0005546222.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764893973.9798925-183-263147300137375/source _original_basename=tmp5gh9pb2w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:35 np0005546222.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpadzcuwusfpmqqzlqkiaclvfuxyprlj ; /usr/bin/python3'
Dec 05 00:19:35 np0005546222.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:35 np0005546222.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:35 np0005546222.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:35 np0005546222.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdfawyzgndbyioaezmpnvdmsqiywopil ; /usr/bin/python3'
Dec 05 00:19:35 np0005546222.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:35 np0005546222.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764893974.97921-231-53259489605934/source _original_basename=tmpu3qw08j2 follow=False checksum=8e0e434468aa50922357fbdb56d8b197f48f0949 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:35 np0005546222.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:36 np0005546222.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:19:36 np0005546222.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:19:36 np0005546222.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtohkxihzzyomkzldlomxsuqslajzbp ; /usr/bin/python3'
Dec 05 00:19:36 np0005546222.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:36 np0005546222.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:19:36 np0005546222.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:37 np0005546222.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwcuamhoeeabobedkkzuurayxoouusp ; /usr/bin/python3'
Dec 05 00:19:37 np0005546222.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:37 np0005546222.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764893976.5276375-273-32351702820715/source _original_basename=tmps1ps58bj follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:37 np0005546222.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:37 np0005546222.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjevuowyoesafbuhrgdiqxhczrjtkrk ; /usr/bin/python3'
Dec 05 00:19:37 np0005546222.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:37 np0005546222.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-cf93-3d31-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:19:37 np0005546222.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
Dec 05 00:19:38 np0005546222.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ec2-ffbe-cf93-3d31-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 05 00:19:39 np0005546222.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:59 np0005546222.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtypogvflhijqxwatrnfyjyejjmoggp ; /usr/bin/python3'
Dec 05 00:19:59 np0005546222.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:19:59 np0005546222.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:19:59 np0005546222.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Dec 05 00:20:02 np0005546222.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 05 00:20:42 np0005546222.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 05 00:20:42 np0005546222.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3351] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 00:20:42 np0005546222.novalocal systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3566] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3615] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3620] device (eth1): carrier: link connected
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3623] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3634] policy: auto-activating connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3643] device (eth1): Activation: starting connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3644] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3649] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3655] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:20:42 np0005546222.novalocal NetworkManager[861]: <info>  [1764894042.3662] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:20:43 np0005546222.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-6e21-e576-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:20:53 np0005546222.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcxosvpelcoqzpjieunuqlgvhmqpdqdk ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 00:20:53 np0005546222.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:20:53 np0005546222.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:20:53 np0005546222.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Dec 05 00:20:53 np0005546222.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gctkaripnfkvgetwkfgopcjxpqbydbig ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 00:20:53 np0005546222.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:20:53 np0005546222.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764894052.9167798-102-66769513048676/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=f07f1bcbce7c2f75e7f9492c34b4635a0841af8e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:20:53 np0005546222.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Dec 05 00:20:54 np0005546222.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgtagdbrclzpsmfjprjfnqplpchyhyhd ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 00:20:54 np0005546222.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:20:54 np0005546222.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Stopping Network Manager...
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4074] caught SIGTERM, shutting down normally.
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4092] dhcp4 (eth0): canceled DHCP transaction
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4093] dhcp4 (eth0): state changed no lease
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4097] manager: NetworkManager state is now CONNECTING
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4149] dhcp4 (eth1): canceled DHCP transaction
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4150] dhcp4 (eth1): state changed no lease
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[861]: <info>  [1764894054.4217] exiting (success)
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Stopped Network Manager.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: NetworkManager.service: Consumed 1.088s CPU time, 10.0M memory peak.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Starting Network Manager...
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.4671] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.4676] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.4734] manager[0x5600f85c7070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Starting Hostname Service...
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Started Hostname Service.
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5946] hostname: hostname: using hostnamed
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5948] hostname: static hostname changed from (none) to "np0005546222.novalocal"
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5955] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5961] manager[0x5600f85c7070]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5961] manager[0x5600f85c7070]: rfkill: WWAN hardware radio set enabled
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.5999] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6000] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6000] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6001] manager: Networking is enabled by state file
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6004] settings: Loaded settings plugin: keyfile (internal)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6008] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6037] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6051] dhcp: init: Using DHCP client 'internal'
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6054] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6060] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6067] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6077] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6085] device (eth0): carrier: link connected
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6090] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6097] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6098] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6107] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6116] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6125] device (eth1): carrier: link connected
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6130] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6136] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4) (indicated)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6136] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6144] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6152] device (eth1): Activation: starting connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6160] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Started Network Manager.
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6178] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6185] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6188] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6192] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6196] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6201] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6205] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6210] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6220] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6225] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6243] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6247] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6270] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6279] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6288] device (lo): Activation: successful, device activated.
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6300] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6313] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 00:20:54 np0005546222.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6403] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6426] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6430] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6436] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6445] device (eth0): Activation: successful, device activated.
Dec 05 00:20:54 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894054.6453] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 00:20:54 np0005546222.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Dec 05 00:20:54 np0005546222.novalocal python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-6e21-e576-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:21:04 np0005546222.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:21:16 np0005546222.novalocal systemd[4301]: Starting Mark boot as successful...
Dec 05 00:21:16 np0005546222.novalocal systemd[4301]: Finished Mark boot as successful.
Dec 05 00:21:24 np0005546222.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8576] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 00:21:39 np0005546222.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:21:39 np0005546222.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8971] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8972] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8978] device (eth1): Activation: successful, device activated.
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8983] manager: startup complete
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8986] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <warn>  [1764894099.8989] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.8996] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9089] dhcp4 (eth1): canceled DHCP transaction
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9090] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9090] dhcp4 (eth1): state changed no lease
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9105] policy: auto-activating connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9110] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9110] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9113] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9118] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9125] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9159] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9161] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:21:39 np0005546222.novalocal NetworkManager[7183]: <info>  [1764894099.9166] device (eth1): Activation: successful, device activated.
Dec 05 00:21:49 np0005546222.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:21:55 np0005546222.novalocal sshd-session[4310]: Received disconnect from 38.102.83.114 port 34268:11: disconnected by user
Dec 05 00:21:55 np0005546222.novalocal sshd-session[4310]: Disconnected from user zuul 38.102.83.114 port 34268
Dec 05 00:21:55 np0005546222.novalocal sshd-session[4297]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:21:55 np0005546222.novalocal systemd-logind[792]: Session 1 logged out. Waiting for processes to exit.
Dec 05 00:22:01 np0005546222.novalocal sshd-session[7288]: Accepted publickey for zuul from 38.102.83.114 port 53832 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 00:22:01 np0005546222.novalocal systemd-logind[792]: New session 3 of user zuul.
Dec 05 00:22:01 np0005546222.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 05 00:22:01 np0005546222.novalocal sshd-session[7288]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:22:02 np0005546222.novalocal sudo[7367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sydkxewtyezqpxxrcqqnizyrxmibkpkg ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 00:22:02 np0005546222.novalocal sudo[7367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:22:02 np0005546222.novalocal python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:22:02 np0005546222.novalocal sudo[7367]: pam_unix(sudo:session): session closed for user root
Dec 05 00:22:02 np0005546222.novalocal sudo[7440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trkpzsrbxbwtojjzjtfgifxrloiukmwt ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 05 00:22:02 np0005546222.novalocal sudo[7440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:22:02 np0005546222.novalocal python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894122.0749617-267-232678731867454/source _original_basename=tmp8nei87_4 follow=False checksum=f7e92cd384322c1de547c4614a92d0716d6c382e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:22:02 np0005546222.novalocal sudo[7440]: pam_unix(sudo:session): session closed for user root
Dec 05 00:22:04 np0005546222.novalocal sshd-session[7291]: Connection closed by 38.102.83.114 port 53832
Dec 05 00:22:04 np0005546222.novalocal sshd-session[7288]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:22:04 np0005546222.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 05 00:22:04 np0005546222.novalocal systemd-logind[792]: Session 3 logged out. Waiting for processes to exit.
Dec 05 00:22:04 np0005546222.novalocal systemd-logind[792]: Removed session 3.
Dec 05 00:24:16 np0005546222.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Dec 05 00:24:16 np0005546222.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 00:24:16 np0005546222.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 00:28:24 np0005546222.novalocal sshd-session[7470]: Connection reset by authenticating user root 45.140.17.124 port 49484 [preauth]
Dec 05 00:28:26 np0005546222.novalocal sshd-session[7472]: Connection reset by authenticating user root 45.140.17.124 port 49494 [preauth]
Dec 05 00:28:28 np0005546222.novalocal sshd-session[7474]: Connection reset by authenticating user root 45.140.17.124 port 49506 [preauth]
Dec 05 00:28:28 np0005546222.novalocal sshd-session[7476]: Invalid user user from 91.202.233.33 port 60696
Dec 05 00:28:29 np0005546222.novalocal sshd-session[7476]: Connection reset by invalid user user 91.202.233.33 port 60696 [preauth]
Dec 05 00:28:29 np0005546222.novalocal sshd-session[7478]: Invalid user vagrant from 45.140.17.124 port 49512
Dec 05 00:28:29 np0005546222.novalocal sshd-session[7478]: Connection reset by invalid user vagrant 45.140.17.124 port 49512 [preauth]
Dec 05 00:28:31 np0005546222.novalocal sshd-session[7482]: Connection reset by authenticating user root 45.140.17.124 port 49524 [preauth]
Dec 05 00:28:31 np0005546222.novalocal sshd-session[7480]: Connection reset by authenticating user root 91.202.233.33 port 60706 [preauth]
Dec 05 00:28:33 np0005546222.novalocal sshd-session[7484]: Connection reset by authenticating user root 91.202.233.33 port 34966 [preauth]
Dec 05 00:28:35 np0005546222.novalocal sshd-session[7486]: Connection reset by authenticating user root 91.202.233.33 port 34984 [preauth]
Dec 05 00:28:37 np0005546222.novalocal sshd-session[7488]: Connection reset by authenticating user root 91.202.233.33 port 34998 [preauth]
Dec 05 00:29:14 np0005546222.novalocal sshd-session[7491]: Accepted publickey for zuul from 38.102.83.114 port 58112 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 00:29:14 np0005546222.novalocal systemd-logind[792]: New session 4 of user zuul.
Dec 05 00:29:14 np0005546222.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 05 00:29:14 np0005546222.novalocal sshd-session[7491]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:29:14 np0005546222.novalocal sudo[7518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikylbctuotqxenpettrjlpiugjghsub ; /usr/bin/python3'
Dec 05 00:29:14 np0005546222.novalocal sudo[7518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:14 np0005546222.novalocal python3[7520]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ec2-ffbe-59a0-e257-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:14 np0005546222.novalocal sudo[7518]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:14 np0005546222.novalocal sudo[7547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxkdtaithxbceklbdoksfmhqzkchnxxo ; /usr/bin/python3'
Dec 05 00:29:14 np0005546222.novalocal sudo[7547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:14 np0005546222.novalocal python3[7549]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:14 np0005546222.novalocal sudo[7547]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:14 np0005546222.novalocal sudo[7573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyehbwfkehemdccumdqgwiwdjjxvrnqa ; /usr/bin/python3'
Dec 05 00:29:14 np0005546222.novalocal sudo[7573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:14 np0005546222.novalocal python3[7575]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:14 np0005546222.novalocal sudo[7573]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:15 np0005546222.novalocal sudo[7599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbbgbyvukolexuvykdymhmmwdgjdllsd ; /usr/bin/python3'
Dec 05 00:29:15 np0005546222.novalocal sudo[7599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:15 np0005546222.novalocal python3[7601]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:15 np0005546222.novalocal sudo[7599]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:15 np0005546222.novalocal sudo[7625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqbnrnkgskphdnkjfjfxxeityygwnuw ; /usr/bin/python3'
Dec 05 00:29:15 np0005546222.novalocal sudo[7625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:15 np0005546222.novalocal python3[7627]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:15 np0005546222.novalocal sudo[7625]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:15 np0005546222.novalocal sudo[7651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqoveunvayekppipkjlzeknofttepqo ; /usr/bin/python3'
Dec 05 00:29:15 np0005546222.novalocal sudo[7651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:15 np0005546222.novalocal python3[7653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:15 np0005546222.novalocal sudo[7651]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:16 np0005546222.novalocal sudo[7729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pispidkryvnpaspmjussqjznxadnkdei ; /usr/bin/python3'
Dec 05 00:29:16 np0005546222.novalocal sudo[7729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:16 np0005546222.novalocal python3[7731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:29:16 np0005546222.novalocal sudo[7729]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:16 np0005546222.novalocal sudo[7802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwudogtuxgfpsxpvhitkhwjtfrkzeipx ; /usr/bin/python3'
Dec 05 00:29:16 np0005546222.novalocal sudo[7802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:16 np0005546222.novalocal python3[7804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894556.1267543-477-99524090553393/source _original_basename=tmp7ryqk7zz follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:29:16 np0005546222.novalocal sudo[7802]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:17 np0005546222.novalocal sudo[7852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcspgpldcoiyjkmyrjovfwhzoumibklg ; /usr/bin/python3'
Dec 05 00:29:17 np0005546222.novalocal sudo[7852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:17 np0005546222.novalocal python3[7854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 00:29:17 np0005546222.novalocal systemd[1]: Reloading.
Dec 05 00:29:17 np0005546222.novalocal systemd-rc-local-generator[7877]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:29:18 np0005546222.novalocal sudo[7852]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:19 np0005546222.novalocal sudo[7908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwozmpdzgmvpszgexrthadrwybdbrngr ; /usr/bin/python3'
Dec 05 00:29:19 np0005546222.novalocal sudo[7908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:19 np0005546222.novalocal python3[7910]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 05 00:29:19 np0005546222.novalocal sudo[7908]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:20 np0005546222.novalocal sudo[7934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkrsarpsuesbxrcousiygitmkkpswhv ; /usr/bin/python3'
Dec 05 00:29:20 np0005546222.novalocal sudo[7934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:20 np0005546222.novalocal python3[7936]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:20 np0005546222.novalocal sudo[7934]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:20 np0005546222.novalocal sudo[7962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wztgsjzapmxqdrzkjemocxrtcpjehojv ; /usr/bin/python3'
Dec 05 00:29:20 np0005546222.novalocal sudo[7962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:20 np0005546222.novalocal python3[7964]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:20 np0005546222.novalocal sudo[7962]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:20 np0005546222.novalocal sudo[7990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdznjwzbwtkodsrjjldperloeozdldhc ; /usr/bin/python3'
Dec 05 00:29:20 np0005546222.novalocal sudo[7990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:20 np0005546222.novalocal python3[7992]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:20 np0005546222.novalocal sudo[7990]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:21 np0005546222.novalocal sudo[8018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upwghqxgytrwslmaiyeugerghxipisfo ; /usr/bin/python3'
Dec 05 00:29:21 np0005546222.novalocal sudo[8018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:21 np0005546222.novalocal python3[8020]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:21 np0005546222.novalocal sudo[8018]: pam_unix(sudo:session): session closed for user root
Dec 05 00:29:21 np0005546222.novalocal python3[8047]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-59a0-e257-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:29:22 np0005546222.novalocal python3[8077]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 00:29:24 np0005546222.novalocal sshd-session[7494]: Connection closed by 38.102.83.114 port 58112
Dec 05 00:29:24 np0005546222.novalocal sshd-session[7491]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:29:24 np0005546222.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 05 00:29:24 np0005546222.novalocal systemd[1]: session-4.scope: Consumed 4.207s CPU time.
Dec 05 00:29:24 np0005546222.novalocal systemd-logind[792]: Session 4 logged out. Waiting for processes to exit.
Dec 05 00:29:24 np0005546222.novalocal systemd-logind[792]: Removed session 4.
Dec 05 00:29:25 np0005546222.novalocal sshd-session[8084]: Accepted publickey for zuul from 38.102.83.114 port 60984 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 00:29:25 np0005546222.novalocal systemd-logind[792]: New session 5 of user zuul.
Dec 05 00:29:25 np0005546222.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 05 00:29:25 np0005546222.novalocal sshd-session[8084]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:29:25 np0005546222.novalocal sudo[8111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qixerlzcxocuaoiflmqacygpnxosmcvh ; /usr/bin/python3'
Dec 05 00:29:25 np0005546222.novalocal sudo[8111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:29:25 np0005546222.novalocal python3[8113]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:29:38 np0005546222.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:29:47 np0005546222.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:29:56 np0005546222.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:29:57 np0005546222.novalocal setsebool[8176]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 05 00:29:57 np0005546222.novalocal setsebool[8176]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:30:08 np0005546222.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:30:26 np0005546222.novalocal dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 05 00:30:26 np0005546222.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:30:26 np0005546222.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:30:26 np0005546222.novalocal systemd[1]: Reloading.
Dec 05 00:30:26 np0005546222.novalocal systemd-rc-local-generator[8931]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:30:26 np0005546222.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:30:27 np0005546222.novalocal sudo[8111]: pam_unix(sudo:session): session closed for user root
Dec 05 00:30:36 np0005546222.novalocal python3[15501]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ec2-ffbe-1d2a-bece-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:30:37 np0005546222.novalocal kernel: evm: overlay not supported
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Dec 05 00:30:37 np0005546222.novalocal dbus-broker-launch[15949]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 05 00:30:37 np0005546222.novalocal dbus-broker-launch[15949]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: Started D-Bus User Message Bus.
Dec 05 00:30:37 np0005546222.novalocal dbus-broker-lau[15949]: Ready
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: Created slice Slice /user.
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: podman-15881.scope: unit configures an IP firewall, but not running as root.
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: Started podman-15881.scope.
Dec 05 00:30:37 np0005546222.novalocal systemd[4301]: Started podman-pause-59ff2333.scope.
Dec 05 00:30:38 np0005546222.novalocal sudo[16282]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqkdcpzoslwgdmuqvvrotijngspkhtu ; /usr/bin/python3'
Dec 05 00:30:38 np0005546222.novalocal sudo[16282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:30:38 np0005546222.novalocal python3[16291]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]] location = "38.129.56.107:5001" insecure = true path=/etc/containers/registries.conf block=[[registry]] location = "38.129.56.107:5001" insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:30:38 np0005546222.novalocal python3[16291]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 05 00:30:38 np0005546222.novalocal sudo[16282]: pam_unix(sudo:session): session closed for user root
Dec 05 00:30:38 np0005546222.novalocal sshd-session[8087]: Connection closed by 38.102.83.114 port 60984
Dec 05 00:30:38 np0005546222.novalocal sshd-session[8084]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:30:38 np0005546222.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 05 00:30:38 np0005546222.novalocal systemd[1]: session-5.scope: Consumed 59.087s CPU time.
Dec 05 00:30:38 np0005546222.novalocal systemd-logind[792]: Session 5 logged out. Waiting for processes to exit.
Dec 05 00:30:38 np0005546222.novalocal systemd-logind[792]: Removed session 5.
Dec 05 00:30:48 np0005546222.novalocal irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 05 00:30:48 np0005546222.novalocal irqbalance[786]: IRQ 27 affinity is now unmanaged
Dec 05 00:30:56 np0005546222.novalocal sshd-session[23596]: Connection closed by 38.102.83.179 port 38530 [preauth]
Dec 05 00:30:56 np0005546222.novalocal sshd-session[23600]: Connection closed by 38.102.83.179 port 38514 [preauth]
Dec 05 00:30:56 np0005546222.novalocal sshd-session[23595]: Unable to negotiate with 38.102.83.179 port 38544: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 05 00:30:56 np0005546222.novalocal sshd-session[23597]: Unable to negotiate with 38.102.83.179 port 38546: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 05 00:30:56 np0005546222.novalocal sshd-session[23599]: Unable to negotiate with 38.102.83.179 port 38556: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 05 00:31:01 np0005546222.novalocal sshd-session[25251]: Accepted publickey for zuul from 38.102.83.114 port 58080 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 00:31:01 np0005546222.novalocal systemd-logind[792]: New session 6 of user zuul.
Dec 05 00:31:01 np0005546222.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 05 00:31:01 np0005546222.novalocal sshd-session[25251]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:31:01 np0005546222.novalocal python3[25367]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:31:02 np0005546222.novalocal sudo[25581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygaxbyggmjkgbghsrxzorpjchofwtnuv ; /usr/bin/python3'
Dec 05 00:31:02 np0005546222.novalocal sudo[25581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:02 np0005546222.novalocal python3[25591]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:31:02 np0005546222.novalocal sudo[25581]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:03 np0005546222.novalocal sudo[26015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsyiocypfuncbgqkrwdalymofpfrlpru ; /usr/bin/python3'
Dec 05 00:31:03 np0005546222.novalocal sudo[26015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:03 np0005546222.novalocal python3[26023]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546222.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 05 00:31:03 np0005546222.novalocal useradd[26082]: new group: name=cloud-admin, GID=1002
Dec 05 00:31:03 np0005546222.novalocal useradd[26082]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 05 00:31:03 np0005546222.novalocal sudo[26015]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:03 np0005546222.novalocal sudo[26206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkdvzabubksgrjzvzxdgzphjveotxjf ; /usr/bin/python3'
Dec 05 00:31:03 np0005546222.novalocal sudo[26206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:03 np0005546222.novalocal python3[26217]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 05 00:31:03 np0005546222.novalocal sudo[26206]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:03 np0005546222.novalocal sudo[26437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhgpmywctmvlgfafyqjlccnvpmhzwsis ; /usr/bin/python3'
Dec 05 00:31:03 np0005546222.novalocal sudo[26437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:04 np0005546222.novalocal python3[26447]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:31:04 np0005546222.novalocal sudo[26437]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:04 np0005546222.novalocal sudo[26674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghkusnhgnkngyyysmglwqwlkcfhpmok ; /usr/bin/python3'
Dec 05 00:31:04 np0005546222.novalocal sudo[26674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:04 np0005546222.novalocal python3[26688]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894663.824713-135-161331014616792/source _original_basename=tmp6p6l9uxd follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:31:04 np0005546222.novalocal sudo[26674]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:05 np0005546222.novalocal sudo[27009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqmdgcibfmgayuewxwogpemyemudbreh ; /usr/bin/python3'
Dec 05 00:31:05 np0005546222.novalocal sudo[27009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:31:05 np0005546222.novalocal python3[27016]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 05 00:31:05 np0005546222.novalocal systemd[1]: Starting Hostname Service...
Dec 05 00:31:05 np0005546222.novalocal systemd[1]: Started Hostname Service.
Dec 05 00:31:05 np0005546222.novalocal systemd-hostnamed[27112]: Changed pretty hostname to 'compute-0'
Dec 05 00:31:05 compute-0 systemd-hostnamed[27112]: Hostname set to <compute-0> (static)
Dec 05 00:31:05 compute-0 NetworkManager[7183]: <info>  [1764894665.5333] hostname: static hostname changed from "np0005546222.novalocal" to "compute-0"
Dec 05 00:31:05 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:31:05 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:31:05 compute-0 sudo[27009]: pam_unix(sudo:session): session closed for user root
Dec 05 00:31:06 compute-0 sshd-session[25311]: Connection closed by 38.102.83.114 port 58080
Dec 05 00:31:06 compute-0 sshd-session[25251]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:31:06 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 05 00:31:06 compute-0 systemd[1]: session-6.scope: Consumed 2.183s CPU time.
Dec 05 00:31:06 compute-0 systemd-logind[792]: Session 6 logged out. Waiting for processes to exit.
Dec 05 00:31:06 compute-0 systemd-logind[792]: Removed session 6.
Dec 05 00:31:08 compute-0 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 05 00:31:08 compute-0 irqbalance[786]: IRQ 26 affinity is now unmanaged
Dec 05 00:31:14 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:31:14 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:31:14 compute-0 systemd[1]: man-db-cache-update.service: Consumed 57.282s CPU time.
Dec 05 00:31:14 compute-0 systemd[1]: run-rabd214ef511a41589295ad367d7d3a2d.service: Deactivated successfully.
Dec 05 00:31:15 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:31:35 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 00:34:16 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 05 00:34:16 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 05 00:34:16 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 05 00:34:16 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 05 00:36:10 compute-0 sshd-session[29996]: Accepted publickey for zuul from 38.102.83.179 port 50474 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 00:36:10 compute-0 systemd-logind[792]: New session 7 of user zuul.
Dec 05 00:36:10 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 05 00:36:10 compute-0 sshd-session[29996]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:36:11 compute-0 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:36:12 compute-0 sudo[30186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amlhfxnsxxieliyhqrikknvakljzpccb ; /usr/bin/python3'
Dec 05 00:36:12 compute-0 sudo[30186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:12 compute-0 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:12 compute-0 sudo[30186]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:13 compute-0 sudo[30259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtujxmfededuqnaiaawukhovwnnioykd ; /usr/bin/python3'
Dec 05 00:36:13 compute-0 sudo[30259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:13 compute-0 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:13 compute-0 sudo[30259]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:13 compute-0 sudo[30285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttfdkybhnkdehxrcphrrejsacuhhjpwz ; /usr/bin/python3'
Dec 05 00:36:13 compute-0 sudo[30285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:13 compute-0 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:13 compute-0 sudo[30285]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:13 compute-0 sudo[30358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvuwujsvhakqzzhfxopdnanhwivxjcyd ; /usr/bin/python3'
Dec 05 00:36:13 compute-0 sudo[30358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:14 compute-0 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:14 compute-0 sudo[30358]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:14 compute-0 sudo[30384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhshpktybbjtfbsqywrzjgglpobgxlv ; /usr/bin/python3'
Dec 05 00:36:14 compute-0 sudo[30384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:14 compute-0 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:14 compute-0 sudo[30384]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:14 compute-0 sudo[30457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szmcxocpnybyqzqyizcjapvckqcdpcql ; /usr/bin/python3'
Dec 05 00:36:14 compute-0 sudo[30457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:14 compute-0 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:14 compute-0 sudo[30457]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:14 compute-0 sudo[30483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcugatztrbktgviipvobdgwpohqpgzmg ; /usr/bin/python3'
Dec 05 00:36:14 compute-0 sudo[30483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:15 compute-0 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:15 compute-0 sudo[30483]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:15 compute-0 sudo[30556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnbviheurafiwcxxjobftufozqzxfths ; /usr/bin/python3'
Dec 05 00:36:15 compute-0 sudo[30556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:15 compute-0 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:15 compute-0 sudo[30556]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:15 compute-0 sudo[30582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucvpggpflwjwgpusdyjsvqpqmfjcwncz ; /usr/bin/python3'
Dec 05 00:36:15 compute-0 sudo[30582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:15 compute-0 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:15 compute-0 sudo[30582]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:16 compute-0 sudo[30655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxtvbbvrbxecduihoaqvuoxkvtkdxve ; /usr/bin/python3'
Dec 05 00:36:16 compute-0 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:16 compute-0 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:16 compute-0 sudo[30655]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:16 compute-0 sudo[30681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrncphkusrggunnaxdkbdmrfjwujholb ; /usr/bin/python3'
Dec 05 00:36:16 compute-0 sudo[30681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:16 compute-0 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:16 compute-0 sudo[30681]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:16 compute-0 sudo[30754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dghpujwfkrizfrcalipqzlvtysecnddg ; /usr/bin/python3'
Dec 05 00:36:16 compute-0 sudo[30754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:16 compute-0 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:16 compute-0 sudo[30754]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:17 compute-0 sudo[30780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgbwdrablzxaxkwpzfghmsulqrtgxyh ; /usr/bin/python3'
Dec 05 00:36:17 compute-0 sudo[30780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:17 compute-0 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 00:36:17 compute-0 sudo[30780]: pam_unix(sudo:session): session closed for user root
Dec 05 00:36:17 compute-0 sudo[30853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeqadsttecsfmiflevzzdbjdgxuzwkat ; /usr/bin/python3'
Dec 05 00:36:17 compute-0 sudo[30853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:36:17 compute-0 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:36:17 compute-0 sudo[30853]: pam_unix(sudo:session): session closed for user root
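The stat/copy pairs above are ansible.legacy.copy doing its idempotence check: stat the destination, compare the SHA-1 checksum, and copy only on mismatch. A rough shell equivalent of one such pair, where $src and $sum are placeholders rather than the temp paths and checksums in the log:

    dest=/etc/yum.repos.d/delorean.repo.md5
    # Copy only when the destination checksum differs ($src/$sum are placeholders)
    if [ "$(sha1sum "$dest" 2>/dev/null | cut -d' ' -f1)" != "$sum" ]; then
        install -m 0755 "$src" "$dest"
    fi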
Dec 05 00:36:20 compute-0 sshd-session[30880]: Connection closed by 192.168.122.11 port 56468 [preauth]
Dec 05 00:36:20 compute-0 sshd-session[30881]: Unable to negotiate with 192.168.122.11 port 56478: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 05 00:36:20 compute-0 sshd-session[30882]: Unable to negotiate with 192.168.122.11 port 56484: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 05 00:36:20 compute-0 sshd-session[30884]: Connection closed by 192.168.122.11 port 56476 [preauth]
Dec 05 00:36:20 compute-0 sshd-session[30885]: Unable to negotiate with 192.168.122.11 port 56498: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
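These probes from 192.168.122.11 fail at key exchange because the client offers only ssh-ed25519 and FIDO host-key algorithms that this sshd is not serving. If serving an Ed25519 host key were actually wanted, a minimal sketch using standard OpenSSH paths (not taken from this log) would be:

    # Generate an Ed25519 host key and advertise it alongside the existing ones
    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
    echo 'HostKey /etc/ssh/ssh_host_ed25519_key' >> /etc/ssh/sshd_config
    systemctl restart sshd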
Dec 05 00:39:10 compute-0 python3[30913]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:39:47 compute-0 sshd-session[30916]: Invalid user admin from 45.135.232.92 port 39062
Dec 05 00:39:48 compute-0 sshd-session[30916]: Connection reset by invalid user admin 45.135.232.92 port 39062 [preauth]
Dec 05 00:39:49 compute-0 sshd-session[30918]: Invalid user kodi from 45.135.232.92 port 39076
Dec 05 00:39:49 compute-0 sshd-session[30918]: Connection reset by invalid user kodi 45.135.232.92 port 39076 [preauth]
Dec 05 00:39:51 compute-0 sshd-session[30920]: Connection reset by authenticating user root 45.135.232.92 port 39092 [preauth]
Dec 05 00:39:53 compute-0 sshd-session[30922]: Connection reset by authenticating user root 45.135.232.92 port 39112 [preauth]
Dec 05 00:39:55 compute-0 sshd-session[30924]: Connection reset by authenticating user root 45.135.232.92 port 47902 [preauth]
Dec 05 00:42:16 compute-0 systemd[1]: Starting dnf makecache...
Dec 05 00:42:16 compute-0 dnf[30927]: Failed determining last makecache time.
Dec 05 00:42:16 compute-0 dnf[30927]: delorean-openstack-barbican-42b4c41831408a8e323 332 kB/s |  13 kB     00:00
Dec 05 00:42:16 compute-0 dnf[30927]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.9 MB/s |  65 kB     00:00
Dec 05 00:42:16 compute-0 dnf[30927]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.1 MB/s |  32 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-python-stevedore-c4acc5639fd2329372142 3.7 MB/s | 131 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 3.1 MB/s | 349 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.1 MB/s |  42 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-python-designate-tests-tempest-347fdbc 626 kB/s |  18 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-glance-1fd12c29b339f30fe823e 483 kB/s |  18 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.3 MB/s |  29 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-manila-3c01b7181572c95dac462 773 kB/s |  25 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-python-whitebox-neutron-tests-tempest- 6.2 MB/s | 154 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-octavia-ba397f07a7331190208c 823 kB/s |  26 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-watcher-c014f81a8647287f6dcc 526 kB/s |  16 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-ansible-config_template-5ccaa22121a7ff 318 kB/s | 7.4 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 3.8 MB/s | 144 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-swift-dc98a8463506ac520c469a 446 kB/s |  14 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-python-tempestconf-8515371b7cceebd4282 1.7 MB/s |  53 kB     00:00
Dec 05 00:42:17 compute-0 dnf[30927]: delorean-openstack-heat-ui-013accbfd179753bc3f0 2.3 MB/s |  96 kB     00:00
Dec 05 00:42:18 compute-0 dnf[30927]: CentOS Stream 9 - BaseOS                         27 kB/s | 7.3 kB     00:00
Dec 05 00:42:18 compute-0 dnf[30927]: CentOS Stream 9 - AppStream                      81 kB/s | 7.4 kB     00:00
Dec 05 00:42:18 compute-0 dnf[30927]: CentOS Stream 9 - CRB                            70 kB/s | 7.2 kB     00:00
Dec 05 00:42:18 compute-0 dnf[30927]: CentOS Stream 9 - Extras packages                73 kB/s | 8.3 kB     00:00
Dec 05 00:42:18 compute-0 dnf[30927]: dlrn-antelope-testing                            25 MB/s | 1.1 MB     00:00
Dec 05 00:42:19 compute-0 dnf[30927]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Dec 05 00:42:19 compute-0 dnf[30927]: centos9-rabbitmq                                8.5 MB/s | 123 kB     00:00
Dec 05 00:42:19 compute-0 dnf[30927]: centos9-storage                                  27 MB/s | 415 kB     00:00
Dec 05 00:42:19 compute-0 dnf[30927]: centos9-opstools                                4.5 MB/s |  51 kB     00:00
Dec 05 00:42:19 compute-0 dnf[30927]: NFV SIG OpenvSwitch                              18 MB/s | 456 kB     00:00
Dec 05 00:42:20 compute-0 dnf[30927]: repo-setup-centos-appstream                      85 MB/s |  25 MB     00:00
Dec 05 00:42:26 compute-0 dnf[30927]: repo-setup-centos-baseos                         63 MB/s | 8.8 MB     00:00
Dec 05 00:42:27 compute-0 dnf[30927]: repo-setup-centos-highavailability               15 MB/s | 744 kB     00:00
Dec 05 00:42:27 compute-0 dnf[30927]: repo-setup-centos-powertools                     72 MB/s | 7.3 MB     00:00
Dec 05 00:42:30 compute-0 dnf[30927]: Extra Packages for Enterprise Linux 9 - x86_64   27 MB/s |  20 MB     00:00
Dec 05 00:42:43 compute-0 dnf[30927]: Metadata cache created.
Dec 05 00:42:43 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 00:42:43 compute-0 systemd[1]: Finished dnf makecache.
Dec 05 00:42:43 compute-0 systemd[1]: dnf-makecache.service: Consumed 24.477s CPU time.
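dnf-makecache.service walks every enabled repository and rebuilds the metadata cache; "Failed determining last makecache time" just means no previous run was recorded, so the refresh proceeds unconditionally. The same refresh by hand:

    # Unconditional refresh of all enabled repos
    dnf makecache
    # The unit itself passes --timer, which honours metadata_timer_sync in dnf.conf
    dnf makecache --timer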
Dec 05 00:44:09 compute-0 sshd-session[29999]: Received disconnect from 38.102.83.179 port 50474:11: disconnected by user
Dec 05 00:44:09 compute-0 sshd-session[29999]: Disconnected from user zuul 38.102.83.179 port 50474
Dec 05 00:44:09 compute-0 sshd-session[29996]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:44:09 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 05 00:44:09 compute-0 systemd[1]: session-7.scope: Consumed 5.635s CPU time.
Dec 05 00:44:09 compute-0 systemd-logind[792]: Session 7 logged out. Waiting for processes to exit.
Dec 05 00:44:09 compute-0 systemd-logind[792]: Removed session 7.
Dec 05 00:50:18 compute-0 sshd-session[31030]: Invalid user admin from 139.19.117.131 port 49550
Dec 05 00:50:18 compute-0 sshd-session[31030]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Dec 05 00:50:28 compute-0 sshd-session[31030]: Connection closed by invalid user admin 139.19.117.131 port 49550 [preauth]
Dec 05 00:50:56 compute-0 sshd-session[31032]: Accepted publickey for zuul from 192.168.122.30 port 49446 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:50:56 compute-0 systemd-logind[792]: New session 8 of user zuul.
Dec 05 00:50:56 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 05 00:50:56 compute-0 sshd-session[31032]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:50:57 compute-0 python3.9[31185]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:50:59 compute-0 sudo[31364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uonnmkzwznltnvxkzmumybhzilpzisvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895858.6130874-32-264563234240929/AnsiballZ_command.py'
Dec 05 00:50:59 compute-0 sudo[31364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:50:59 compute-0 python3.9[31366]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
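The _raw_params block above is the whole repo-setup bootstrap. Restated as a standalone script, with cd in place of the bash-only pushd/popd and a comment on each step:

    #!/bin/bash
    set -euxo pipefail
    cd /var/tmp
    # Fetch and unpack the repo-setup tool from its main branch
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    cd repo-setup-main
    python3 -m venv ./venv
    # PBR_VERSION sidesteps pbr's git-based version detection in a tarball checkout
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    # Write the current-podified repo files for the antelope branch
    ./venv/bin/repo-setup current-podified -b antelope
    cd /var/tmp
    rm -rf repo-setup-main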
Dec 05 00:51:06 compute-0 sudo[31364]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:06 compute-0 sshd-session[31035]: Connection closed by 192.168.122.30 port 49446
Dec 05 00:51:06 compute-0 sshd-session[31032]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:51:06 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 05 00:51:06 compute-0 systemd[1]: session-8.scope: Consumed 7.855s CPU time.
Dec 05 00:51:06 compute-0 systemd-logind[792]: Session 8 logged out. Waiting for processes to exit.
Dec 05 00:51:06 compute-0 systemd-logind[792]: Removed session 8.
Dec 05 00:51:22 compute-0 sshd-session[31425]: Accepted publickey for zuul from 192.168.122.30 port 58024 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:51:22 compute-0 systemd-logind[792]: New session 9 of user zuul.
Dec 05 00:51:22 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 05 00:51:22 compute-0 sshd-session[31425]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:51:23 compute-0 python3.9[31578]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 00:51:24 compute-0 python3.9[31752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:51:25 compute-0 sudo[31902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcehxjywqmepvdmqyaydqtizniqouwlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895885.049201-45-102542359024512/AnsiballZ_command.py'
Dec 05 00:51:25 compute-0 sudo[31902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:25 compute-0 python3.9[31904]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:51:25 compute-0 sudo[31902]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:26 compute-0 sudo[32055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwcmrzalvdqyrbolrpjmmjwpxtnwkjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895886.0830917-57-24109162231408/AnsiballZ_stat.py'
Dec 05 00:51:26 compute-0 sudo[32055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:26 compute-0 python3.9[32057]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:51:26 compute-0 sudo[32055]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:27 compute-0 sudo[32207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igwzhltgxntgaewojjhpnnbiwjysyzlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895886.8575785-65-103392664886881/AnsiballZ_file.py'
Dec 05 00:51:27 compute-0 sudo[32207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:27 compute-0 python3.9[32209]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:51:27 compute-0 sudo[32207]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:28 compute-0 sudo[32359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwakmboxvixlusgpuhmmcqzzegpgyuaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895887.801169-73-192652768040053/AnsiballZ_stat.py'
Dec 05 00:51:28 compute-0 sudo[32359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:28 compute-0 python3.9[32361]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:51:28 compute-0 sudo[32359]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:28 compute-0 sudo[32482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdlbwsrxflzrlvrvwbqmubrifrgxgtnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895887.801169-73-192652768040053/AnsiballZ_copy.py'
Dec 05 00:51:28 compute-0 sudo[32482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:29 compute-0 python3.9[32484]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764895887.801169-73-192652768040053/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:51:29 compute-0 sudo[32482]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:29 compute-0 sudo[32634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgusvxovzzpaoydghxvnedxkcyobyoxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895889.2218764-88-225775603519376/AnsiballZ_setup.py'
Dec 05 00:51:29 compute-0 sudo[32634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:29 compute-0 python3.9[32636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:51:30 compute-0 sudo[32634]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:30 compute-0 sudo[32790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kavcutbdwqdzqmpbantvhaffeacjifsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895890.22156-96-157432890230754/AnsiballZ_file.py'
Dec 05 00:51:30 compute-0 sudo[32790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:30 compute-0 python3.9[32792]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:51:30 compute-0 sudo[32790]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:31 compute-0 sudo[32942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olzckaecdhbpiosylepbasgxcotlczvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895890.9665744-105-95082792045547/AnsiballZ_file.py'
Dec 05 00:51:31 compute-0 sudo[32942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:31 compute-0 python3.9[32944]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:51:31 compute-0 sudo[32942]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:32 compute-0 python3.9[33094]: ansible-ansible.builtin.service_facts Invoked
Dec 05 00:51:38 compute-0 python3.9[33347]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:51:39 compute-0 python3.9[33497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:51:40 compute-0 python3.9[33651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:51:41 compute-0 sudo[33807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqnwjqakydhkjdfitfcymwjwfonmfemq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895901.3546546-153-165016962697721/AnsiballZ_setup.py'
Dec 05 00:51:41 compute-0 sudo[33807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:41 compute-0 python3.9[33809]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:51:42 compute-0 sudo[33807]: pam_unix(sudo:session): session closed for user root
Dec 05 00:51:42 compute-0 sudo[33891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthiukwysisikqmldtxsqhszffqjhtdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764895901.3546546-153-165016962697721/AnsiballZ_dnf.py'
Dec 05 00:51:42 compute-0 sudo[33891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:51:42 compute-0 python3.9[33893]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
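This dnf module call resolves to a plain package install; the equivalent command line, with the same package list:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos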
Dec 05 00:52:26 compute-0 systemd[1]: Reloading.
Dec 05 00:52:26 compute-0 systemd-rc-local-generator[34087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:52:26 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 05 00:52:27 compute-0 systemd[1]: Reloading.
Dec 05 00:52:27 compute-0 systemd-rc-local-generator[34124]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:52:27 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 05 00:52:27 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 05 00:52:27 compute-0 systemd[1]: Reloading.
Dec 05 00:52:27 compute-0 systemd-rc-local-generator[34164]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:52:27 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 05 00:52:27 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 00:52:27 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 00:52:27 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 00:52:42 compute-0 sshd-session[34230]: Connection reset by authenticating user root 45.140.17.124 port 40410 [preauth]
Dec 05 00:52:43 compute-0 sshd-session[34236]: Connection reset by authenticating user root 45.140.17.124 port 41700 [preauth]
Dec 05 00:52:47 compute-0 sshd-session[34252]: Connection reset by authenticating user root 45.140.17.124 port 41722 [preauth]
Dec 05 00:52:49 compute-0 sshd-session[34262]: Connection reset by authenticating user root 45.140.17.124 port 41728 [preauth]
Dec 05 00:53:20 compute-0 sshd-session[34361]: Connection reset by authenticating user root 91.202.233.33 port 41474 [preauth]
Dec 05 00:53:22 compute-0 sshd-session[34367]: Connection reset by authenticating user root 91.202.233.33 port 56642 [preauth]
Dec 05 00:53:25 compute-0 sshd-session[34397]: Invalid user user from 91.202.233.33 port 56666
Dec 05 00:53:25 compute-0 sshd-session[34397]: Connection reset by invalid user user 91.202.233.33 port 56666 [preauth]
Dec 05 00:53:27 compute-0 sshd-session[34399]: Connection reset by authenticating user root 91.202.233.33 port 56680 [preauth]
Dec 05 00:53:28 compute-0 sshd-session[34401]: Invalid user demo from 91.202.233.33 port 56710
Dec 05 00:53:29 compute-0 sshd-session[34401]: Connection reset by invalid user demo 91.202.233.33 port 56710 [preauth]
Dec 05 00:53:32 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:53:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:53:32 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 05 00:53:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:53:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:53:33 compute-0 systemd[1]: Reloading.
Dec 05 00:53:33 compute-0 systemd-rc-local-generator[34518]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:53:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:53:33 compute-0 sudo[33891]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:53:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:53:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.157s CPU time.
Dec 05 00:53:34 compute-0 systemd[1]: run-r4d6dfa0464004ed9afee165154f165ad.service: Deactivated successfully.
Dec 05 00:53:34 compute-0 sudo[35436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwtvpauipbwmmrdmiarianxjmbarjtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896014.006616-165-170700959565904/AnsiballZ_command.py'
Dec 05 00:53:34 compute-0 sudo[35436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:34 compute-0 python3.9[35438]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:53:35 compute-0 sudo[35436]: pam_unix(sudo:session): session closed for user root
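rpm -V re-checks every file of the named packages against the rpm database (size, digest, mode, owner); it prints nothing and exits 0 when the just-installed set is pristine, which is how the playbook validates the transaction. For example:

    # Exit status 0 with no output means all files match the rpm database
    rpm -V lvm2 nftables NetworkManager || echo 'verification reported differences'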
Dec 05 00:53:36 compute-0 sudo[35717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgpfphzjkglwekwdalejjuujwbcrmoxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896015.542145-173-89619409090292/AnsiballZ_selinux.py'
Dec 05 00:53:36 compute-0 sudo[35717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:36 compute-0 python3.9[35719]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 05 00:53:36 compute-0 sudo[35717]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:37 compute-0 sudo[35869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjwtnhmaehboprsotcmfsttyjtmcuoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896016.8456435-184-176776207385009/AnsiballZ_command.py'
Dec 05 00:53:37 compute-0 sudo[35869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:37 compute-0 python3.9[35871]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 05 00:53:38 compute-0 sudo[35869]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:38 compute-0 sudo[36022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbemtnevfuqqdlkintkyrdycdniktihu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896018.484866-192-281283083722733/AnsiballZ_file.py'
Dec 05 00:53:38 compute-0 sudo[36022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:40 compute-0 python3.9[36024]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:53:40 compute-0 sudo[36022]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:41 compute-0 sudo[36174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iusivpcufxytfbtynufpxhcdnlkqwomc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896020.7311904-200-202611200878415/AnsiballZ_mount.py'
Dec 05 00:53:41 compute-0 sudo[36174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:41 compute-0 python3.9[36176]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 05 00:53:41 compute-0 sudo[36174]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:42 compute-0 sudo[36326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unxidlkaktjyznxqsskgbmadyaxydhdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896022.3517268-228-26758045512627/AnsiballZ_file.py'
Dec 05 00:53:42 compute-0 sudo[36326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:42 compute-0 python3.9[36328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:53:42 compute-0 sudo[36326]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:43 compute-0 sudo[36478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrueesyevzctfmhijrauhziqoqodaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896022.979676-236-152245672197792/AnsiballZ_stat.py'
Dec 05 00:53:43 compute-0 sudo[36478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:44 compute-0 python3.9[36480]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:53:44 compute-0 sudo[36478]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:45 compute-0 sudo[36601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrjotomjttdfocaureogwpffeuvrjqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896022.979676-236-152245672197792/AnsiballZ_copy.py'
Dec 05 00:53:45 compute-0 sudo[36601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:48 compute-0 python3.9[36603]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896022.979676-236-152245672197792/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:53:48 compute-0 sudo[36601]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:49 compute-0 sudo[36753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqqqiztfuxmyzjyaibfwotrnildwmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896029.6204274-260-150834012098980/AnsiballZ_stat.py'
Dec 05 00:53:49 compute-0 sudo[36753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:50 compute-0 python3.9[36755]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:53:50 compute-0 sudo[36753]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:50 compute-0 sudo[36905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymyqsmfzlvvgmnbhwwhthwctmjufvdzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896030.3058102-268-81845412557790/AnsiballZ_command.py'
Dec 05 00:53:50 compute-0 sudo[36905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:50 compute-0 python3.9[36907]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:53:50 compute-0 sudo[36905]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:51 compute-0 sudo[37058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iojrayduynleyqxywxhlkmlycwdtcidg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896031.050237-276-1673311022399/AnsiballZ_file.py'
Dec 05 00:53:51 compute-0 sudo[37058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:51 compute-0 python3.9[37060]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:53:51 compute-0 sudo[37058]: pam_unix(sudo:session): session closed for user root
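vgimportdevices --all seeds /etc/lvm/devices/system.devices from the physical volumes currently visible, and the follow-up touch makes sure the file exists even on a guest with no PVs, so LVM's devices-file mode has something to read. The same steps by hand:

    vgimportdevices --all || true    # may find nothing on a fresh guest
    touch /etc/lvm/devices/system.devices
    chmod 600 /etc/lvm/devices/system.devices
    lvmdevices                       # list whatever was imported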
Dec 05 00:53:52 compute-0 sudo[37210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqhljnmjghdcxkvfyjcnkvvrxcbwyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896032.1127107-287-2299029018238/AnsiballZ_getent.py'
Dec 05 00:53:52 compute-0 sudo[37210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:52 compute-0 python3.9[37212]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 05 00:53:52 compute-0 sudo[37210]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:52 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 00:53:53 compute-0 sudo[37364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxbepucdgrlzypxefomkhcppncvdgtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896033.016972-295-228502722348088/AnsiballZ_group.py'
Dec 05 00:53:53 compute-0 sudo[37364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:53 compute-0 python3.9[37366]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 00:53:53 compute-0 groupadd[37367]: group added to /etc/group: name=qemu, GID=107
Dec 05 00:53:53 compute-0 groupadd[37367]: group added to /etc/gshadow: name=qemu
Dec 05 00:53:53 compute-0 groupadd[37367]: new group: name=qemu, GID=107
Dec 05 00:53:53 compute-0 sudo[37364]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:54 compute-0 sudo[37522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpjyqmetafhmzegwrrtttllpheslqjkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896033.9990058-303-240107844518236/AnsiballZ_user.py'
Dec 05 00:53:54 compute-0 sudo[37522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:54 compute-0 python3.9[37524]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 00:53:54 compute-0 useradd[37526]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 00:53:54 compute-0 sudo[37522]: pam_unix(sudo:session): session closed for user root
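The qemu group and user are pinned to GID/UID 107, presumably so file ownership agrees between the host and libvirt/qemu containers. The groupadd/useradd records above correspond to:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -m -c 'qemu user' -s /sbin/nologin qemu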
Dec 05 00:53:55 compute-0 sudo[37682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usllqjfbsvvtwejokcfohwuzlqxwmdet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896035.0436604-311-164023944723180/AnsiballZ_getent.py'
Dec 05 00:53:55 compute-0 sudo[37682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:55 compute-0 python3.9[37684]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 05 00:53:55 compute-0 sudo[37682]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:56 compute-0 sudo[37835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btobrrhvxsxeeygbtfyzzevvuiyalcwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896035.830654-319-75051675735653/AnsiballZ_group.py'
Dec 05 00:53:56 compute-0 sudo[37835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:56 compute-0 python3.9[37837]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 00:53:56 compute-0 groupadd[37838]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 05 00:53:56 compute-0 groupadd[37838]: group added to /etc/gshadow: name=hugetlbfs
Dec 05 00:53:56 compute-0 groupadd[37838]: new group: name=hugetlbfs, GID=42477
Dec 05 00:53:56 compute-0 sudo[37835]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:57 compute-0 sudo[37993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kieyndvawxgbnkkkkkyyuqqcslhslrqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896036.6875827-328-5701398596491/AnsiballZ_file.py'
Dec 05 00:53:57 compute-0 sudo[37993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:57 compute-0 python3.9[37995]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 05 00:53:57 compute-0 sudo[37993]: pam_unix(sudo:session): session closed for user root
Dec 05 00:53:58 compute-0 sudo[38145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xumhbirgtalnadznqlnakjuznfwxvqoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896037.7548885-339-117044514842437/AnsiballZ_dnf.py'
Dec 05 00:53:58 compute-0 sudo[38145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:53:58 compute-0 python3.9[38147]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:54:00 compute-0 sudo[38145]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:00 compute-0 sudo[38298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utoalmilghfyubzbgbkkglkevkmednvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896040.2181256-347-41389380501314/AnsiballZ_file.py'
Dec 05 00:54:00 compute-0 sudo[38298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:00 compute-0 python3.9[38300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:54:00 compute-0 sudo[38298]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:01 compute-0 sudo[38450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkbvseddqwpgtjfydtrpvaugnbqemxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896040.871625-355-185880773444324/AnsiballZ_stat.py'
Dec 05 00:54:01 compute-0 sudo[38450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:01 compute-0 python3.9[38452]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:54:01 compute-0 sudo[38450]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:01 compute-0 sudo[38573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakdrtpjwqlsajbrbqqrbtqdsexmfsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896040.871625-355-185880773444324/AnsiballZ_copy.py'
Dec 05 00:54:01 compute-0 sudo[38573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:01 compute-0 python3.9[38575]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896040.871625-355-185880773444324/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:54:01 compute-0 sudo[38573]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:02 compute-0 sudo[38725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhiuymadjgftwisahhehyugnouboyouh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896042.135084-370-200362558255448/AnsiballZ_systemd.py'
Dec 05 00:54:02 compute-0 sudo[38725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:02 compute-0 python3.9[38727]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:54:03 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 00:54:03 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 05 00:54:03 compute-0 kernel: Bridge firewalling registered
Dec 05 00:54:03 compute-0 systemd-modules-load[38731]: Inserted module 'br_netfilter'
Dec 05 00:54:03 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 00:54:03 compute-0 sudo[38725]: pam_unix(sudo:session): session closed for user root
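The restart makes systemd-modules-load re-read /etc/modules-load.d/, picking up the 99-edpm.conf written at 00:54:01; the kernel lines confirm br_netfilter came in. The same effect by hand (the conf file may list more modules than the one visible here):

    echo br_netfilter >> /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter    # confirm the module is loaded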
Dec 05 00:54:03 compute-0 sudo[38885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrqttfhszaufbajbydusliulmzgwjsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896043.2780735-378-96168174893512/AnsiballZ_stat.py'
Dec 05 00:54:03 compute-0 sudo[38885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:03 compute-0 python3.9[38887]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:54:03 compute-0 sudo[38885]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:04 compute-0 sudo[39008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutgywfomeqxguhnsojfrzzcerbgrxvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896043.2780735-378-96168174893512/AnsiballZ_copy.py'
Dec 05 00:54:04 compute-0 sudo[39008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:04 compute-0 python3.9[39010]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896043.2780735-378-96168174893512/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:54:04 compute-0 sudo[39008]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:05 compute-0 sudo[39160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjufqjfwstexrwesexxdbsrsditzqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896044.8610187-396-58498168617311/AnsiballZ_dnf.py'
Dec 05 00:54:05 compute-0 sudo[39160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:05 compute-0 python3.9[39162]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:54:10 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 00:54:10 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 00:54:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:54:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:54:11 compute-0 systemd[1]: Reloading.
Dec 05 00:54:11 compute-0 systemd-rc-local-generator[39226]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:54:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:54:12 compute-0 sudo[39160]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:13 compute-0 python3.9[41457]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:54:14 compute-0 python3.9[42481]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 05 00:54:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:54:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:54:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.700s CPU time.
Dec 05 00:54:15 compute-0 systemd[1]: run-r2da31c0c80154411b0a98127cd83664a.service: Deactivated successfully.
Dec 05 00:54:15 compute-0 python3.9[43205]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:54:15 compute-0 sudo[43356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewsyhxfhnacwvdiggwjkljvdhefldjzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896055.4859202-435-208091451832729/AnsiballZ_command.py'
Dec 05 00:54:15 compute-0 sudo[43356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:16 compute-0 python3.9[43358]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:54:16 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 00:54:16 compute-0 systemd[1]: Starting Authorization Manager...
Dec 05 00:54:16 compute-0 polkitd[43575]: Started polkitd version 0.117
Dec 05 00:54:16 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 00:54:16 compute-0 polkitd[43575]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 00:54:16 compute-0 polkitd[43575]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 00:54:16 compute-0 polkitd[43575]: Finished loading, compiling and executing 2 rules
Dec 05 00:54:16 compute-0 systemd[1]: Started Authorization Manager.
Dec 05 00:54:16 compute-0 polkitd[43575]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 05 00:54:16 compute-0 sudo[43356]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:17 compute-0 sudo[43743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phnwsykvwrmszuhzlupevbngxcrxgvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896056.9521105-444-227434087192164/AnsiballZ_systemd.py'
Dec 05 00:54:17 compute-0 sudo[43743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:17 compute-0 python3.9[43745]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:54:17 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 00:54:17 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 05 00:54:17 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 00:54:17 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 00:54:17 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 00:54:17 compute-0 sudo[43743]: pam_unix(sudo:session): session closed for user root
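tuned-adm records the selected profile in /etc/tuned/active_profile, the same file the playbook stats and slurps at 00:54:13-14 before deciding whether a switch is needed. Verified by hand:

    tuned-adm profile throughput-performance
    cat /etc/tuned/active_profile    # expected: throughput-performance
    systemctl enable --now tuned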
Dec 05 00:54:18 compute-0 python3.9[43907]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 00:54:21 compute-0 sudo[44057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypkcrpizglvxcaffrnziaockvvxmixml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896060.9567428-501-106993938293487/AnsiballZ_systemd.py'
Dec 05 00:54:21 compute-0 sudo[44057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:21 compute-0 python3.9[44059]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:54:21 compute-0 systemd[1]: Reloading.
Dec 05 00:54:21 compute-0 systemd-rc-local-generator[44090]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:54:21 compute-0 sudo[44057]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:22 compute-0 sudo[44247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwufzgcofroeljsnszbwsklpcyfvquzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896062.0057316-501-73988582623358/AnsiballZ_systemd.py'
Dec 05 00:54:22 compute-0 sudo[44247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:22 compute-0 python3.9[44249]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:54:22 compute-0 systemd[1]: Reloading.
Dec 05 00:54:22 compute-0 systemd-rc-local-generator[44278]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:54:22 compute-0 sudo[44247]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:23 compute-0 sudo[44436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrfhaqwniuuavxvukzadzmkiegjtfeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896063.068388-517-145758349540198/AnsiballZ_command.py'
Dec 05 00:54:23 compute-0 sudo[44436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:23 compute-0 python3.9[44438]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:54:23 compute-0 sudo[44436]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:24 compute-0 sudo[44589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ertbrippkhwefdsqgxqyvmphwdwynslz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896063.7724571-525-242902465026383/AnsiballZ_command.py'
Dec 05 00:54:24 compute-0 sudo[44589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:24 compute-0 python3.9[44591]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:54:24 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 05 00:54:24 compute-0 sudo[44589]: pam_unix(sudo:session): session closed for user root
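Together with the dd at 00:53:37, the mode-0600 file task at 00:53:40, and the fstab entry at 00:53:41, the mkswap/swapon here leaves the node with a persistent 1 GiB swap file. The whole sequence in one place:

    dd if=/dev/zero of=/swap count=1024 bs=1M    # 1 GiB; the playbook skips this if /swap exists
    chmod 600 /swap
    mkswap /swap
    swapon /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab  # the line the ansible.posix.mount step persisted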
Dec 05 00:54:24 compute-0 sudo[44742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kowuudwfnzzfvgfgcuksvgaqbovalnpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896064.4867766-533-86500653286193/AnsiballZ_command.py'
Dec 05 00:54:24 compute-0 sudo[44742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:25 compute-0 python3.9[44744]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:54:26 compute-0 sudo[44742]: pam_unix(sudo:session): session closed for user root
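update-ca-trust recompiles the consolidated trust stores under /etc/pki/ca-trust/extracted/ from the anchors directory, which now contains the tls-ca-bundle.pem installed at 00:53:48:

    install -m 0644 tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust
    trust list | head    # spot-check that the new anchors are now trusted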
Dec 05 00:54:26 compute-0 sudo[44904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkueyzstmmkacqfcsjjpwdhhlocpfmxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896066.6456187-541-234263884314961/AnsiballZ_command.py'
Dec 05 00:54:26 compute-0 sudo[44904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:27 compute-0 python3.9[44906]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:54:27 compute-0 sudo[44904]: pam_unix(sudo:session): session closed for user root
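
Writing 2 to /sys/kernel/mm/ksm/run stops kernel samepage merging and unmerges all pages merged so far (0 merely stops it, 1 starts it), completing the ksmtuned shutdown performed earlier. The equivalent in Python:

    # KSM run values: 0 = stop, 1 = run, 2 = stop and unmerge everything.
    with open("/sys/kernel/mm/ksm/run", "w") as f:
        f.write("2")
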
Dec 05 00:54:27 compute-0 sudo[45057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clocieqjxcbcgkyggjjubfczinkmdqnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896067.399616-549-166414585959536/AnsiballZ_systemd.py'
Dec 05 00:54:27 compute-0 sudo[45057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:27 compute-0 python3.9[45059]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:54:27 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 05 00:54:27 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 05 00:54:27 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 05 00:54:27 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 05 00:54:27 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 05 00:54:28 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 05 00:54:28 compute-0 sudo[45057]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:28 compute-0 sshd-session[31428]: Connection closed by 192.168.122.30 port 58024
Dec 05 00:54:28 compute-0 sshd-session[31425]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:54:28 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 05 00:54:28 compute-0 systemd[1]: session-9.scope: Consumed 2min 14.486s CPU time.
Dec 05 00:54:28 compute-0 systemd-logind[792]: Session 9 logged out. Waiting for processes to exit.
Dec 05 00:54:28 compute-0 systemd-logind[792]: Removed session 9.
Dec 05 00:54:34 compute-0 sshd-session[45089]: Accepted publickey for zuul from 192.168.122.30 port 39340 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:54:34 compute-0 systemd-logind[792]: New session 10 of user zuul.
Dec 05 00:54:34 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 05 00:54:34 compute-0 sshd-session[45089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:54:35 compute-0 python3.9[45242]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:54:36 compute-0 sudo[45396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgggqpkihnilmwhikvcxcdpzytphkuyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896076.059699-36-279645837779266/AnsiballZ_getent.py'
Dec 05 00:54:36 compute-0 sudo[45396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:36 compute-0 python3.9[45398]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 05 00:54:36 compute-0 sudo[45396]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:37 compute-0 sudo[45549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icligktespmxssytgvtputexfkiaxswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896077.0064604-44-171945910366733/AnsiballZ_group.py'
Dec 05 00:54:37 compute-0 sudo[45549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:37 compute-0 python3.9[45551]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 00:54:37 compute-0 groupadd[45552]: group added to /etc/group: name=openvswitch, GID=42476
Dec 05 00:54:37 compute-0 groupadd[45552]: group added to /etc/gshadow: name=openvswitch
Dec 05 00:54:37 compute-0 groupadd[45552]: new group: name=openvswitch, GID=42476
Dec 05 00:54:37 compute-0 sudo[45549]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:38 compute-0 sudo[45707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmbyqqdcvegavxhwqautafuzdhgvzlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896077.9450905-52-130241845520871/AnsiballZ_user.py'
Dec 05 00:54:38 compute-0 sudo[45707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:38 compute-0 python3.9[45709]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 00:54:38 compute-0 useradd[45711]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 00:54:38 compute-0 useradd[45711]: add 'openvswitch' to group 'hugetlbfs'
Dec 05 00:54:38 compute-0 useradd[45711]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 05 00:54:38 compute-0 sudo[45707]: pam_unix(sudo:session): session closed for user root
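
The group and user tasks pin openvswitch to UID/GID 42476 and add it to the hugetlbfs supplementary group (typically so ovs-vswitchd can map hugepage-backed memory). The shadow-utils equivalent, with values taken from the log:

    import subprocess

    subprocess.run(["groupadd", "--gid", "42476", "openvswitch"], check=True)
    subprocess.run(["useradd",
                    "--uid", "42476",
                    "--gid", "openvswitch",
                    "--groups", "hugetlbfs",        # supplementary group, as logged
                    "--shell", "/sbin/nologin",     # service account, no login
                    "--comment", "openvswitch user",
                    "openvswitch"], check=True)
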
Dec 05 00:54:39 compute-0 sudo[45867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsvtjqwumqqbsowcpujsdqimfniaftcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896079.191705-62-5586464836084/AnsiballZ_setup.py'
Dec 05 00:54:39 compute-0 sudo[45867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:39 compute-0 python3.9[45869]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:54:40 compute-0 sudo[45867]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:40 compute-0 sudo[45951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqjbzwxbqpzoahnukeesgsdpajxisah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896079.191705-62-5586464836084/AnsiballZ_dnf.py'
Dec 05 00:54:40 compute-0 sudo[45951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:40 compute-0 python3.9[45953]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 00:54:43 compute-0 sudo[45951]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:43 compute-0 sudo[46115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmggtmlecxogckuvfkykkflnvumhnhek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896083.4059355-76-52344247725662/AnsiballZ_dnf.py'
Dec 05 00:54:43 compute-0 sudo[46115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:43 compute-0 sshd[1009]: Timeout before authentication for connection from 45.140.17.124 to 38.102.83.176, pid = 34244
Dec 05 00:54:44 compute-0 python3.9[46117]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
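
Note the two-phase dnf pattern: the previous task ran with download_only=True, and this one installs from the populated cache, so a mid-transfer network failure cannot leave a partial install. Roughly:

    import subprocess

    # Phase 1: fetch packages only.
    subprocess.run(["dnf", "install", "-y", "--downloadonly", "openvswitch"], check=True)
    # Phase 2: install from the local cache.
    subprocess.run(["dnf", "install", "-y", "openvswitch"], check=True)
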
Dec 05 00:54:54 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:54:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:54:55 compute-0 groupadd[46140]: group added to /etc/group: name=unbound, GID=993
Dec 05 00:54:55 compute-0 groupadd[46140]: group added to /etc/gshadow: name=unbound
Dec 05 00:54:55 compute-0 groupadd[46140]: new group: name=unbound, GID=993
Dec 05 00:54:55 compute-0 useradd[46147]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 05 00:54:55 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 05 00:54:55 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 05 00:54:56 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:54:56 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:54:56 compute-0 systemd[1]: Reloading.
Dec 05 00:54:56 compute-0 systemd-rc-local-generator[46644]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:54:56 compute-0 systemd-sysv-generator[46648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:54:56 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:54:57 compute-0 sudo[46115]: pam_unix(sudo:session): session closed for user root
Dec 05 00:54:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:54:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:54:57 compute-0 systemd[1]: run-re81ed5bc8b4d4abe8d7cb9f291f2b5ec.service: Deactivated successfully.
Dec 05 00:54:58 compute-0 sudo[47214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcegphwhbzmecsnsggaecbeltjxdinvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896097.5029552-84-153422016444757/AnsiballZ_systemd.py'
Dec 05 00:54:58 compute-0 sudo[47214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:54:58 compute-0 python3.9[47216]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 00:54:58 compute-0 systemd[1]: Reloading.
Dec 05 00:54:58 compute-0 systemd-sysv-generator[47249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:54:58 compute-0 systemd-rc-local-generator[47245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:54:58 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 05 00:54:58 compute-0 chown[47258]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 05 00:54:58 compute-0 ovs-ctl[47263]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 05 00:54:58 compute-0 ovs-ctl[47263]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 05 00:54:58 compute-0 ovs-ctl[47263]: Starting ovsdb-server [  OK  ]
Dec 05 00:54:58 compute-0 ovs-vsctl[47312]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 05 00:54:59 compute-0 ovs-vsctl[47332]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8dd76c1c-ab01-42af-b35e-2e870841b6ad\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 05 00:54:59 compute-0 ovs-ctl[47263]: Configuring Open vSwitch system IDs [  OK  ]
Dec 05 00:54:59 compute-0 ovs-ctl[47263]: Enabling remote OVSDB managers [  OK  ]
Dec 05 00:54:59 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 05 00:54:59 compute-0 ovs-vsctl[47338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 05 00:54:59 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 05 00:54:59 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 05 00:54:59 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 05 00:54:59 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 05 00:54:59 compute-0 ovs-ctl[47382]: Inserting openvswitch module [  OK  ]
Dec 05 00:54:59 compute-0 ovs-ctl[47351]: Starting ovs-vswitchd [  OK  ]
Dec 05 00:54:59 compute-0 ovs-ctl[47351]: Enabling remote OVSDB managers [  OK  ]
Dec 05 00:54:59 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 05 00:54:59 compute-0 ovs-vsctl[47400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 05 00:54:59 compute-0 systemd[1]: Starting Open vSwitch...
Dec 05 00:54:59 compute-0 systemd[1]: Finished Open vSwitch.
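
At this point ovs-ctl has created /etc/openvswitch/conf.db, started ovsdb-server and ovs-vswitchd, and loaded the openvswitch kernel module. The identifiers it recorded can be read back for a quick sanity check (key names as set by the ovs-vsctl calls above):

    import subprocess

    # ovs-vsctl accepts '-' for '_' in column names, so external-ids works.
    subprocess.run(["ovs-vsctl", "get", "Open_vSwitch", ".", "external-ids"], check=True)
    subprocess.run(["ovs-vsctl", "get", "Open_vSwitch", ".", "ovs-version"], check=True)
    subprocess.run(["ovs-vsctl", "show"], check=True)   # no bridges defined yet here
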
Dec 05 00:54:59 compute-0 sudo[47214]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:00 compute-0 python3.9[47551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:55:01 compute-0 sudo[47701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtgmepkhkeibzjpegpgpvohzbecouhmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896100.5869455-102-182385397303744/AnsiballZ_sefcontext.py'
Dec 05 00:55:01 compute-0 sudo[47701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:01 compute-0 python3.9[47703]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 05 00:55:02 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 00:55:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 00:55:02 compute-0 sudo[47701]: pam_unix(sudo:session): session closed for user root
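
The sefcontext task registers a persistent file-context rule for /var/lib/edpm-config; reloading the policy is what triggers the second round of SELinux "Converting ... SID table entries" messages above. The CLI equivalent is roughly:

    import subprocess

    # Record the rule (selevel s0 is the default); this does not relabel
    # files that already exist.
    subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t",
                    "/var/lib/edpm-config(/.*)?"], check=True)
    # Apply the context to anything already on disk.
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)
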
Dec 05 00:55:03 compute-0 python3.9[47858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:55:04 compute-0 sudo[48014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbdaguwvvwtdtbtlzlvbwfghdcwaewlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896104.0988586-120-70812817895655/AnsiballZ_dnf.py'
Dec 05 00:55:04 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 05 00:55:04 compute-0 sudo[48014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:04 compute-0 python3.9[48016]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:55:05 compute-0 sudo[48014]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:06 compute-0 sudo[48167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wopueuaqngtwtkzcesggiyrayabergbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896106.02886-128-25795428273124/AnsiballZ_command.py'
Dec 05 00:55:06 compute-0 sudo[48167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:06 compute-0 python3.9[48169]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:55:07 compute-0 sudo[48167]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:07 compute-0 sudo[48454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxvujfrubvkpcdrbhipxncpnafiduol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896107.4764566-136-11081244630543/AnsiballZ_file.py'
Dec 05 00:55:07 compute-0 sudo[48454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:08 compute-0 python3.9[48456]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 00:55:08 compute-0 sudo[48454]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:08 compute-0 python3.9[48606]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:55:09 compute-0 sudo[48758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvvdpuaoobaybyulzxthqqmsmcvgkivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896109.2610667-152-104080096001646/AnsiballZ_dnf.py'
Dec 05 00:55:09 compute-0 sudo[48758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:09 compute-0 python3.9[48760]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:55:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:55:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:55:11 compute-0 systemd[1]: Reloading.
Dec 05 00:55:11 compute-0 systemd-rc-local-generator[48795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:55:11 compute-0 systemd-sysv-generator[48799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:55:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:55:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:55:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:55:11 compute-0 systemd[1]: run-rb892d9f4f1bc4359bf72bc6f6b4e4a8f.service: Deactivated successfully.
Dec 05 00:55:11 compute-0 sudo[48758]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:12 compute-0 sudo[49077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tscxwuvadlwkrztxhafhfbpldxbwbqjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896112.2140057-160-143436722698025/AnsiballZ_systemd.py'
Dec 05 00:55:12 compute-0 sudo[49077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:12 compute-0 python3.9[49079]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:55:12 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 05 00:55:12 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 05 00:55:12 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 05 00:55:12 compute-0 systemd[1]: Stopping Network Manager...
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9171] caught SIGTERM, shutting down normally.
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): canceled DHCP transaction
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): state changed no lease
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9186] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 00:55:12 compute-0 NetworkManager[7183]: <info>  [1764896112.9246] exiting (success)
Dec 05 00:55:12 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:55:12 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:55:12 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 05 00:55:12 compute-0 systemd[1]: Stopped Network Manager.
Dec 05 00:55:12 compute-0 systemd[1]: NetworkManager.service: Consumed 13.604s CPU time, 4.1M memory peak, read 0B from disk, written 39.0K to disk.
Dec 05 00:55:12 compute-0 systemd[1]: Starting Network Manager...
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.0012] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.0014] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.0064] manager[0x564e22c6c090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 05 00:55:13 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 00:55:13 compute-0 systemd[1]: Started Hostname Service.
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1175] hostname: hostname: using hostnamed
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1176] hostname: static hostname changed from (none) to "compute-0"
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1182] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1187] manager[0x564e22c6c090]: rfkill: Wi-Fi hardware radio set enabled
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1187] manager[0x564e22c6c090]: rfkill: WWAN hardware radio set enabled
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1208] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1217] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1218] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1218] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1219] manager: Networking is enabled by state file
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1221] settings: Loaded settings plugin: keyfile (internal)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1224] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1249] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1259] dhcp: init: Using DHCP client 'internal'
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1261] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1266] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1271] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1279] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1286] device (eth0): carrier: link connected
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1290] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1294] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1295] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1301] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1307] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1312] device (eth1): carrier: link connected
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1315] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1320] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c) (indicated)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1321] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1326] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1331] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1337] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 05 00:55:13 compute-0 systemd[1]: Started Network Manager.
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1351] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1355] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1357] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1360] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1363] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1365] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1366] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1371] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1400] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1403] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1414] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1430] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1439] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1442] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1445] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1451] device (lo): Activation: successful, device activated.
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1462] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 05 00:55:13 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1565] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1571] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1572] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1575] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1578] device (eth1): Activation: successful, device activated.
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1612] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1614] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1616] manager: NetworkManager state is now CONNECTED_SITE
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1620] device (eth0): Activation: successful, device activated.
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1623] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 05 00:55:13 compute-0 NetworkManager[49092]: <info>  [1764896113.1655] manager: startup complete
Dec 05 00:55:13 compute-0 sudo[49077]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:13 compute-0 systemd[1]: Finished Network Manager Wait Online.
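
state=restarted on the systemd module produces the stop/start pair above; after reloading its config, NetworkManager reassumes lo, eth0 and eth1 as external/assumed connections, and the wait-online unit holds until startup completes. Scripted directly, the restart-and-wait amounts to:

    import subprocess

    subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
    # nm-online -s waits for NetworkManager startup to finish (what the
    # NetworkManager-wait-online unit does); -q suppresses output.
    subprocess.run(["nm-online", "-s", "-q"], check=True)
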
Dec 05 00:55:13 compute-0 sudo[49303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pczwhvutdsmgkyuuwnzbafvxtwsmebfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896113.3497436-168-49327419205644/AnsiballZ_dnf.py'
Dec 05 00:55:13 compute-0 sudo[49303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:13 compute-0 python3.9[49305]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:55:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 00:55:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 00:55:18 compute-0 systemd[1]: Reloading.
Dec 05 00:55:18 compute-0 systemd-rc-local-generator[49359]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:55:18 compute-0 systemd-sysv-generator[49363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:55:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 00:55:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 00:55:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 00:55:19 compute-0 systemd[1]: run-rff9f6249a21948d7b1b5821026ebd649.service: Deactivated successfully.
Dec 05 00:55:19 compute-0 sudo[49303]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:20 compute-0 sudo[49762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrzhvbvebzpuvgohwiobixnhhczqhewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896119.7804508-180-275309164993937/AnsiballZ_stat.py'
Dec 05 00:55:20 compute-0 sudo[49762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:20 compute-0 python3.9[49764]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:55:20 compute-0 sudo[49762]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:21 compute-0 sudo[49914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwhcqdnzdppxowybhrkfkcikfybnybj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896120.650517-189-176237634716002/AnsiballZ_ini_file.py'
Dec 05 00:55:21 compute-0 sudo[49914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:21 compute-0 python3.9[49916]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:21 compute-0 sudo[49914]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:22 compute-0 sudo[50068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxisgzixbngfamzvdivtydxwcczqgyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896121.774255-199-220519848191476/AnsiballZ_ini_file.py'
Dec 05 00:55:22 compute-0 sudo[50068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:22 compute-0 python3.9[50070]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:22 compute-0 sudo[50068]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:22 compute-0 sudo[50220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bumdhstnopkxlqaygsfjivltfqrdqunc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896122.4608688-199-156542136084729/AnsiballZ_ini_file.py'
Dec 05 00:55:22 compute-0 sudo[50220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:22 compute-0 python3.9[50222]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:23 compute-0 sudo[50220]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:23 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:55:23 compute-0 sudo[50372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjrsmfldrmfpoywuljeydpneilpegadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896123.2313533-214-100114321134719/AnsiballZ_ini_file.py'
Dec 05 00:55:23 compute-0 sudo[50372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:23 compute-0 python3.9[50374]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:23 compute-0 sudo[50372]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:24 compute-0 sudo[50524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dalrcxhwpcjaedxdwcngcntssaqigdgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896124.0335245-214-64538946793770/AnsiballZ_ini_file.py'
Dec 05 00:55:24 compute-0 sudo[50524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:24 compute-0 python3.9[50526]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:24 compute-0 sudo[50524]: pam_unix(sudo:session): session closed for user root
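
Taken together, the five ini_file tasks pin no-auto-default=* in the [main] section of NetworkManager.conf and drop any dns/rc-manager overrides there and in the cloud-init snippet, so NetworkManager stops auto-creating "Wired connection N" profiles for new NICs and resolv.conf handling reverts to the defaults. A configparser sketch for the NetworkManager.conf half (the 99-cloud-init.conf edits are analogous):

    import configparser

    PATH = "/etc/NetworkManager/NetworkManager.conf"
    cfg = configparser.ConfigParser()
    cfg.read(PATH)

    if not cfg.has_section("main"):
        cfg.add_section("main")
    cfg.set("main", "no-auto-default", "*")     # never auto-activate unknown NICs
    for key in ("dns", "rc-manager"):           # remove overrides, as state=absent does
        cfg.remove_option("main", key)

    with open(PATH, "w") as f:
        cfg.write(f, space_around_delimiters=False)   # mirrors no_extra_spaces=True
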
Dec 05 00:55:25 compute-0 sudo[50676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osurmhqugbafzjteahasfzjchraojbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896124.8146298-229-92180075428855/AnsiballZ_stat.py'
Dec 05 00:55:25 compute-0 sudo[50676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:25 compute-0 python3.9[50678]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:55:25 compute-0 sudo[50676]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:26 compute-0 sudo[50799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulssvqrroaqkaucvygeserjwdvtcajkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896124.8146298-229-92180075428855/AnsiballZ_copy.py'
Dec 05 00:55:26 compute-0 sudo[50799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:26 compute-0 python3.9[50801]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896124.8146298-229-92180075428855/.source _original_basename=.p2a_xk_x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:26 compute-0 sudo[50799]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:26 compute-0 sudo[50951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-talslhpufujcguosniltiidtotfzgypr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896126.5759525-244-258485423196973/AnsiballZ_file.py'
Dec 05 00:55:26 compute-0 sudo[50951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:27 compute-0 python3.9[50953]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:27 compute-0 sudo[50951]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:27 compute-0 sudo[51103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjuoilylfltmkghfzoilvhzlcifhchjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896127.4297152-252-178935228458959/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 05 00:55:27 compute-0 sudo[51103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:28 compute-0 python3.9[51105]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 05 00:55:28 compute-0 sudo[51103]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:28 compute-0 sudo[51255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykfgdsndgaghwyvudfsvjnbmnjfgcnec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896128.4007964-261-92009960140619/AnsiballZ_file.py'
Dec 05 00:55:28 compute-0 sudo[51255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:28 compute-0 python3.9[51257]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:28 compute-0 sudo[51255]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:29 compute-0 sudo[51407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atqecvzlzjrjpbtgxpxbfrtpwesxspup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896129.2624032-271-225201598388025/AnsiballZ_stat.py'
Dec 05 00:55:29 compute-0 sudo[51407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:29 compute-0 sudo[51407]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:30 compute-0 sudo[51530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-porbdeuuxydeolrlhprfntbbcfispyhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896129.2624032-271-225201598388025/AnsiballZ_copy.py'
Dec 05 00:55:30 compute-0 sudo[51530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:30 compute-0 sudo[51530]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:31 compute-0 sudo[51682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhyjltrvkeaifbobpmbadnmjbsaswcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896130.5646617-286-28816922477398/AnsiballZ_slurp.py'
Dec 05 00:55:31 compute-0 sudo[51682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:31 compute-0 python3.9[51684]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 05 00:55:31 compute-0 sudo[51682]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:32 compute-0 sudo[51857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecylsqfzprebvaezwmipwvjtxwaxqfyw ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896131.6636953-295-174145510162028/async_wrapper.py j477830243345 300 /home/zuul/.ansible/tmp/ansible-tmp-1764896131.6636953-295-174145510162028/AnsiballZ_edpm_os_net_config.py _'
Dec 05 00:55:32 compute-0 sudo[51857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:32 compute-0 ansible-async_wrapper.py[51859]: Invoked with j477830243345 300 /home/zuul/.ansible/tmp/ansible-tmp-1764896131.6636953-295-174145510162028/AnsiballZ_edpm_os_net_config.py _
Dec 05 00:55:32 compute-0 ansible-async_wrapper.py[51862]: Starting module and watcher
Dec 05 00:55:32 compute-0 ansible-async_wrapper.py[51862]: Start watching 51863 (300)
Dec 05 00:55:32 compute-0 ansible-async_wrapper.py[51863]: Start module (51863)
Dec 05 00:55:32 compute-0 ansible-async_wrapper.py[51859]: Return async_wrapper task started.
Dec 05 00:55:32 compute-0 sudo[51857]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:32 compute-0 python3.9[51864]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
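
The async-wrapped edpm_os_net_config module drives os-net-config with the options shown in its Invoked line; on the command line the run would look roughly as below. With --detailed-exit-codes, return code 2 conventionally means "configuration changed and applied", which is why the role checked /var/lib/edpm-config/os-net-config.returncode beforehand:

    import subprocess

    result = subprocess.run(["os-net-config",
                             "-c", "/etc/os-net-config/config.yaml",
                             "--detailed-exit-codes",   # rc 2 => changes applied
                             "--debug",
                             "--cleanup"])              # remove interfaces absent from config
    if result.returncode not in (0, 2):
        raise SystemExit(f"os-net-config failed: rc={result.returncode}")
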
Dec 05 00:55:33 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 05 00:55:33 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 05 00:55:33 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 05 00:55:33 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 05 00:55:33 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.4597] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.4620] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5310] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5312] audit: op="connection-add" uuid="4733033e-b349-4da8-936a-745565aa8195" name="br-ex-br" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5328] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5329] audit: op="connection-add" uuid="8f533c4f-d253-418e-a730-17ae4582acc0" name="br-ex-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5342] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5344] audit: op="connection-add" uuid="a73e10db-a6ec-4485-b445-912f438e1c86" name="eth1-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5357] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5359] audit: op="connection-add" uuid="8e7ee521-5a0b-4baf-a3c5-f56c52f76df9" name="vlan20-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5371] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5373] audit: op="connection-add" uuid="52bbbaaf-5799-4b23-a85c-f4088e52ce08" name="vlan21-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5386] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5387] audit: op="connection-add" uuid="a8e02914-dcb4-454d-af1a-99a710a94712" name="vlan22-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5400] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5401] audit: op="connection-add" uuid="8351dc76-4b2a-4b38-a38b-ec331fba6f0a" name="vlan23-port" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5422] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5440] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5442] audit: op="connection-add" uuid="f79589b2-1db3-4f8a-bef4-d4fa276641e4" name="br-ex-if" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5493] audit: op="connection-update" uuid="f5ed226a-1553-53a1-8171-c813f4b5c69c" name="ci-private-network" args="connection.controller,connection.port-type,connection.master,connection.timestamp,connection.slave-type,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv4.routing-rules,ipv4.dns,ipv4.routes,ipv4.never-default,ipv4.method,ipv4.addresses,ovs-external-ids.data,ovs-interface.type" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5510] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5512] audit: op="connection-add" uuid="7dd491ff-d7b6-449f-8ec1-4c6249dea15b" name="vlan20-if" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5529] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5530] audit: op="connection-add" uuid="296ce3d8-abc2-40f9-b037-fd406638e17c" name="vlan21-if" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5547] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5549] audit: op="connection-add" uuid="0406741a-5880-4a7c-b8e6-759f13e1395b" name="vlan22-if" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5566] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5568] audit: op="connection-add" uuid="47411013-6ebd-4ebf-8296-d3c322f63179" name="vlan23-if" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5581] audit: op="connection-delete" uuid="74afdaaf-08cb-315f-8816-01bd59fc3bf4" name="Wired connection 1" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5593] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5602] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5606] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4733033e-b349-4da8-936a-745565aa8195)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5607] audit: op="connection-activate" uuid="4733033e-b349-4da8-936a-745565aa8195" name="br-ex-br" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5609] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5616] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5621] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8f533c4f-d253-418e-a730-17ae4582acc0)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5624] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5629] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5633] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a73e10db-a6ec-4485-b445-912f438e1c86)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5635] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5642] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5646] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8e7ee521-5a0b-4baf-a3c5-f56c52f76df9)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5648] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5654] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5659] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (52bbbaaf-5799-4b23-a85c-f4088e52ce08)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5660] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5666] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5670] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a8e02914-dcb4-454d-af1a-99a710a94712)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5672] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5677] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5681] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8351dc76-4b2a-4b38-a38b-ec331fba6f0a)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5682] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5684] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5685] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5691] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5694] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5697] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f79589b2-1db3-4f8a-bef4-d4fa276641e4)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5698] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5701] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5702] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5703] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5704] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5714] device (eth1): disconnecting for new activation request.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5714] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5717] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5719] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5720] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5723] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5726] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5731] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7dd491ff-d7b6-449f-8ec1-4c6249dea15b)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5732] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5734] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5735] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5737] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5739] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5744] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5748] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (296ce3d8-abc2-40f9-b037-fd406638e17c)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5749] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5752] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5753] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5754] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5757] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5762] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5766] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (0406741a-5880-4a7c-b8e6-759f13e1395b)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5767] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5769] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5771] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5772] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5775] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5779] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5782] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (47411013-6ebd-4ebf-8296-d3c322f63179)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5783] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5786] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5787] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5788] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5790] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5800] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5802] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5805] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5806] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5812] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5815] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5819] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5822] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5824] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5828] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5831] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5833] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5834] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5839] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5842] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5845] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5846] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 kernel: Timeout policy base is empty
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5849] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5851] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 systemd-udevd[51870]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5854] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5856] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5861] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): canceled DHCP transaction
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): state changed no lease
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5866] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5875] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5877] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51865 uid=0 result="fail" reason="Device is not activated"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5882] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 05 00:55:34 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5914] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5917] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5922] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5956] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5966] device (eth1): disconnecting for new activation request.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5966] audit: op="connection-activate" uuid="f5ed226a-1553-53a1-8171-c813f4b5c69c" name="ci-private-network" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.5995] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6000] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 05 00:55:34 compute-0 kernel: br-ex: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6125] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6128] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6134] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6136] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6140] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6143] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6150] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6151] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6152] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6153] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6153] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6154] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6173] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6180] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6184] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6186] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6189] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6192] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6198] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6201] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6206] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 kernel: vlan22: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6209] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6212] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6215] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6217] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 systemd-udevd[51869]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6224] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6226] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6251] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 05 00:55:34 compute-0 kernel: vlan21: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6270] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6324] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 kernel: vlan23: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6328] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6333] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6340] device (eth1): Activation: successful, device activated.
Dec 05 00:55:34 compute-0 systemd-udevd[51973]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6361] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 kernel: vlan20: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6381] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6389] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6395] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6400] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6413] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6456] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6458] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6463] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6465] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6471] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6476] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6481] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6496] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6507] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6524] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6543] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6552] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6561] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6569] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6573] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 05 00:55:34 compute-0 NetworkManager[49092]: <info>  [1764896134.6581] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
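
The burst above is NetworkManager bringing up an Open vSwitch hierarchy: for each member there is a controller profile (ovs-bridge, 'br-ex-br'), an ovs-port profile attached to it ('br-ex-port', 'eth1-port', 'vlanNN-port'), and an ovs-interface profile that carries any IP configuration ('br-ex-if', 'vlanNN-if'). Each device walks disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated, and an interface only proceeds once ovsdb confirms it is "attached as port". A minimal sketch of how such a triplet is created, assuming nmcli with NetworkManager's OVS plugin installed; connection names mirror the log, the IP methods are placeholders since the log does not show br-ex-if's addressing:

    import subprocess

    def nmcli(*args: str) -> None:
        # Run one nmcli command; check=True surfaces a failed step immediately.
        subprocess.run(["nmcli", *args], check=True)

    # OVS bridge profile (the log's 'br-ex-br').
    nmcli("connection", "add", "type", "ovs-bridge",
          "conn.interface", "br-ex", "con-name", "br-ex-br")

    # OVS port on that bridge (the log's 'br-ex-port').
    nmcli("connection", "add", "type", "ovs-port",
          "conn.interface", "br-ex", "master", "br-ex-br",
          "con-name", "br-ex-port")

    # Internal OVS interface, the profile that would hold IP settings
    # (the log's 'br-ex-if'); methods below are placeholders.
    nmcli("connection", "add", "type", "ovs-interface",
          "slave-type", "ovs-port", "conn.interface", "br-ex",
          "master", "br-ex-port", "con-name", "br-ex-if",
          "ipv4.method", "disabled", "ipv6.method", "disabled")

    # Uplink NIC enslaved as a second port (the log's 'eth1-port'; the
    # 'ci-private-network' profile then attaches eth1 itself to it).
    nmcli("connection", "add", "type", "ovs-port",
          "conn.interface", "eth1", "master", "br-ex-br",
          "con-name", "eth1-port")

    nmcli("connection", "up", "br-ex-br")
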
Dec 05 00:55:35 compute-0 NetworkManager[49092]: <info>  [1764896135.7866] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.0341] checkpoint[0x564e22c42950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.0345] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 sudo[52222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcepkobgtpzukbgrabvzcodgowcohlmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896135.713497-295-197216005408263/AnsiballZ_async_status.py'
Dec 05 00:55:36 compute-0 sudo[52222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.3678] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.3694] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 python3.9[52225]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=status _async_dir=/root/.ansible_async
Dec 05 00:55:36 compute-0 sudo[52222]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.6015] audit: op="networking-control" arg="global-dns-configuration" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.6055] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.6090] audit: op="networking-control" arg="global-dns-configuration" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.6117] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.7636] checkpoint[0x564e22c42a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 05 00:55:36 compute-0 NetworkManager[49092]: <info>  [1764896136.7645] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
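
The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy sequences around Checkpoint/1 and Checkpoint/2 are NetworkManager's transactional safety net: a checkpoint snapshots device state and NetworkManager rolls everything back automatically unless the caller keeps extending the timeout and finally destroys the checkpoint to commit. The D-Bus calls behind those audit entries, sketched with dbus-python (assumes python3-dbus; the 60/120-second timeouts are illustrative, not taken from the log):

    import dbus

    def apply_network_changes() -> None:
        pass  # hypothetical placeholder for the actual reconfiguration step

    def verify_connectivity() -> None:
        pass  # hypothetical placeholder, e.g. ping the default gateway

    bus = dbus.SystemBus()
    nm = dbus.Interface(
        bus.get_object("org.freedesktop.NetworkManager",
                       "/org/freedesktop/NetworkManager"),
        "org.freedesktop.NetworkManager")

    # Snapshot every device; NetworkManager rolls back automatically after
    # the timeout unless the checkpoint is destroyed first.
    cp = nm.CheckpointCreate(dbus.Array([], signature="o"),  # empty = all devices
                             dbus.UInt32(60),                # rollback timeout, s
                             dbus.UInt32(0))                 # no flags

    try:
        apply_network_changes()
        nm.CheckpointAdjustRollbackTimeout(cp, dbus.UInt32(120))  # keep-alive
        verify_connectivity()
        nm.CheckpointDestroy(cp)      # commit: drop the rollback point
    except Exception:
        nm.CheckpointRollback(cp)     # restore the pre-change state
        raise
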
Dec 05 00:55:36 compute-0 ansible-async_wrapper.py[51863]: Module complete (51863)
Dec 05 00:55:37 compute-0 ansible-async_wrapper.py[51862]: Done in kid B.
Dec 05 00:55:39 compute-0 sudo[52327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtxwhxjocyxgpcbzujhguxasevpxodft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896135.713497-295-197216005408263/AnsiballZ_async_status.py'
Dec 05 00:55:39 compute-0 sudo[52327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:39 compute-0 python3.9[52329]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=status _async_dir=/root/.ansible_async
Dec 05 00:55:39 compute-0 sudo[52327]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:40 compute-0 sudo[52427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexmvmlckcoiegchdkgckpkwrthfvhfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896135.713497-295-197216005408263/AnsiballZ_async_status.py'
Dec 05 00:55:40 compute-0 sudo[52427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:40 compute-0 python3.9[52429]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 00:55:40 compute-0 sudo[52427]: pam_unix(sudo:session): session closed for user root
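
The async_status invocations above poll a detached job: the network configuration run was launched through Ansible's async support, its wrapper (ansible-async_wrapper.py, "Module complete" / "Done in kid B.") wrote the result under /root/.ansible_async/<jid>, and the controller re-checks mode=status until the job reports finished, then mode=cleanup removes the file. A sketch of what such a poll reads, assuming the conventional layout of the async result file (a JSON document that gains "finished": 1 when the job completes); jid and directory are copied from the module arguments in the log:

    import json
    import time
    from pathlib import Path

    job_file = Path("/root/.ansible_async") / "j477830243345.51859"

    def poll(interval: float = 3.0, timeout: float = 300.0) -> dict:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            result = json.loads(job_file.read_text())
            if result.get("finished"):   # set by the wrapper when the job ends
                return result            # the full module result, rc included
            time.sleep(interval)
        raise TimeoutError(f"async job {job_file.name} did not finish")

    result = poll()
    print(result.get("rc"))
    # mode=cleanup simply removes the result file once it has been consumed:
    job_file.unlink()
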
Dec 05 00:55:40 compute-0 sudo[52579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazzpecfnjfhpgzgotltnooewcqzlttp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896140.6202617-322-266499731196227/AnsiballZ_stat.py'
Dec 05 00:55:40 compute-0 sudo[52579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:41 compute-0 python3.9[52581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:55:41 compute-0 sudo[52579]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:41 compute-0 sudo[52702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdulxhjovwrbusnugqivtihwgmwvjhuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896140.6202617-322-266499731196227/AnsiballZ_copy.py'
Dec 05 00:55:41 compute-0 sudo[52702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:41 compute-0 python3.9[52704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896140.6202617-322-266499731196227/.source.returncode _original_basename=.3_6mzl2u follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:41 compute-0 sudo[52702]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:42 compute-0 sudo[52854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzdtkabzrkcydwgnnlgouyvnoufcwfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896141.9363987-338-157019334410010/AnsiballZ_stat.py'
Dec 05 00:55:42 compute-0 sudo[52854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:42 compute-0 python3.9[52856]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:55:42 compute-0 sudo[52854]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:42 compute-0 sudo[52977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutrbeefoyajxduxmwvnqevigwyflshd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896141.9363987-338-157019334410010/AnsiballZ_copy.py'
Dec 05 00:55:42 compute-0 sudo[52977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:43 compute-0 python3.9[52979]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896141.9363987-338-157019334410010/.source.cfg _original_basename=.z1z3zln4 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:55:43 compute-0 sudo[52977]: pam_unix(sudo:session): session closed for user root
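
The copy to /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg stops cloud-init from rewriting the freshly applied network layout on the next boot. The file's payload is masked in the log (content=NOT_LOGGING_PARAMETER), but cloud-init's documented switch for this is a single stanza, so the deployed file presumably contains:

    network:
      config: disabled
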
Dec 05 00:55:43 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 00:55:43 compute-0 sudo[53132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvgbfaxzblmlqkqaezncfszrulupuzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896143.317179-353-223112227438317/AnsiballZ_systemd.py'
Dec 05 00:55:43 compute-0 sudo[53132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:55:43 compute-0 python3.9[53134]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:55:44 compute-0 systemd[1]: Reloading Network Manager...
Dec 05 00:55:44 compute-0 NetworkManager[49092]: <info>  [1764896144.0313] audit: op="reload" arg="0" pid=53138 uid=0 result="success"
Dec 05 00:55:44 compute-0 NetworkManager[49092]: <info>  [1764896144.0322] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 05 00:55:44 compute-0 systemd[1]: Reloaded Network Manager.
Dec 05 00:55:44 compute-0 sudo[53132]: pam_unix(sudo:session): session closed for user root
Dec 05 00:55:44 compute-0 sshd-session[45092]: Connection closed by 192.168.122.30 port 39340
Dec 05 00:55:44 compute-0 sshd-session[45089]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:55:44 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 05 00:55:44 compute-0 systemd[1]: session-10.scope: Consumed 49.725s CPU time.
Dec 05 00:55:44 compute-0 systemd-logind[792]: Session 10 logged out. Waiting for processes to exit.
Dec 05 00:55:44 compute-0 systemd-logind[792]: Removed session 10.
Dec 05 00:55:49 compute-0 sshd-session[53169]: Accepted publickey for zuul from 192.168.122.30 port 57646 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:55:49 compute-0 systemd-logind[792]: New session 11 of user zuul.
Dec 05 00:55:49 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 05 00:55:49 compute-0 sshd-session[53169]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:55:50 compute-0 python3.9[53322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:55:51 compute-0 python3.9[53476]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:55:52 compute-0 python3.9[53669]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:55:53 compute-0 sshd-session[53172]: Connection closed by 192.168.122.30 port 57646
Dec 05 00:55:53 compute-0 sshd-session[53169]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:55:53 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 05 00:55:53 compute-0 systemd[1]: session-11.scope: Consumed 2.352s CPU time.
Dec 05 00:55:53 compute-0 systemd-logind[792]: Session 11 logged out. Waiting for processes to exit.
Dec 05 00:55:53 compute-0 systemd-logind[792]: Removed session 11.
Dec 05 00:55:54 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 05 00:55:58 compute-0 sshd-session[53698]: Accepted publickey for zuul from 192.168.122.30 port 48900 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:55:58 compute-0 systemd-logind[792]: New session 12 of user zuul.
Dec 05 00:55:58 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 05 00:55:58 compute-0 sshd-session[53698]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:55:59 compute-0 python3.9[53852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:56:00 compute-0 python3.9[54006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:56:01 compute-0 sudo[54160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shrkvbbljiwxjuylvuctojiurgewiijo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896160.7178068-40-153940032828730/AnsiballZ_setup.py'
Dec 05 00:56:01 compute-0 sudo[54160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:01 compute-0 python3.9[54162]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:56:01 compute-0 sudo[54160]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:01 compute-0 sudo[54244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-morjhrmsivtteqmvieyksoopxoamtrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896160.7178068-40-153940032828730/AnsiballZ_dnf.py'
Dec 05 00:56:01 compute-0 sudo[54244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:02 compute-0 python3.9[54246]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:56:03 compute-0 sudo[54244]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:03 compute-0 sudo[54398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaeaozqsptxlajebarptmkyqsojhdihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896163.5556216-52-89701100791071/AnsiballZ_setup.py'
Dec 05 00:56:03 compute-0 sudo[54398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:04 compute-0 python3.9[54400]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:56:04 compute-0 sudo[54398]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:05 compute-0 sudo[54593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiycbmsgehdrwxtjfoyhknqbpaqfivww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896164.7711174-63-103583591904502/AnsiballZ_file.py'
Dec 05 00:56:05 compute-0 sudo[54593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:05 compute-0 python3.9[54595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:05 compute-0 sudo[54593]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:06 compute-0 sudo[54746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvikmmpznuefrvdwcmsktetsconxzwsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896165.5960846-71-89222953959317/AnsiballZ_command.py'
Dec 05 00:56:06 compute-0 sudo[54746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:06 compute-0 python3.9[54748]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:56:06 compute-0 podman[54749]: 2025-12-05 00:56:06.274644418 +0000 UTC m=+0.043926615 system refresh
Dec 05 00:56:06 compute-0 sudo[54746]: pam_unix(sudo:session): session closed for user root
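
The "system refresh" event shows this was podman's first invocation after the networks directory was created; `podman network inspect podman` dumps the default network definition as JSON, and the play then renders its own podman.json into /etc/containers/networks/ to pin that definition. A sketch of reading the inspect output, assuming podman 4.x with netavark, where inspect prints a JSON array of network objects:

    import json
    import subprocess

    out = subprocess.run(["podman", "network", "inspect", "podman"],
                         check=True, capture_output=True, text=True).stdout

    net = json.loads(out)[0]           # inspect returns a list of networks
    print(net["name"], net["driver"])  # the default network, bridge driver
    for subnet in net.get("subnets", []):
        print(subnet["subnet"], subnet.get("gateway"))
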
Dec 05 00:56:07 compute-0 sudo[54909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzznwhccetsdrknukcogugjxwxpeeey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896166.6642826-79-214810692193153/AnsiballZ_stat.py'
Dec 05 00:56:07 compute-0 sudo[54909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 00:56:07 compute-0 python3.9[54911]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:07 compute-0 sudo[54909]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:07 compute-0 sudo[55032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofsiszipxzbmaohsnilivtvahczmhtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896166.6642826-79-214810692193153/AnsiballZ_copy.py'
Dec 05 00:56:07 compute-0 sudo[55032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:08 compute-0 python3.9[55034]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896166.6642826-79-214810692193153/.source.json follow=False _original_basename=podman_network_config.j2 checksum=addbedc07cb79f12a131f0cddb3b2f6a3889c601 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:08 compute-0 sudo[55032]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:08 compute-0 sudo[55184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvivzkvdgivncnyssdkneugbqjsvpcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896168.276847-94-26263655149675/AnsiballZ_stat.py'
Dec 05 00:56:08 compute-0 sudo[55184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:08 compute-0 python3.9[55186]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:08 compute-0 sudo[55184]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:09 compute-0 sudo[55307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehwgwmmcyggobrikdnotksmjpzigubnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896168.276847-94-26263655149675/AnsiballZ_copy.py'
Dec 05 00:56:09 compute-0 sudo[55307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:09 compute-0 python3.9[55309]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896168.276847-94-26263655149675/.source.conf follow=False _original_basename=registries.conf.j2 checksum=086f9dda0e1e7ae15c548d702b012e23e7cc73fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:56:09 compute-0 sudo[55307]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:10 compute-0 sudo[55459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvvvtznawnyyesicbwvznxzrxtuyerw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896169.7122009-110-213751347038215/AnsiballZ_ini_file.py'
Dec 05 00:56:10 compute-0 sudo[55459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:10 compute-0 python3.9[55461]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:56:10 compute-0 sudo[55459]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:10 compute-0 sudo[55611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gabtahzsqehxjswgcuikatuawvzocjgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896170.59298-110-35245448338043/AnsiballZ_ini_file.py'
Dec 05 00:56:10 compute-0 sudo[55611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:11 compute-0 python3.9[55613]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:56:11 compute-0 sudo[55611]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:11 compute-0 sudo[55763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnhvvfbtrmsgpcenzzfvbapjuzqzyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896171.3381975-110-159127260346940/AnsiballZ_ini_file.py'
Dec 05 00:56:11 compute-0 sudo[55763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:11 compute-0 python3.9[55765]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:56:11 compute-0 sudo[55763]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:12 compute-0 sudo[55915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkypofmwpnfgcrwjgzmtccqfxnsdjotb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896172.0317445-110-98254554409931/AnsiballZ_ini_file.py'
Dec 05 00:56:12 compute-0 sudo[55915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:12 compute-0 python3.9[55917]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:56:12 compute-0 sudo[55915]: pam_unix(sudo:session): session closed for user root
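
The four ini_file tasks converge /etc/containers/containers.conf onto the deployment's container-engine defaults: a pids limit, journald event logging, the crun runtime, and the netavark network backend. The values in the log already carry their quoting, so the edits produce valid TOML; assuming the file started empty, the net result would be:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
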
Dec 05 00:56:13 compute-0 sudo[56067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibbcynotuamhqzmuksoaqorowddncmib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896172.935985-141-53146891854671/AnsiballZ_dnf.py'
Dec 05 00:56:13 compute-0 sudo[56067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:13 compute-0 python3.9[56069]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:56:14 compute-0 sudo[56067]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:15 compute-0 sudo[56220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buxjsveecvaothnxcwnsyduhqbhhkubd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896175.2619565-152-145817676990358/AnsiballZ_setup.py'
Dec 05 00:56:15 compute-0 sudo[56220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:15 compute-0 python3.9[56222]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:56:15 compute-0 sudo[56220]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:16 compute-0 sudo[56374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watgyofzdqniotgbvdkrtgkwitblukkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896176.197983-160-76571495050775/AnsiballZ_stat.py'
Dec 05 00:56:16 compute-0 sudo[56374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:16 compute-0 python3.9[56376]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:56:16 compute-0 sudo[56374]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:17 compute-0 sudo[56526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmatrusqtbmzazkzrhsqlnmhqkeemqpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896177.104516-169-270858583954412/AnsiballZ_stat.py'
Dec 05 00:56:17 compute-0 sudo[56526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:17 compute-0 python3.9[56528]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:56:17 compute-0 sudo[56526]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:18 compute-0 sudo[56678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lstfrwlmvfqrwyylbnuskkcowgvrjmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896178.0359864-179-245659360258562/AnsiballZ_command.py'
Dec 05 00:56:18 compute-0 sudo[56678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:18 compute-0 python3.9[56680]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:56:18 compute-0 sudo[56678]: pam_unix(sudo:session): session closed for user root
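
`systemctl is-system-running` reports the overall manager state in one word and exits non-zero for anything other than "running", which makes it a cheap readiness gate before configuration starts. A sketch of reading it from Python:

    import subprocess

    # Typical outputs: running, degraded, starting, maintenance.
    result = subprocess.run(["systemctl", "is-system-running"],
                            capture_output=True, text=True)
    state = result.stdout.strip()
    print(state, "(exit", result.returncode, ")")
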
Dec 05 00:56:19 compute-0 sudo[56831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipvvjkiajntdfkcmgpgcdpcshwxwmeyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896178.879165-189-274066058709876/AnsiballZ_service_facts.py'
Dec 05 00:56:19 compute-0 sudo[56831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:19 compute-0 python3.9[56833]: ansible-service_facts Invoked
Dec 05 00:56:19 compute-0 network[56850]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 00:56:19 compute-0 network[56851]: 'network-scripts' will be removed from distribution in near future.
Dec 05 00:56:19 compute-0 network[56852]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 00:56:24 compute-0 sudo[56831]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:25 compute-0 sudo[57135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqppndpcgvqghcdafqqhtxvxmktodssy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764896184.7744353-204-264884806649082/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764896184.7744353-204-264884806649082/args'
Dec 05 00:56:25 compute-0 sudo[57135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:25 compute-0 sudo[57135]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:26 compute-0 sudo[57302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxjsdcxsvhqzxgzfnujwlsoynqnpnfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896185.6780446-215-65940321599020/AnsiballZ_dnf.py'
Dec 05 00:56:26 compute-0 sudo[57302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:26 compute-0 python3.9[57304]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:56:27 compute-0 sudo[57302]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:28 compute-0 sudo[57455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzegbtgugzpaqdnrlmvldwxmvioavimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896187.776583-228-97780627280806/AnsiballZ_package_facts.py'
Dec 05 00:56:28 compute-0 sudo[57455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:28 compute-0 python3.9[57457]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 05 00:56:28 compute-0 sudo[57455]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:29 compute-0 sudo[57607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjtmcibqjwmkntmcynagtetqsmzegam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896189.4744866-238-52317952355442/AnsiballZ_stat.py'
Dec 05 00:56:29 compute-0 sudo[57607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:30 compute-0 python3.9[57609]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:30 compute-0 sudo[57607]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:30 compute-0 sudo[57732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivrlosjbhvxyptbstqcfmhtuqoehglf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896189.4744866-238-52317952355442/AnsiballZ_copy.py'
Dec 05 00:56:30 compute-0 sudo[57732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:30 compute-0 python3.9[57734]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896189.4744866-238-52317952355442/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:30 compute-0 sudo[57732]: pam_unix(sudo:session): session closed for user root
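
ansible.legacy.copy only rewrites /etc/chrony.conf when the SHA-1 of the rendered chrony.conf.j2 (cfb003e5... in the log) differs from the file on disk, and backup=True keeps the previous version. A hedged sketch of that compare-then-replace pattern (the backup naming here is illustrative, not Ansible's exact scheme):

    import hashlib
    import os
    import shutil
    import time

    def copy_if_changed(src: str, dest: str) -> bool:
        """Replace dest with src only when contents differ; keep a backup."""
        def sha1(path: str) -> str:
            with open(path, "rb") as f:
                return hashlib.sha1(f.read()).hexdigest()
        if os.path.exists(dest) and sha1(src) == sha1(dest):
            return False  # checksums match: nothing to do
        if os.path.exists(dest):
            shutil.copy2(dest, dest + time.strftime(".%Y%m%d%H%M%S.bak"))
        shutil.copy2(src, dest)
        return True
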
Dec 05 00:56:31 compute-0 sudo[57886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egioayhoaudecusahnkivagjrauwmwtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896191.0552998-253-112330262379561/AnsiballZ_stat.py'
Dec 05 00:56:31 compute-0 sudo[57886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:31 compute-0 python3.9[57888]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:31 compute-0 sudo[57886]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:31 compute-0 sudo[58011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqrmmhblkqpzjjjimwdnxrbfunipblj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896191.0552998-253-112330262379561/AnsiballZ_copy.py'
Dec 05 00:56:31 compute-0 sudo[58011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:32 compute-0 python3.9[58013]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896191.0552998-253-112330262379561/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:32 compute-0 sudo[58011]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:33 compute-0 sudo[58165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhsqvrdorsgnfxjmuqsfluuihqpwfedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896192.762052-274-30908638678776/AnsiballZ_lineinfile.py'
Dec 05 00:56:33 compute-0 sudo[58165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:33 compute-0 python3.9[58167]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:33 compute-0 sudo[58165]: pam_unix(sudo:session): session closed for user root
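
The lineinfile task pins PEERNTP=no in /etc/sysconfig/network so DHCP-provided NTP servers cannot compete with the chrony.conf just installed. Its regexp/line semantics (replace the first match, else append) look roughly like this sketch:

    import re

    def line_in_file(path: str, regexp: str, line: str) -> None:
        """Replace the first line matching `regexp`, else append `line`."""
        pat = re.compile(regexp)
        try:
            with open(path) as f:
                lines = f.read().splitlines()
        except FileNotFoundError:
            lines = []  # create=True behaviour
        for i, old in enumerate(lines):
            if pat.search(old):
                lines[i] = line
                break
        else:
            lines.append(line)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    line_in_file("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")
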
Dec 05 00:56:34 compute-0 sudo[58319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owdkdsdlugynhfjrwocwgmsvfmagyppb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896194.1142292-289-262135929980520/AnsiballZ_setup.py'
Dec 05 00:56:34 compute-0 sudo[58319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:34 compute-0 python3.9[58321]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:56:34 compute-0 sudo[58319]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:35 compute-0 sudo[58403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hucpbdurbpnnrytdqylufmsybbageiis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896194.1142292-289-262135929980520/AnsiballZ_systemd.py'
Dec 05 00:56:35 compute-0 sudo[58403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:35 compute-0 python3.9[58405]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:56:35 compute-0 sudo[58403]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:36 compute-0 sudo[58557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xehmgkzeiecknhvmzqczqjmyceddfkht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896196.3429422-305-249661220077914/AnsiballZ_setup.py'
Dec 05 00:56:36 compute-0 sudo[58557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:36 compute-0 python3.9[58559]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:56:37 compute-0 sudo[58557]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:37 compute-0 sudo[58641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyoygcazeltftltjchxxsdmrwdwfbxqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896196.3429422-305-249661220077914/AnsiballZ_systemd.py'
Dec 05 00:56:37 compute-0 sudo[58641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:37 compute-0 python3.9[58643]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:56:37 compute-0 chronyd[795]: chronyd exiting
Dec 05 00:56:37 compute-0 systemd[1]: Stopping NTP client/server...
Dec 05 00:56:37 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 05 00:56:37 compute-0 systemd[1]: Stopped NTP client/server.
Dec 05 00:56:37 compute-0 systemd[1]: Starting NTP client/server...
Dec 05 00:56:37 compute-0 chronyd[58652]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 05 00:56:37 compute-0 chronyd[58652]: Frequency -28.334 +/- 0.265 ppm read from /var/lib/chrony/drift
Dec 05 00:56:37 compute-0 chronyd[58652]: Loaded seccomp filter (level 2)
Dec 05 00:56:37 compute-0 systemd[1]: Started NTP client/server.
Dec 05 00:56:37 compute-0 sudo[58641]: pam_unix(sudo:session): session closed for user root
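
The pair of systemd tasks first ensures chronyd is enabled and running, then restarts it so the new configuration takes effect; the journal confirms the old daemon (PID 795) exiting and chronyd 4.8 starting under PID 58652 with the saved drift reloaded. The command-line equivalent, sketched in Python:

    import subprocess

    def systemctl(*args: str) -> None:
        subprocess.run(["systemctl", *args], check=True)

    systemctl("enable", "chronyd")   # enabled=True
    systemctl("start", "chronyd")    # state=started (no-op if already up)
    systemctl("restart", "chronyd")  # state=restarted: pick up new config
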
Dec 05 00:56:38 compute-0 sshd-session[53701]: Connection closed by 192.168.122.30 port 48900
Dec 05 00:56:38 compute-0 sshd-session[53698]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:56:38 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 05 00:56:38 compute-0 systemd[1]: session-12.scope: Consumed 26.737s CPU time.
Dec 05 00:56:38 compute-0 systemd-logind[792]: Session 12 logged out. Waiting for processes to exit.
Dec 05 00:56:38 compute-0 systemd-logind[792]: Removed session 12.
Dec 05 00:56:44 compute-0 sshd-session[58678]: Accepted publickey for zuul from 192.168.122.30 port 35054 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:56:44 compute-0 systemd-logind[792]: New session 13 of user zuul.
Dec 05 00:56:44 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 05 00:56:44 compute-0 sshd-session[58678]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:56:45 compute-0 sudo[58831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqugyhimlmlgucajefwhydvwdvuqcgnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896205.0377624-22-90189452848258/AnsiballZ_file.py'
Dec 05 00:56:45 compute-0 sudo[58831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:45 compute-0 python3.9[58833]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:45 compute-0 sudo[58831]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:46 compute-0 sudo[58983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itxjmotfzdwawexsulopcjjilvqcosbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896206.0332465-34-162815491009642/AnsiballZ_stat.py'
Dec 05 00:56:46 compute-0 sudo[58983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:46 compute-0 python3.9[58985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:46 compute-0 sudo[58983]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:47 compute-0 sudo[59106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlvhunssrdddvifrbypijhiufmtauhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896206.0332465-34-162815491009642/AnsiballZ_copy.py'
Dec 05 00:56:47 compute-0 sudo[59106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:47 compute-0 python3.9[59108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896206.0332465-34-162815491009642/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:47 compute-0 sudo[59106]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:47 compute-0 sshd-session[58681]: Connection closed by 192.168.122.30 port 35054
Dec 05 00:56:47 compute-0 sshd-session[58678]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:56:47 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 05 00:56:47 compute-0 systemd[1]: session-13.scope: Consumed 1.766s CPU time.
Dec 05 00:56:47 compute-0 systemd-logind[792]: Session 13 logged out. Waiting for processes to exit.
Dec 05 00:56:47 compute-0 systemd-logind[792]: Removed session 13.
Dec 05 00:56:54 compute-0 sshd-session[59133]: Accepted publickey for zuul from 192.168.122.30 port 40560 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:56:54 compute-0 systemd-logind[792]: New session 14 of user zuul.
Dec 05 00:56:54 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 05 00:56:54 compute-0 sshd-session[59133]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:56:55 compute-0 python3.9[59286]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:56:56 compute-0 sudo[59440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkixmimspnjyrbdhlymcjnvknzxcacjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896216.0983846-33-42663524439704/AnsiballZ_file.py'
Dec 05 00:56:56 compute-0 sudo[59440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:56 compute-0 python3.9[59442]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:56 compute-0 sudo[59440]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:57 compute-0 sudo[59615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrcnjrrjizzijeegvddgeiucbooazbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896216.9951377-41-20958911228959/AnsiballZ_stat.py'
Dec 05 00:56:57 compute-0 sudo[59615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:57 compute-0 python3.9[59617]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:57 compute-0 sudo[59615]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:58 compute-0 sudo[59738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omesplemskafugmdagyiwpcheueejxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896216.9951377-41-20958911228959/AnsiballZ_copy.py'
Dec 05 00:56:58 compute-0 sudo[59738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:58 compute-0 python3.9[59740]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764896216.9951377-41-20958911228959/.source.json _original_basename=.7pupz0s5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:58 compute-0 sudo[59738]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:59 compute-0 sudo[59890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfyffwyltvbnrvshumqvhfebkslqfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896218.910285-64-33352548782895/AnsiballZ_stat.py'
Dec 05 00:56:59 compute-0 sudo[59890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:59 compute-0 python3.9[59892]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:56:59 compute-0 sudo[59890]: pam_unix(sudo:session): session closed for user root
Dec 05 00:56:59 compute-0 sudo[60013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puncwzbdecwokqvjcucgowxzqzbjewju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896218.910285-64-33352548782895/AnsiballZ_copy.py'
Dec 05 00:56:59 compute-0 sudo[60013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:56:59 compute-0 python3.9[60015]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896218.910285-64-33352548782895/.source _original_basename=.wfrgppzh follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:56:59 compute-0 sudo[60013]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:00 compute-0 sudo[60165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyqvyxycxpizyukqnqhcbbjpefvroggt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896220.2059164-80-177968824525064/AnsiballZ_file.py'
Dec 05 00:57:00 compute-0 sudo[60165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:00 compute-0 python3.9[60167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:57:00 compute-0 sudo[60165]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:01 compute-0 sudo[60317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhuudbgufrylykgvojwmnbqnxclvdml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896220.9629488-88-214074288358219/AnsiballZ_stat.py'
Dec 05 00:57:01 compute-0 sudo[60317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:01 compute-0 python3.9[60319]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:01 compute-0 sudo[60317]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:01 compute-0 sudo[60440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsvwvcewnhzhoxvxntgnssgoqngtvtdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896220.9629488-88-214074288358219/AnsiballZ_copy.py'
Dec 05 00:57:01 compute-0 sudo[60440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:02 compute-0 python3.9[60442]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896220.9629488-88-214074288358219/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:57:02 compute-0 sudo[60440]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:02 compute-0 sudo[60592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aledvbvyiwxsoggwwlvxjtypvfloyrja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896222.2321987-88-180251409667733/AnsiballZ_stat.py'
Dec 05 00:57:02 compute-0 sudo[60592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:02 compute-0 python3.9[60594]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:02 compute-0 sudo[60592]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:03 compute-0 sudo[60715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxfmcgzjfxlxgrcdocxghseqjwqzsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896222.2321987-88-180251409667733/AnsiballZ_copy.py'
Dec 05 00:57:03 compute-0 sudo[60715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:03 compute-0 python3.9[60717]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896222.2321987-88-180251409667733/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:57:03 compute-0 sudo[60715]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:04 compute-0 sudo[60867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nadlumxwbqndhfstiollcslctafvsebl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896223.729539-117-34664069318314/AnsiballZ_file.py'
Dec 05 00:57:04 compute-0 sudo[60867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:04 compute-0 python3.9[60869]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:04 compute-0 sudo[60867]: pam_unix(sudo:session): session closed for user root
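
mode=420 in the file task above is not corruption: the playbook passed an unquoted 0644, which YAML reads as octal and hands to Ansible as the decimal integer 420. The resulting permissions are still rw-r--r--; quoting the mode ('0644') avoids the ambiguity:

    # Decimal 420 is exactly octal 0644, so the directory permissions are correct.
    assert 0o644 == 420
    print(oct(420))  # -> 0o644
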
Dec 05 00:57:04 compute-0 sudo[61019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iflfqxxjjatdxrmwwzzmicbwopatignf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896224.4686036-125-72600650522989/AnsiballZ_stat.py'
Dec 05 00:57:04 compute-0 sudo[61019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:05 compute-0 python3.9[61021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:05 compute-0 sudo[61019]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:05 compute-0 sudo[61142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlcdwmibjggdvzgyglsbpqkmcrkkpqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896224.4686036-125-72600650522989/AnsiballZ_copy.py'
Dec 05 00:57:05 compute-0 sudo[61142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:05 compute-0 python3.9[61144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896224.4686036-125-72600650522989/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:05 compute-0 sudo[61142]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:06 compute-0 sudo[61294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptydhvfxzotqooyhmbalvivasswmnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896225.9637728-140-115344248334404/AnsiballZ_stat.py'
Dec 05 00:57:06 compute-0 sudo[61294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:06 compute-0 python3.9[61296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:06 compute-0 sudo[61294]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:07 compute-0 sudo[61417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqjgjgsgdkebhlyfcrpgyqqlxvygpafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896225.9637728-140-115344248334404/AnsiballZ_copy.py'
Dec 05 00:57:07 compute-0 sudo[61417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:07 compute-0 python3.9[61419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896225.9637728-140-115344248334404/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:07 compute-0 sudo[61417]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:08 compute-0 sudo[61569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-befripixmqgyzmicudjfcllyuwdcabcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896227.3997362-155-116724521041633/AnsiballZ_systemd.py'
Dec 05 00:57:08 compute-0 sudo[61569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:08 compute-0 python3.9[61571]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:57:08 compute-0 systemd[1]: Reloading.
Dec 05 00:57:08 compute-0 systemd-rc-local-generator[61595]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:08 compute-0 systemd-sysv-generator[61601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:08 compute-0 systemd[1]: Reloading.
Dec 05 00:57:08 compute-0 systemd-rc-local-generator[61629]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:08 compute-0 systemd-sysv-generator[61634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:08 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 05 00:57:08 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 05 00:57:09 compute-0 sudo[61569]: pam_unix(sudo:session): session closed for user root
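
The service task runs with daemon_reload=True, which explains the two back-to-back "Reloading." passes in the journal: one for the explicit daemon-reload, and one triggered when enabling the unit rewrites its symlinks (its 91-edpm-container-shutdown.preset was installed just above). The equivalent shell steps, sketched in Python:

    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)  # pick up new unit files
    # enable + start in one step; preset files decide default enablement policy
    subprocess.run(["systemctl", "enable", "--now", "edpm-container-shutdown"],
                   check=True)
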
Dec 05 00:57:09 compute-0 sudo[61795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeamkjgcjpzlqbubdxgnnueyswigayiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896229.2022293-163-247688409814825/AnsiballZ_stat.py'
Dec 05 00:57:09 compute-0 sudo[61795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:09 compute-0 python3.9[61797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:09 compute-0 sudo[61795]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:10 compute-0 sudo[61918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glbqbgtpcnqabkbcfjpubedlxdyqzhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896229.2022293-163-247688409814825/AnsiballZ_copy.py'
Dec 05 00:57:10 compute-0 sudo[61918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:10 compute-0 python3.9[61920]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896229.2022293-163-247688409814825/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:10 compute-0 sudo[61918]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:11 compute-0 sudo[62070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvugzslaccjvoxuuqiexbmyyclofwchl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896230.6928594-178-23941689345884/AnsiballZ_stat.py'
Dec 05 00:57:11 compute-0 sudo[62070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:11 compute-0 python3.9[62072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:11 compute-0 sudo[62070]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:11 compute-0 sudo[62193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmsjlwefhvhlmpptfvlrjowkhfycyuuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896230.6928594-178-23941689345884/AnsiballZ_copy.py'
Dec 05 00:57:11 compute-0 sudo[62193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:11 compute-0 python3.9[62195]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896230.6928594-178-23941689345884/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:11 compute-0 sudo[62193]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:12 compute-0 sudo[62345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teivaibxjtrncxggtfrqrvaergoshlxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896232.1671236-193-38714195102902/AnsiballZ_systemd.py'
Dec 05 00:57:12 compute-0 sudo[62345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:12 compute-0 python3.9[62347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:57:12 compute-0 systemd[1]: Reloading.
Dec 05 00:57:12 compute-0 systemd-sysv-generator[62379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:12 compute-0 systemd-rc-local-generator[62375]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:13 compute-0 systemd[1]: Reloading.
Dec 05 00:57:13 compute-0 systemd-rc-local-generator[62414]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:13 compute-0 systemd-sysv-generator[62418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:13 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 00:57:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 00:57:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 00:57:13 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 00:57:13 compute-0 sudo[62345]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:14 compute-0 python3.9[62575]: ansible-ansible.builtin.service_facts Invoked
Dec 05 00:57:14 compute-0 network[62592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 00:57:14 compute-0 network[62593]: 'network-scripts' will be removed from distribution in near future.
Dec 05 00:57:14 compute-0 network[62594]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 00:57:19 compute-0 sudo[62854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmwmksfddbwbgaxcqmwgcsyldvusoca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896238.6129968-209-26762255371598/AnsiballZ_systemd.py'
Dec 05 00:57:19 compute-0 sudo[62854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:19 compute-0 python3.9[62856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:57:19 compute-0 systemd[1]: Reloading.
Dec 05 00:57:19 compute-0 systemd-rc-local-generator[62879]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:19 compute-0 systemd-sysv-generator[62884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:19 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 05 00:57:19 compute-0 iptables.init[62895]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 05 00:57:20 compute-0 iptables.init[62895]: iptables: Flushing firewall rules: [  OK  ]
Dec 05 00:57:20 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 05 00:57:20 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 05 00:57:20 compute-0 sudo[62854]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:20 compute-0 sudo[63089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piqlyahmhjestkyphzkhbgnwifmycflk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896240.3346748-209-187043937057303/AnsiballZ_systemd.py'
Dec 05 00:57:20 compute-0 sudo[63089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:21 compute-0 python3.9[63091]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:57:21 compute-0 sudo[63089]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:21 compute-0 sudo[63243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjpjwuosbvzywncjaewdpnayxlpqvlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896241.3944101-225-76126328983360/AnsiballZ_systemd.py'
Dec 05 00:57:21 compute-0 sudo[63243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:21 compute-0 python3.9[63245]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 00:57:22 compute-0 systemd[1]: Reloading.
Dec 05 00:57:22 compute-0 systemd-rc-local-generator[63266]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 00:57:22 compute-0 systemd-sysv-generator[63274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 00:57:22 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 05 00:57:22 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 05 00:57:22 compute-0 sudo[63243]: pam_unix(sudo:session): session closed for user root
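
This block swaps the firewall backend: iptables.service and ip6tables.service are stopped and disabled (the initscript resets every chain policy to ACCEPT and flushes the rules), then nftables is enabled and started. Note the host is effectively unfiltered between this flush and the nft ruleset load further below. A sketch of the same switch:

    import subprocess

    for unit in ("iptables.service", "ip6tables.service"):
        # check=False: fine if the legacy unit is already inactive
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
    subprocess.run(["systemctl", "enable", "--now", "nftables"], check=True)
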
Dec 05 00:57:23 compute-0 sudo[63434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lswscufhmnwkbdsfvprpnctvspujjnku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896242.579556-233-192521122147196/AnsiballZ_command.py'
Dec 05 00:57:23 compute-0 sudo[63434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:23 compute-0 python3.9[63436]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:57:23 compute-0 sudo[63434]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:24 compute-0 sudo[63587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwawarmkceayouuvabiuxrzqkilflgdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896243.9620895-247-200501691068586/AnsiballZ_stat.py'
Dec 05 00:57:24 compute-0 sudo[63587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:24 compute-0 python3.9[63589]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:24 compute-0 sudo[63587]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:24 compute-0 sudo[63712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgdpvxhzazjwjkxgpyfmahoottsafvcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896243.9620895-247-200501691068586/AnsiballZ_copy.py'
Dec 05 00:57:24 compute-0 sudo[63712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:25 compute-0 python3.9[63714]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896243.9620895-247-200501691068586/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:25 compute-0 sudo[63712]: pam_unix(sudo:session): session closed for user root
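
The copy task for /etc/ssh/sshd_config passes validate=/usr/sbin/sshd -T -f %s, so the rendered file is parsed by sshd in test mode at a temporary path before it replaces the live config; a broken template can never take the SSH daemon down. The pattern in miniature:

    import shutil
    import subprocess

    def install_validated_sshd_config(candidate: str,
                                      dest: str = "/etc/ssh/sshd_config") -> None:
        """Only move the candidate into place if `sshd -T` accepts it."""
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", candidate],
                       check=True, stdout=subprocess.DEVNULL)  # raises on syntax errors
        shutil.move(candidate, dest)
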
Dec 05 00:57:25 compute-0 sudo[63865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgmlizvcyygnpojtvwgyswgnymsolzqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896245.3462765-262-67720414804466/AnsiballZ_systemd.py'
Dec 05 00:57:25 compute-0 sudo[63865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:25 compute-0 python3.9[63867]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:57:26 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 05 00:57:26 compute-0 sshd[1009]: Received SIGHUP; restarting.
Dec 05 00:57:26 compute-0 sshd[1009]: Server listening on 0.0.0.0 port 22.
Dec 05 00:57:26 compute-0 sshd[1009]: Server listening on :: port 22.
Dec 05 00:57:26 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 05 00:57:26 compute-0 sudo[63865]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:26 compute-0 sudo[64021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnwqdmzcydwbacqgrltbywyhehqyaors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896246.2613788-270-181072129305254/AnsiballZ_file.py'
Dec 05 00:57:26 compute-0 sudo[64021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:26 compute-0 python3.9[64023]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:26 compute-0 sudo[64021]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:27 compute-0 sudo[64173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npofbfwgaxhlasgeavhmynnyvssweqpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896246.9736989-278-42443763789819/AnsiballZ_stat.py'
Dec 05 00:57:27 compute-0 sudo[64173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:27 compute-0 python3.9[64175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:27 compute-0 sudo[64173]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:27 compute-0 sudo[64296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpmytpepsmejkcrvcakoxcgypkrzaija ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896246.9736989-278-42443763789819/AnsiballZ_copy.py'
Dec 05 00:57:27 compute-0 sudo[64296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:27 compute-0 python3.9[64298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896246.9736989-278-42443763789819/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:27 compute-0 sudo[64296]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:28 compute-0 sudo[64448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apicfozvfkicnsagyxqtyswzhvlazmxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896248.3008835-296-273236028365157/AnsiballZ_timezone.py'
Dec 05 00:57:28 compute-0 sudo[64448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:28 compute-0 python3.9[64450]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 00:57:28 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 00:57:29 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 00:57:29 compute-0 sudo[64448]: pam_unix(sudo:session): session closed for user root
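
community.general.timezone talks to systemd-timedated over D-Bus, which is why the journal shows "Starting Time & Date Service..." on demand. The one-line equivalent:

    import subprocess

    # systemd-timedated is D-Bus activated; timedatectl also updates /etc/localtime.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
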
Dec 05 00:57:29 compute-0 sudo[64604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuydayjkizesvhcxbbsfgwclksnztwmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896249.4525983-305-133793582580501/AnsiballZ_file.py'
Dec 05 00:57:29 compute-0 sudo[64604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:29 compute-0 python3.9[64606]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:30 compute-0 sudo[64604]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:30 compute-0 sudo[64756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsjygpssuztwqmftgryiqsnzsmhqnyxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896250.3588116-313-95273102727955/AnsiballZ_stat.py'
Dec 05 00:57:30 compute-0 sudo[64756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:30 compute-0 python3.9[64758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:30 compute-0 sudo[64756]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:31 compute-0 sudo[64879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gccdctgjqtdldhqmkojynnrlftntbmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896250.3588116-313-95273102727955/AnsiballZ_copy.py'
Dec 05 00:57:31 compute-0 sudo[64879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:31 compute-0 python3.9[64881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896250.3588116-313-95273102727955/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:31 compute-0 sudo[64879]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:32 compute-0 sudo[65031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utniimwjtnyimmmllpnpxqzechjjjjxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896251.797346-328-232101191811679/AnsiballZ_stat.py'
Dec 05 00:57:32 compute-0 sudo[65031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:32 compute-0 python3.9[65033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:32 compute-0 sudo[65031]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:32 compute-0 sudo[65154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdekdpclnyxypynucgumcjerjlxudhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896251.797346-328-232101191811679/AnsiballZ_copy.py'
Dec 05 00:57:32 compute-0 sudo[65154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:33 compute-0 python3.9[65156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896251.797346-328-232101191811679/.source.yaml _original_basename=.n1fbqfp9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:33 compute-0 sudo[65154]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:33 compute-0 sudo[65306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arexzunzzdjoizvvpxgkejafpeteibxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896253.3044994-343-204927693497294/AnsiballZ_stat.py'
Dec 05 00:57:33 compute-0 sudo[65306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:33 compute-0 python3.9[65308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:33 compute-0 sudo[65306]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:34 compute-0 sudo[65429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szzxtaxpxmsejgouupxqlufbpxxzuhaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896253.3044994-343-204927693497294/AnsiballZ_copy.py'
Dec 05 00:57:34 compute-0 sudo[65429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:34 compute-0 python3.9[65431]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896253.3044994-343-204927693497294/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:34 compute-0 sudo[65429]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:35 compute-0 sudo[65581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igntancwtlxohuoipjrjeuplmdrpbzsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896254.9989958-358-79384413291402/AnsiballZ_command.py'
Dec 05 00:57:35 compute-0 sudo[65581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:35 compute-0 python3.9[65583]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:57:35 compute-0 sudo[65581]: pam_unix(sudo:session): session closed for user root
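
After the earlier `nft flush ruleset`, this command loads /etc/nftables/iptables.nft in one go; nft applies a -f file as a single atomic transaction, so the ruleset is never visible half-applied. The two steps together:

    import subprocess

    subprocess.run(["nft", "flush", "ruleset"], check=True)
    # `nft -f` applies the whole file as one atomic transaction.
    subprocess.run(["nft", "-f", "/etc/nftables/iptables.nft"], check=True)
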
Dec 05 00:57:36 compute-0 sudo[65734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxwnyuzcldfjkzwffnhbzdloowzwogt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896255.8210917-366-33845174293575/AnsiballZ_command.py'
Dec 05 00:57:36 compute-0 sudo[65734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:36 compute-0 python3.9[65736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:57:36 compute-0 sudo[65734]: pam_unix(sudo:session): session closed for user root
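
`nft -j list ruleset` dumps the active ruleset as libnftables JSON, a convenient form for tooling such as the edpm_nftables_from_files module invoked next (how that module uses the dump is not shown in this log). A sketch of reading it, assuming the standard top-level "nftables" array:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    ruleset = json.loads(out)["nftables"]  # metainfo/table/chain/rule objects
    tables = [e["table"]["name"] for e in ruleset if "table" in e]
    print(tables)
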
Dec 05 00:57:37 compute-0 sudo[65887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poladawabwphyskxmyfsjjghdbbkyqkm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896256.693974-374-159288493065205/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 00:57:37 compute-0 sudo[65887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:37 compute-0 python3[65889]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 00:57:37 compute-0 sudo[65887]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:38 compute-0 sudo[66039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijojrtoreplkldlmdmlulysnnenfevts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896257.63038-382-172527882188618/AnsiballZ_stat.py'
Dec 05 00:57:38 compute-0 sudo[66039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:38 compute-0 python3.9[66041]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:38 compute-0 sudo[66039]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:38 compute-0 sudo[66162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhkhenshflfuntmvbjymhblyivibhlqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896257.63038-382-172527882188618/AnsiballZ_copy.py'
Dec 05 00:57:38 compute-0 sudo[66162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:39 compute-0 python3.9[66164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896257.63038-382-172527882188618/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:39 compute-0 sudo[66162]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:39 compute-0 sudo[66314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acwbgtqxgbzcmevwnmritvwgovljytto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896259.3982067-397-183104450018689/AnsiballZ_stat.py'
Dec 05 00:57:39 compute-0 sudo[66314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:40 compute-0 python3.9[66316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:40 compute-0 sudo[66314]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:40 compute-0 sudo[66437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuyvrdpjeatchezbcnizhvxdsipamgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896259.3982067-397-183104450018689/AnsiballZ_copy.py'
Dec 05 00:57:40 compute-0 sudo[66437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:40 compute-0 python3.9[66439]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896259.3982067-397-183104450018689/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:40 compute-0 sudo[66437]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:41 compute-0 sudo[66589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrcwffujprhpfeuyrhjqnbmypyufayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896260.9380646-412-110636760058224/AnsiballZ_stat.py'
Dec 05 00:57:41 compute-0 sudo[66589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:41 compute-0 python3.9[66591]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:41 compute-0 sudo[66589]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:41 compute-0 sudo[66712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdxkizrirmbjlauwtcybsrjgdxeyrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896260.9380646-412-110636760058224/AnsiballZ_copy.py'
Dec 05 00:57:41 compute-0 sudo[66712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:42 compute-0 python3.9[66714]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896260.9380646-412-110636760058224/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:42 compute-0 sudo[66712]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:42 compute-0 sudo[66864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwkijsubztpeuazedysmfsiimrqwodpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896262.3461065-427-137532319720244/AnsiballZ_stat.py'
Dec 05 00:57:42 compute-0 sudo[66864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:42 compute-0 python3.9[66866]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:42 compute-0 sudo[66864]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:43 compute-0 sudo[66987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erkcnnmyxtyqhifewkdzqqvqthbvxnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896262.3461065-427-137532319720244/AnsiballZ_copy.py'
Dec 05 00:57:43 compute-0 sudo[66987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:43 compute-0 python3.9[66989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896262.3461065-427-137532319720244/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:43 compute-0 sudo[66987]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:44 compute-0 sudo[67139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygkztunhgunpbrwsxuxjzvffsamlrkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896263.7734313-442-26909970014840/AnsiballZ_stat.py'
Dec 05 00:57:44 compute-0 sudo[67139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:44 compute-0 python3.9[67141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:57:44 compute-0 sudo[67139]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:44 compute-0 sudo[67262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgbjbjpojkyoporkgrnzwweekjspuva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896263.7734313-442-26909970014840/AnsiballZ_copy.py'
Dec 05 00:57:44 compute-0 sudo[67262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:44 compute-0 python3.9[67264]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896263.7734313-442-26909970014840/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:45 compute-0 sudo[67262]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:45 compute-0 sudo[67414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhbtwnvtkvrqdlimeuxblhdtoyeyxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896265.232822-457-171186059861623/AnsiballZ_file.py'
Dec 05 00:57:45 compute-0 sudo[67414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:45 compute-0 python3.9[67416]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:45 compute-0 sudo[67414]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:46 compute-0 sudo[67566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzrvfruvieupqcuyuoylwpmupbomnafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896266.0063937-465-145463909199050/AnsiballZ_command.py'
Dec 05 00:57:46 compute-0 sudo[67566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:46 compute-0 python3.9[67568]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:57:46 compute-0 sudo[67566]: pam_unix(sudo:session): session closed for user root
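[editor's note] Before anything is activated, the play concatenates the five EDPM nft files and pipes them through `nft -c -f -`; the `-c` flag parses and checks the ruleset without committing it, so a syntax or reference error fails here rather than after a flush. A rough Python equivalent of that check step (file list copied from the command above; `check_ruleset` is my name):

```python
# Re-create "cat <files> | nft -c -f -": -c checks the combined
# ruleset without applying it.
import subprocess

EDPM_FILES = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

def check_ruleset(paths=EDPM_FILES):
    blob = "\n".join(open(p).read() for p in paths)
    # nft reads the concatenated ruleset from stdin; -c = check only
    return subprocess.run(["nft", "-c", "-f", "-"],
                          input=blob, text=True).returncode == 0
```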
Dec 05 00:57:47 compute-0 sudo[67725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrnlekkshttzjgprxzoqtytjxvrhxxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896266.887382-473-140020582570029/AnsiballZ_blockinfile.py'
Dec 05 00:57:47 compute-0 sudo[67725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:47 compute-0 python3.9[67727]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:47 compute-0 sudo[67725]: pam_unix(sudo:session): session closed for user root
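[editor's note] The blockinfile task then pins the include lines into /etc/sysconfig/nftables.conf between `# BEGIN/END ANSIBLE MANAGED BLOCK` markers, validating the result with `nft -c -f %s`, so nftables.service loads the EDPM files on boot. A minimal re-implementation of that marker-block idempotency (simplified: no validate step, whole-block replacement assumed acceptable):

```python
# Idempotent marker block, mimicking ansible.builtin.blockinfile:
# replace the text between the markers, or append it if absent.
import re

MARK_B = "# BEGIN ANSIBLE MANAGED BLOCK"
MARK_E = "# END ANSIBLE MANAGED BLOCK"
BLOCK = '''include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"'''

def upsert_block(path="/etc/sysconfig/nftables.conf"):
    text = open(path).read()
    managed = f"{MARK_B}\n{BLOCK}\n{MARK_E}"
    pattern = re.compile(re.escape(MARK_B) + r".*?" + re.escape(MARK_E), re.S)
    if pattern.search(text):
        new = pattern.sub(managed, text)
    else:
        new = text.rstrip("\n") + "\n" + managed + "\n"
    open(path, "w").write(new)
```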
Dec 05 00:57:48 compute-0 sudo[67878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naluorhhqydcplwjqkqlwbusgouldeun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896267.8424451-482-132530427579048/AnsiballZ_file.py'
Dec 05 00:57:48 compute-0 sudo[67878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:48 compute-0 python3.9[67880]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:48 compute-0 sudo[67878]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:49 compute-0 sudo[68030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmqzpqljyfeyweepvfpngrxiawiefdai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896268.621061-482-12968328182327/AnsiballZ_file.py'
Dec 05 00:57:49 compute-0 sudo[68030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:49 compute-0 python3.9[68032]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:57:49 compute-0 sudo[68030]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:50 compute-0 sudo[68182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawvdcsrsanstwiaqkdtpotfqumyczge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896269.5045905-497-258649658107759/AnsiballZ_mount.py'
Dec 05 00:57:50 compute-0 sudo[68182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:50 compute-0 python3.9[68184]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 00:57:50 compute-0 sudo[68182]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:50 compute-0 sudo[68335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwrcxcretbsrbotrmtntamrccpruldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896270.554664-497-73300853927215/AnsiballZ_mount.py'
Dec 05 00:57:50 compute-0 sudo[68335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:51 compute-0 python3.9[68337]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 00:57:51 compute-0 sudo[68335]: pam_unix(sudo:session): session closed for user root
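[editor's note] The two ansible.posix.mount tasks create persistent hugetlbfs mounts, one per page size; `state=mounted` both mounts the filesystem now and records it in /etc/fstab. A condensed equivalent of what the module does here (fstab editing reduced to a naive append for brevity):

```python
# Mount hugetlbfs at a per-pagesize mountpoint and persist it in fstab,
# approximating ansible.posix.mount with state=mounted.
import os
import subprocess

def mount_hugepages(path, pagesize):
    os.makedirs(path, exist_ok=True)
    subprocess.run(
        ["mount", "-t", "hugetlbfs", "-o", f"pagesize={pagesize}", "none", path],
        check=True,
    )
    line = f"none {path} hugetlbfs pagesize={pagesize} 0 0\n"
    with open("/etc/fstab") as f:
        present = line in f.readlines()
    if not present:
        with open("/etc/fstab", "a") as f:
            f.write(line)

# Matches the two mounts in the log:
# mount_hugepages("/dev/hugepages1G", "1G")
# mount_hugepages("/dev/hugepages2M", "2M")
```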
Dec 05 00:57:51 compute-0 sshd-session[59136]: Connection closed by 192.168.122.30 port 40560
Dec 05 00:57:51 compute-0 sshd-session[59133]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:57:51 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 05 00:57:51 compute-0 systemd[1]: session-14.scope: Consumed 40.592s CPU time.
Dec 05 00:57:51 compute-0 systemd-logind[792]: Session 14 logged out. Waiting for processes to exit.
Dec 05 00:57:51 compute-0 systemd-logind[792]: Removed session 14.
Dec 05 00:57:57 compute-0 sshd-session[68363]: Accepted publickey for zuul from 192.168.122.30 port 55534 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:57:57 compute-0 systemd-logind[792]: New session 15 of user zuul.
Dec 05 00:57:57 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 05 00:57:57 compute-0 sshd-session[68363]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:57:58 compute-0 sudo[68516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhdkkzjjrfsnrbtezwtehitmyysjrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896277.8315554-16-6825541534149/AnsiballZ_tempfile.py'
Dec 05 00:57:58 compute-0 sudo[68516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:58 compute-0 python3.9[68518]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 00:57:58 compute-0 sudo[68516]: pam_unix(sudo:session): session closed for user root
Dec 05 00:57:59 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 00:57:59 compute-0 sudo[68670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ealhwglmpbsnqmmnstxqooytykcgrcvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896278.8194427-28-242631346770458/AnsiballZ_stat.py'
Dec 05 00:57:59 compute-0 sudo[68670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:57:59 compute-0 python3.9[68672]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:57:59 compute-0 sudo[68670]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:00 compute-0 sudo[68822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocahcoyymyfbymopbylvygnvtogswsqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896279.8547359-38-73857175406211/AnsiballZ_setup.py'
Dec 05 00:58:00 compute-0 sudo[68822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:00 compute-0 python3.9[68824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:58:00 compute-0 sudo[68822]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:01 compute-0 sudo[68974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkyiwujzlfpgxzxgvdabpeoqaxqceytj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896281.294948-47-137018977449275/AnsiballZ_blockinfile.py'
Dec 05 00:58:01 compute-0 sudo[68974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:01 compute-0 python3.9[68976]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD34iQxSDvRWxXWiq324tvvnkHz60HCvPTP/DU7o5oImJ7L5PeQTe9tPl2QVsPDuWSCrwTEupWDG8h+dMSTlGmE2dOPB66Zq0d9sww65ZtOq0JsaxhPfTB3aJe6aQDcYq9WQ/1T/lNE0Do7wQL88mneNtNMuLZD9Irm2WwDI38II50hBLyhLkuA6ik5m8wn++kFZPdu0pcYz24ameu4wB8DSKH8UAT3GBfc11AP8MuI6xtpcOT5Dr88jHtVEYH8eW4XWrKQeyZddDcJui/f6NqC4NrPSF4YgDRQ1z6/33N2E9EycvbOgdOt9pq1jpYaWkMHl2KeaAbNoAdSuXTGDhvCzv18a5QdOMVV7965nJMnpteZZjrhzpHSFkbnMvAaoktDOMhKkfPYUY6HhVdkVM7FntS5oT76c92NL3HNHDuV7Oh57/0epCuWK6LT+2z9SlP7VUPaUa2c/nZDSTeZO/gJmuyeJ9Iu8XtE1KvGRpHt6zVpKl1uyEoc+M5SO7YG+r8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIIWlZK7FF2zVpeujHX1SXvuy5F4vd69JtXI65jfCGUb
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3QjvzM+uHT65E6nwIhM59XNE6tJ4oKmErztLJ1wZJkltdzzAyZYA6BiT1RzCPoMNPk9MeYIRcQ8NtPcaWiPtU=
                                             create=True mode=0644 path=/tmp/ansible.vcbw3jbu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:01 compute-0 sudo[68974]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:02 compute-0 sudo[69126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihhsyhssjqmjmnvioozfjafxbqhohsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896282.1537335-55-101324494144198/AnsiballZ_command.py'
Dec 05 00:58:02 compute-0 sudo[69126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:02 compute-0 python3.9[69128]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vcbw3jbu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:58:02 compute-0 sudo[69126]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:03 compute-0 sudo[69280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apdcvbqjbtjbitkwjaktpcxtbisfruvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896283.0541475-63-51601895084778/AnsiballZ_file.py'
Dec 05 00:58:03 compute-0 sudo[69280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:03 compute-0 python3.9[69282]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vcbw3jbu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:03 compute-0 sudo[69280]: pam_unix(sudo:session): session closed for user root
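[editor's note] Session 15 is a small "publish host keys" play: gather the host's public keys via the setup module, write them into a root-owned temp file with blockinfile, copy that over /etc/ssh/ssh_known_hosts, and delete the temp file. Staging through a temporary file keeps a half-written known_hosts from ever being visible; the play itself uses `cat tmp > dest`, while the sketch below tightens that last step to an atomic rename (key list hard-wired here, where the play fills it from gathered facts):

```python
# Replace /etc/ssh/ssh_known_hosts via a temp file on the same
# filesystem, finishing with an atomic rename.
import os
import tempfile

def publish_known_hosts(entries, dest="/etc/ssh/ssh_known_hosts"):
    body = "\n".join(entries) + "\n"
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest), prefix="ansible.")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(body)
        os.chmod(tmp, 0o644)
        os.replace(tmp, dest)  # atomic on the same filesystem
    except BaseException:
        os.unlink(tmp)
        raise

# entries are lines like:
# "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA..."
```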
Dec 05 00:58:04 compute-0 sshd-session[68366]: Connection closed by 192.168.122.30 port 55534
Dec 05 00:58:04 compute-0 sshd-session[68363]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:58:04 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 05 00:58:04 compute-0 systemd[1]: session-15.scope: Consumed 4.096s CPU time.
Dec 05 00:58:04 compute-0 systemd-logind[792]: Session 15 logged out. Waiting for processes to exit.
Dec 05 00:58:04 compute-0 systemd-logind[792]: Removed session 15.
Dec 05 00:58:10 compute-0 sshd-session[69307]: Accepted publickey for zuul from 192.168.122.30 port 47356 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:58:10 compute-0 systemd-logind[792]: New session 16 of user zuul.
Dec 05 00:58:10 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 05 00:58:10 compute-0 sshd-session[69307]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:58:11 compute-0 python3.9[69460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:58:13 compute-0 sudo[69614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmzepbivmnrwcyxgtdlllqcieyuppyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896292.471374-32-33119503627169/AnsiballZ_systemd.py'
Dec 05 00:58:13 compute-0 sudo[69614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:13 compute-0 python3.9[69616]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 00:58:13 compute-0 sudo[69614]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:14 compute-0 sudo[69768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuikttoswpdeisfewstimzvdlhlezvra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896293.86214-40-4228579459281/AnsiballZ_systemd.py'
Dec 05 00:58:14 compute-0 sudo[69768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:14 compute-0 python3.9[69770]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 00:58:15 compute-0 sudo[69768]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:16 compute-0 sudo[69921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grourtquhzlkqmahgdljviihxndfaudv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896295.8511243-49-111969302404064/AnsiballZ_command.py'
Dec 05 00:58:16 compute-0 sudo[69921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:16 compute-0 python3.9[69923]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:58:16 compute-0 sudo[69921]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:17 compute-0 sudo[70074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjexfneuqlmzurmpqockokruiwppegbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896296.7749813-57-163502586654468/AnsiballZ_stat.py'
Dec 05 00:58:17 compute-0 sudo[70074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:17 compute-0 python3.9[70076]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:58:17 compute-0 sudo[70074]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:17 compute-0 sudo[70228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maouhktdozmfxvxpdqlcqxaiudewdjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896297.6258445-65-86401750060346/AnsiballZ_command.py'
Dec 05 00:58:17 compute-0 sudo[70228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:18 compute-0 python3.9[70230]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:58:18 compute-0 sudo[70228]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:18 compute-0 sudo[70383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeviptaelamyctabevmwyqsgappngrga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896298.4158354-73-103496569003501/AnsiballZ_file.py'
Dec 05 00:58:18 compute-0 sudo[70383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:19 compute-0 python3.9[70385]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:19 compute-0 sudo[70383]: pam_unix(sudo:session): session closed for user root
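[editor's note] Session 16 re-applies the base chains, then uses /etc/nftables/edpm-rules.nft.changed as a dirty flag: the flushes/rules/update-jumps bundle is loaded with `nft -f -` only because the marker file (touched back in session 14 after the rules were rewritten) exists, and the marker is removed afterwards so an unchanged rerun skips the reload. The same marker pattern in a few lines (file names from the log; `apply_if_changed` is my name):

```python
# "Apply only if the rules actually changed" via a marker file that
# the write step touches and the apply step consumes.
import os
import subprocess

MARKER = "/etc/nftables/edpm-rules.nft.changed"
APPLY_FILES = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

def apply_if_changed():
    if not os.path.exists(MARKER):
        return False
    blob = "\n".join(open(p).read() for p in APPLY_FILES)
    subprocess.run(["nft", "-f", "-"], input=blob, text=True, check=True)
    os.unlink(MARKER)  # consume the flag so the next run is a no-op
    return True
```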
Dec 05 00:58:19 compute-0 sshd-session[69310]: Connection closed by 192.168.122.30 port 47356
Dec 05 00:58:19 compute-0 sshd-session[69307]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:58:19 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 05 00:58:19 compute-0 systemd[1]: session-16.scope: Consumed 5.334s CPU time.
Dec 05 00:58:19 compute-0 systemd-logind[792]: Session 16 logged out. Waiting for processes to exit.
Dec 05 00:58:19 compute-0 systemd-logind[792]: Removed session 16.
Dec 05 00:58:24 compute-0 sshd-session[70411]: Accepted publickey for zuul from 192.168.122.30 port 37070 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:58:24 compute-0 systemd-logind[792]: New session 17 of user zuul.
Dec 05 00:58:24 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 05 00:58:24 compute-0 sshd-session[70411]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:58:25 compute-0 python3.9[70564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:58:26 compute-0 sudo[70718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hccvfomeagbwukivrrlxekfnfnqeosmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896306.3656511-34-163224778545628/AnsiballZ_setup.py'
Dec 05 00:58:26 compute-0 sudo[70718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:26 compute-0 python3.9[70720]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:58:27 compute-0 sudo[70718]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:27 compute-0 sudo[70802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krcoihpntkmnsggoapiejxdnoeinsycd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896306.3656511-34-163224778545628/AnsiballZ_dnf.py'
Dec 05 00:58:27 compute-0 sudo[70802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:27 compute-0 python3.9[70804]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 00:58:29 compute-0 sudo[70802]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:29 compute-0 python3.9[70955]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:58:31 compute-0 python3.9[71106]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
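[editor's note] Session 17 installs yum-utils, then runs `needs-restarting -r`, which exits 0 when no reboot is needed and 1 when core packages (kernel, glibc, systemd, ...) were updated since boot; the find over /var/lib/openstack/reboot_required/ layers a deployment-specific flag directory on top. Combined, roughly (the `reboot_required` helper and the OR of the two signals are my inference from the task order):

```python
# Reboot-needed test combining needs-restarting's exit code with the
# EDPM flag directory scanned by the find task above.
import pathlib
import subprocess

def reboot_required(flag_dir="/var/lib/openstack/reboot_required/"):
    # needs-restarting -r exits 0 (no reboot) or 1 (reboot needed)
    rc = subprocess.run(["needs-restarting", "-r"],
                        capture_output=True).returncode
    d = pathlib.Path(flag_dir)
    flagged = d.is_dir() and any(p.is_file() for p in d.iterdir())
    return rc == 1 or flagged
```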
Dec 05 00:58:32 compute-0 python3.9[71256]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:58:32 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 00:58:32 compute-0 python3.9[71407]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:58:33 compute-0 sshd-session[70414]: Connection closed by 192.168.122.30 port 37070
Dec 05 00:58:33 compute-0 sshd-session[70411]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:58:33 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 05 00:58:33 compute-0 systemd[1]: session-17.scope: Consumed 5.684s CPU time.
Dec 05 00:58:33 compute-0 systemd-logind[792]: Session 17 logged out. Waiting for processes to exit.
Dec 05 00:58:33 compute-0 systemd-logind[792]: Removed session 17.
Dec 05 00:58:39 compute-0 sshd-session[71432]: Accepted publickey for zuul from 192.168.122.30 port 57240 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:58:39 compute-0 systemd-logind[792]: New session 18 of user zuul.
Dec 05 00:58:39 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 05 00:58:39 compute-0 sshd-session[71432]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:58:40 compute-0 python3.9[71585]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:58:42 compute-0 sudo[71739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehwllbqpzlhigsjuuqibexyisilfnkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896321.5821967-50-19876747856645/AnsiballZ_file.py'
Dec 05 00:58:42 compute-0 sudo[71739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:42 compute-0 python3.9[71741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:42 compute-0 sudo[71739]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:42 compute-0 sudo[71891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvbaefavrquztaclzwckcanahbrjjfsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896322.4206688-50-203805679662458/AnsiballZ_file.py'
Dec 05 00:58:42 compute-0 sudo[71891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:42 compute-0 python3.9[71893]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:42 compute-0 sudo[71891]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:43 compute-0 sudo[72043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltwzkghofvdpwpcinsjtemgawhrnmvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896323.2288954-65-150867799861915/AnsiballZ_stat.py'
Dec 05 00:58:43 compute-0 sudo[72043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:43 compute-0 python3.9[72045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:44 compute-0 sudo[72043]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:44 compute-0 sudo[72166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwkutueznmkqgxcepesizzttnlhyqsjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896323.2288954-65-150867799861915/AnsiballZ_copy.py'
Dec 05 00:58:44 compute-0 sudo[72166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:44 compute-0 python3.9[72168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896323.2288954-65-150867799861915/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5896f62e469b0f9145221f5d7571d3434f8e5542 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:44 compute-0 sudo[72166]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:45 compute-0 sudo[72318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hypnfgnqdjprrcsasofrmieffgbtscxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896325.1298184-65-134873501176028/AnsiballZ_stat.py'
Dec 05 00:58:45 compute-0 sudo[72318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:45 compute-0 python3.9[72320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:45 compute-0 sudo[72318]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:46 compute-0 sudo[72441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnqadxfibdujvyiezlarqxemlqjilahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896325.1298184-65-134873501176028/AnsiballZ_copy.py'
Dec 05 00:58:46 compute-0 sudo[72441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:46 compute-0 python3.9[72443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896325.1298184-65-134873501176028/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=09fecf97d4f61e8dfa5e5b79c9358a4c1891f28a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:46 compute-0 sudo[72441]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:47 compute-0 sudo[72593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euhikocaaxewxkdyddgwafpakduvvjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896326.638606-65-203465723433256/AnsiballZ_stat.py'
Dec 05 00:58:47 compute-0 sudo[72593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:47 compute-0 python3.9[72595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:47 compute-0 sudo[72593]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:47 compute-0 sudo[72716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncufrjagtmplysfsqxurikbamxzuyavu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896326.638606-65-203465723433256/AnsiballZ_copy.py'
Dec 05 00:58:47 compute-0 sudo[72716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:47 compute-0 chronyd[58652]: Selected source 208.81.1.244 (pool.ntp.org)
Dec 05 00:58:47 compute-0 python3.9[72718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896326.638606-65-203465723433256/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cb6dc863a7e49862862f192524608a2149e74923 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:47 compute-0 sudo[72716]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:48 compute-0 sudo[72868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhqavcrgbscqmafgylnccrgcumzelmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896328.0660028-109-235034272083962/AnsiballZ_file.py'
Dec 05 00:58:48 compute-0 sudo[72868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:48 compute-0 python3.9[72870]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:48 compute-0 sudo[72868]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:48 compute-0 sudo[73020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygtesmyasfjaokgexxrnhkczpyvtmqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896328.6856992-109-116707162439723/AnsiballZ_file.py'
Dec 05 00:58:48 compute-0 sudo[73020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:49 compute-0 python3.9[73022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:49 compute-0 sudo[73020]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:49 compute-0 sudo[73172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhdugtuzujyhwqoakefusfidmafoqyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896329.44266-124-242832380669440/AnsiballZ_stat.py'
Dec 05 00:58:49 compute-0 sudo[73172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:50 compute-0 python3.9[73174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:50 compute-0 sudo[73172]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:50 compute-0 sudo[73295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsppnvrccemdazzbuzjwvjllqlwgscpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896329.44266-124-242832380669440/AnsiballZ_copy.py'
Dec 05 00:58:50 compute-0 sudo[73295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:50 compute-0 python3.9[73297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896329.44266-124-242832380669440/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=795c456286f8d76351a77ecc4e3ba99a628d7436 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:50 compute-0 sudo[73295]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:51 compute-0 sudo[73447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrozhkszqsrutozvtovbobusgsvqdfbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896330.913201-124-16041081002485/AnsiballZ_stat.py'
Dec 05 00:58:51 compute-0 sudo[73447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:51 compute-0 python3.9[73449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:51 compute-0 sudo[73447]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:51 compute-0 sudo[73570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmnqkgrxgrrzbpoxjmttdzgqbouxwzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896330.913201-124-16041081002485/AnsiballZ_copy.py'
Dec 05 00:58:51 compute-0 sudo[73570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:52 compute-0 python3.9[73572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896330.913201-124-16041081002485/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=09fecf97d4f61e8dfa5e5b79c9358a4c1891f28a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:52 compute-0 sudo[73570]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:52 compute-0 sudo[73722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpklolyjmeidvadoxtubcyaabrdpgrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896332.3305962-124-168861587018567/AnsiballZ_stat.py'
Dec 05 00:58:52 compute-0 sudo[73722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:52 compute-0 python3.9[73724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:52 compute-0 sudo[73722]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:53 compute-0 sudo[73845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mniukasdckwpbiaehteartggkhvcgtnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896332.3305962-124-168861587018567/AnsiballZ_copy.py'
Dec 05 00:58:53 compute-0 sudo[73845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:53 compute-0 python3.9[73847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896332.3305962-124-168861587018567/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2131ad4d8dcdcc5b81ddd1452ca930972dc6654b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:53 compute-0 sudo[73845]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:54 compute-0 sudo[73997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdnurtevbanwcupvlolwblcgiyudpzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896333.6934662-168-278828814764015/AnsiballZ_file.py'
Dec 05 00:58:54 compute-0 sudo[73997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:54 compute-0 python3.9[73999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:54 compute-0 sudo[73997]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:54 compute-0 sudo[74149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgsjlshmlklrgjkooaalwjhzdexncbgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896334.4431221-168-88156664978361/AnsiballZ_file.py'
Dec 05 00:58:54 compute-0 sudo[74149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:55 compute-0 python3.9[74151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:58:55 compute-0 sudo[74149]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:55 compute-0 sudo[74301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqwwrrcxytpndvsdshazakxrvttzvcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896335.214728-183-121345435620953/AnsiballZ_stat.py'
Dec 05 00:58:55 compute-0 sudo[74301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:55 compute-0 python3.9[74303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:55 compute-0 sudo[74301]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:56 compute-0 sudo[74424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuqmvsohotppvaikrsjjevugqdpkrcsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896335.214728-183-121345435620953/AnsiballZ_copy.py'
Dec 05 00:58:56 compute-0 sudo[74424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:56 compute-0 python3.9[74426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896335.214728-183-121345435620953/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c2c880d24a89434c0556e43578ba5e67c355e46d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:56 compute-0 sudo[74424]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:57 compute-0 sudo[74576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkelxjafyykuewmdierztxzbsmcfkzmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896336.6654785-183-218503221300623/AnsiballZ_stat.py'
Dec 05 00:58:57 compute-0 sudo[74576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:57 compute-0 python3.9[74578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:57 compute-0 sudo[74576]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:57 compute-0 sudo[74699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvxkhuqqwljopdcifmlixtqnjtqhzta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896336.6654785-183-218503221300623/AnsiballZ_copy.py'
Dec 05 00:58:57 compute-0 sudo[74699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:57 compute-0 python3.9[74701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896336.6654785-183-218503221300623/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c627cf96372f156350cf2665722ecb932c797bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:57 compute-0 sudo[74699]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:58 compute-0 sudo[74851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukbcabihcazdbdjpwmsaqonoioisqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896338.0841691-183-94728250115389/AnsiballZ_stat.py'
Dec 05 00:58:58 compute-0 sudo[74851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:58 compute-0 python3.9[74853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:58:58 compute-0 sudo[74851]: pam_unix(sudo:session): session closed for user root
Dec 05 00:58:59 compute-0 sudo[74974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgosjbpnlwswwczantbwzkpfoxzueewi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896338.0841691-183-94728250115389/AnsiballZ_copy.py'
Dec 05 00:58:59 compute-0 sudo[74974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:58:59 compute-0 python3.9[74976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896338.0841691-183-94728250115389/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7a49c6b417aa4025a96f5ee1c9d0c2fef03bae53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:58:59 compute-0 sudo[74974]: pam_unix(sudo:session): session closed for user root
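[editor's note] Session 18's certificate tasks repeat one pattern per service (telemetry, telemetry-power-monitoring, libvirt, and ovn below): make /var/lib/openstack/certs/<service>/default a root-owned 0755 directory labelled container_file_t so containers can bind-mount it, then install tls.crt, ca.crt, and tls.key as root-only 0600 files. The per-bundle loop, reduced to a sketch (`install_cert_bundle`, the source-directory layout, and the chcon call standing in for the module's setype handling are assumptions):

```python
# Stage a TLS bundle the way the per-service tasks above do:
# 0755 directory, 0600 root-owned tls.crt / ca.crt / tls.key inside.
import os
import shutil
import subprocess

def install_cert_bundle(service, src_dir, base="/var/lib/openstack/certs"):
    dest = os.path.join(base, service, "default")
    os.makedirs(dest, mode=0o755, exist_ok=True)
    # SELinux label so the directory can be bind-mounted into containers
    subprocess.run(["chcon", "-t", "container_file_t", dest], check=True)
    for name in ("tls.crt", "ca.crt", "tls.key"):
        target = os.path.join(dest, name)
        shutil.copyfile(os.path.join(src_dir, name), target)
        os.chmod(target, 0o600)
        os.chown(target, 0, 0)  # root:root

# e.g. install_cert_bundle("libvirt", "/home/zuul/certs/libvirt")  # paths assumed
```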
Dec 05 00:59:00 compute-0 sudo[75126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roucyvkiftmeojeeoghcknwqrxkpvuqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896339.6941943-227-272813485377219/AnsiballZ_file.py'
Dec 05 00:59:00 compute-0 sudo[75126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:00 compute-0 python3.9[75128]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:00 compute-0 sudo[75126]: pam_unix(sudo:session): session closed for user root
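[editor's note] The directory task above maps onto ansible.builtin.file; setype=container_file_t labels the path so containerized services (here the OVN containers) can read it under SELinux. A sketch built from the logged parameters:

    - name: Create OVN certificate directory
      ansible.builtin.file:
        path: /var/lib/openstack/certs/ovn/default
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      become: true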
Dec 05 00:59:00 compute-0 sudo[75278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdammfczcjbtysaaiideofbnbekzjhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896340.3616025-227-170715051945050/AnsiballZ_file.py'
Dec 05 00:59:00 compute-0 sudo[75278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:00 compute-0 python3.9[75280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:00 compute-0 sudo[75278]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:01 compute-0 sudo[75430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curypsqybbaefgeazvhjngfmgtaulyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896341.1451666-242-277421999344596/AnsiballZ_stat.py'
Dec 05 00:59:01 compute-0 sudo[75430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:01 compute-0 python3.9[75432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:01 compute-0 sudo[75430]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:02 compute-0 sudo[75553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugtpawvacuhpykmcnwaujwswktgzepxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896341.1451666-242-277421999344596/AnsiballZ_copy.py'
Dec 05 00:59:02 compute-0 sudo[75553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:02 compute-0 python3.9[75555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896341.1451666-242-277421999344596/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fbc71f23ed09b9bcd3e04e386ee5074731d93f0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:02 compute-0 sudo[75553]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:02 compute-0 sudo[75705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtfydahjcptajdfbbrkamqgjeyyukey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896342.417423-242-57432467684362/AnsiballZ_stat.py'
Dec 05 00:59:02 compute-0 sudo[75705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:02 compute-0 python3.9[75707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:02 compute-0 sudo[75705]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:03 compute-0 sudo[75828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaakivkqnrouerxxyxfvmdnddrxekusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896342.417423-242-57432467684362/AnsiballZ_copy.py'
Dec 05 00:59:03 compute-0 sudo[75828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:03 compute-0 python3.9[75830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896342.417423-242-57432467684362/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=464076ef88dcc89aa3cbba91e13b4b726d71f651 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:03 compute-0 sudo[75828]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:04 compute-0 sudo[75980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqtowuxozrixcydxngkrwgtxanzaxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896343.8123755-242-64294443990939/AnsiballZ_stat.py'
Dec 05 00:59:04 compute-0 sudo[75980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:04 compute-0 python3.9[75982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:04 compute-0 sudo[75980]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:04 compute-0 sudo[76103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfoiuynfoejculdtqxigwhrasqhtador ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896343.8123755-242-64294443990939/AnsiballZ_copy.py'
Dec 05 00:59:04 compute-0 sudo[76103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:04 compute-0 python3.9[76105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896343.8123755-242-64294443990939/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=742b783dfdbfc50744d200a72e6bc0fd02d3a60e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:04 compute-0 sudo[76103]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:05 compute-0 sudo[76255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbneyxlnntaepgjkuqclpounemdhdowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896345.6754844-302-74131350855792/AnsiballZ_file.py'
Dec 05 00:59:05 compute-0 sudo[76255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:06 compute-0 python3.9[76257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:06 compute-0 sudo[76255]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:06 compute-0 sudo[76407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dimjwgiilwgvzvzvwkuxcarjhbiqtwkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896346.3137732-310-15912402350171/AnsiballZ_stat.py'
Dec 05 00:59:06 compute-0 sudo[76407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:06 compute-0 python3.9[76409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:06 compute-0 sudo[76407]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:07 compute-0 sudo[76530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhlghwcmniygnitlfpegrxvnsvhyjgph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896346.3137732-310-15912402350171/AnsiballZ_copy.py'
Dec 05 00:59:07 compute-0 sudo[76530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:07 compute-0 python3.9[76532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896346.3137732-310-15912402350171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:07 compute-0 sudo[76530]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:08 compute-0 sudo[76682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzzkouqmmobjkhysxdeqpelmzyuuqphn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896347.7742207-326-185730459964953/AnsiballZ_file.py'
Dec 05 00:59:08 compute-0 sudo[76682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:08 compute-0 python3.9[76684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:08 compute-0 sudo[76682]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:08 compute-0 sudo[76834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddajhgfucvesmmcbxhfiiwkkflqjctfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896348.5102446-334-182765789737311/AnsiballZ_stat.py'
Dec 05 00:59:08 compute-0 sudo[76834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:09 compute-0 python3.9[76836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:09 compute-0 sudo[76834]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:09 compute-0 sudo[76957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsabqkodunvinvyujlniblgchoiumvob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896348.5102446-334-182765789737311/AnsiballZ_copy.py'
Dec 05 00:59:09 compute-0 sudo[76957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:09 compute-0 python3.9[76959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896348.5102446-334-182765789737311/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:09 compute-0 sudo[76957]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:10 compute-0 sudo[77109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefqnclzamparpemzlddvjssrxosjuad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896349.9382958-350-149844021686777/AnsiballZ_file.py'
Dec 05 00:59:10 compute-0 sudo[77109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:10 compute-0 python3.9[77111]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:10 compute-0 sudo[77109]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:10 compute-0 sudo[77261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklogqpopnllolmsrobdoewfjmebsgli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896350.641771-358-19177513981717/AnsiballZ_stat.py'
Dec 05 00:59:10 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:11 compute-0 python3.9[77263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:11 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:11 compute-0 sudo[77384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrhcwmckmmosjwhounbvoetjhvyzzhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896350.641771-358-19177513981717/AnsiballZ_copy.py'
Dec 05 00:59:11 compute-0 sudo[77384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:11 compute-0 python3.9[77386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896350.641771-358-19177513981717/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:11 compute-0 sudo[77384]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:12 compute-0 sudo[77536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmwttaqkzuiuwviwzbjdotzqbebcnanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896352.1489701-374-41857440467522/AnsiballZ_file.py'
Dec 05 00:59:12 compute-0 sudo[77536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:12 compute-0 python3.9[77538]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:12 compute-0 sudo[77536]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:13 compute-0 sudo[77688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqzgozvxmlyvyiziwvzovndtfaduprqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896352.942393-382-11635490066362/AnsiballZ_stat.py'
Dec 05 00:59:13 compute-0 sudo[77688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:13 compute-0 python3.9[77690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:13 compute-0 sudo[77688]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:13 compute-0 sudo[77811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixqxusetyhfrziuoywypenzgfqswvdpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896352.942393-382-11635490066362/AnsiballZ_copy.py'
Dec 05 00:59:13 compute-0 sudo[77811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:14 compute-0 python3.9[77813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896352.942393-382-11635490066362/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:14 compute-0 sudo[77811]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:14 compute-0 sudo[77963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geaiakpurjyalrgaiciwwrtzacovcvnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896354.3855453-398-76257667645434/AnsiballZ_file.py'
Dec 05 00:59:14 compute-0 sudo[77963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:14 compute-0 python3.9[77965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:14 compute-0 sudo[77963]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:15 compute-0 sudo[78115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnzkygydyoxuytcomaxvpzdqtbjirrsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896355.2154088-406-209146982290171/AnsiballZ_stat.py'
Dec 05 00:59:15 compute-0 sudo[78115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:15 compute-0 python3.9[78117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:15 compute-0 sudo[78115]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:16 compute-0 sudo[78238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uddcloawkvvnkvolgziwvafnjfgifhqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896355.2154088-406-209146982290171/AnsiballZ_copy.py'
Dec 05 00:59:16 compute-0 sudo[78238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:16 compute-0 python3.9[78240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896355.2154088-406-209146982290171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:16 compute-0 sudo[78238]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:17 compute-0 sudo[78390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugvpynojlkfsiblknlfgrlnqycripnom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896356.6230826-422-280176215258913/AnsiballZ_file.py'
Dec 05 00:59:17 compute-0 sudo[78390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:17 compute-0 python3.9[78392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:17 compute-0 sudo[78390]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:17 compute-0 sudo[78542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwsqspjiwkpjavfyubqegcppjszossg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896357.473478-430-25757901754522/AnsiballZ_stat.py'
Dec 05 00:59:17 compute-0 sudo[78542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:17 compute-0 python3.9[78544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:17 compute-0 sudo[78542]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:18 compute-0 sudo[78665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssqyahokhgafumhnqsapitsftzyhcky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896357.473478-430-25757901754522/AnsiballZ_copy.py'
Dec 05 00:59:18 compute-0 sudo[78665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:18 compute-0 python3.9[78667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896357.473478-430-25757901754522/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:18 compute-0 sudo[78665]: pam_unix(sudo:session): session closed for user root
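[editor's note] The runs from 00:59:05 through 00:59:18 copy the same CA bundle (identical SHA1 aad3215d…) into one cacerts directory per service. A compact reconstruction as a single looped task; the service list is inferred from the logged paths and the loop itself is an assumption (the log shows separate invocations):

    # Same bundle fanned out to per-service trust directories.
    - name: Install CA bundle for each service
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: "0644"
      loop:
        - ovn
        - telemetry
        - repo-setup
        - libvirt
        - bootstrap
        - telemetry-power-monitoring
      become: true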
Dec 05 00:59:19 compute-0 sshd-session[71435]: Connection closed by 192.168.122.30 port 57240
Dec 05 00:59:19 compute-0 sshd-session[71432]: pam_unix(sshd:session): session closed for user zuul
Dec 05 00:59:19 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 05 00:59:19 compute-0 systemd[1]: session-18.scope: Consumed 30.293s CPU time.
Dec 05 00:59:19 compute-0 systemd-logind[792]: Session 18 logged out. Waiting for processes to exit.
Dec 05 00:59:19 compute-0 systemd-logind[792]: Removed session 18.
Dec 05 00:59:25 compute-0 sshd-session[78692]: Accepted publickey for zuul from 192.168.122.30 port 37580 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 00:59:25 compute-0 systemd-logind[792]: New session 19 of user zuul.
Dec 05 00:59:25 compute-0 systemd[1]: Started Session 19 of User zuul.
Dec 05 00:59:25 compute-0 sshd-session[78692]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 00:59:26 compute-0 python3.9[78845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:59:27 compute-0 sudo[78999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xngqghdqylioznpemgksgtjtollpjzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896367.1271117-34-57667312838886/AnsiballZ_file.py'
Dec 05 00:59:27 compute-0 sudo[78999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:27 compute-0 python3.9[79001]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:27 compute-0 sudo[78999]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:28 compute-0 sudo[79151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwbxatykqybupksdeolhbdntlnrxlet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896368.099218-34-94603555573217/AnsiballZ_file.py'
Dec 05 00:59:28 compute-0 sudo[79151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:28 compute-0 python3.9[79153]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 00:59:28 compute-0 sudo[79151]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:29 compute-0 python3.9[79303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:59:30 compute-0 sudo[79453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gedvqyexyiwkuvdmiqipdedzivmuubzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896369.7885716-57-145285363079201/AnsiballZ_seboolean.py'
Dec 05 00:59:30 compute-0 sudo[79453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:30 compute-0 python3.9[79455]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 00:59:31 compute-0 sudo[79453]: pam_unix(sudo:session): session closed for user root
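[editor's note] The seboolean call above persists an SELinux boolean so sandboxed virtualization processes may use netlink sockets; the policy reload it triggers is visible in the avc load_policy message at 00:59:32. Equivalent task, taken directly from the logged arguments:

    - name: Allow netlink sockets in virt sandboxes
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true
      become: true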
Dec 05 00:59:32 compute-0 sudo[79609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahindopwudlefqkryxmjjdpknmyxeebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896371.995957-67-121805032596975/AnsiballZ_setup.py'
Dec 05 00:59:32 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 05 00:59:32 compute-0 sudo[79609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:32 compute-0 python3.9[79611]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 00:59:32 compute-0 sudo[79609]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:33 compute-0 sudo[79693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjmdhvfebrsxibxptbrzugpgvkpxjhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896371.995957-67-121805032596975/AnsiballZ_dnf.py'
Dec 05 00:59:33 compute-0 sudo[79693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:33 compute-0 python3.9[79695]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 00:59:34 compute-0 sudo[79693]: pam_unix(sudo:session): session closed for user root
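[editor's note] Package installation above is a plain dnf task; note that Ansible first gathered ansible_pkg_mgr (the setup call at 00:59:32) to select the backend. Reconstructed from the logged arguments:

    - name: Install Open vSwitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present
      become: true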
Dec 05 00:59:35 compute-0 sudo[79846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yucokqubexupeldgaeyijjdrznajnhik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896374.979158-79-97867769396631/AnsiballZ_systemd.py'
Dec 05 00:59:35 compute-0 sudo[79846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:35 compute-0 python3.9[79848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 00:59:36 compute-0 sudo[79846]: pam_unix(sudo:session): session closed for user root
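[editor's note] The systemd task then both enables and starts the service, so the daemon survives reboots and is running before the OVN configuration later in the run. As logged:

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        state: started
        enabled: true
      become: true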
Dec 05 00:59:36 compute-0 sudo[80001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqgsschgiwuhhzklcssxviyqqfxxzla ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896376.2018678-87-83603596129070/AnsiballZ_edpm_nftables_snippet.py'
Dec 05 00:59:36 compute-0 sudo[80001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:36 compute-0 python3[80003]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 05 00:59:36 compute-0 sudo[80001]: pam_unix(sudo:session): session closed for user root
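[editor's note] The osp.edpm.edpm_nftables_snippet call above stages the logged YAML rules into /var/lib/edpm-config/firewall/ovn.yaml, where a later task compiles them into nft files. A sketch with the content abbreviated to the first logged rule (the geneve and NOTRACK rules follow the same shape):

    - name: Stage OVN firewall snippet
      osp.edpm.edpm_nftables_snippet:
        dest: /var/lib/edpm-config/firewall/ovn.yaml
        state: present
        content: |
          - rule_name: 118 neutron vxlan networks
            rule:
              proto: udp
              dport: 4789
      become: true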
Dec 05 00:59:37 compute-0 sudo[80153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmqpmvmbiillajlvuhfngkhikbtkgpgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896377.1705465-96-35013683902049/AnsiballZ_file.py'
Dec 05 00:59:37 compute-0 sudo[80153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:37 compute-0 python3.9[80155]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:37 compute-0 sudo[80153]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:38 compute-0 sudo[80305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvylkkhrecvwzkhlesmamguulcjysxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896377.816568-104-29469204199666/AnsiballZ_stat.py'
Dec 05 00:59:38 compute-0 sudo[80305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:38 compute-0 python3.9[80307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:38 compute-0 sudo[80305]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:38 compute-0 sudo[80383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfswynyrwsfebvidsnsjcrqloatjflap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896377.816568-104-29469204199666/AnsiballZ_file.py'
Dec 05 00:59:38 compute-0 sudo[80383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:39 compute-0 python3.9[80385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:39 compute-0 sudo[80383]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:39 compute-0 sudo[80535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwlsmwmldqaogxtufcpyrdwekewspawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896379.256599-116-74627169900154/AnsiballZ_stat.py'
Dec 05 00:59:39 compute-0 sudo[80535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:39 compute-0 python3.9[80537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:39 compute-0 sudo[80535]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:40 compute-0 sudo[80613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfrvllegfhvqftiohvcsdiquawaxbojg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896379.256599-116-74627169900154/AnsiballZ_file.py'
Dec 05 00:59:40 compute-0 sudo[80613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:40 compute-0 python3.9[80615]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2vbjb2vj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:40 compute-0 sudo[80613]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:40 compute-0 sudo[80765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkygzceopaotcfpcbtkiqhmiutbcwzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896380.6039796-128-207552788561021/AnsiballZ_stat.py'
Dec 05 00:59:40 compute-0 sudo[80765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:41 compute-0 python3.9[80767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:41 compute-0 sudo[80765]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:41 compute-0 sudo[80843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkmgvftofgjpxejvsuohawdydgieihnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896380.6039796-128-207552788561021/AnsiballZ_file.py'
Dec 05 00:59:41 compute-0 sudo[80843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:41 compute-0 python3.9[80845]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:41 compute-0 sudo[80843]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:42 compute-0 sudo[80995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvvezagyqcxrwpomqvjhpyeeoichkwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896381.8127472-141-130407735976583/AnsiballZ_command.py'
Dec 05 00:59:42 compute-0 sudo[80995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:42 compute-0 python3.9[80997]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:42 compute-0 sudo[80995]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:43 compute-0 sudo[81148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muytuohinnmfnvahowgelsujxozyests ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896382.7627814-149-177962220395589/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 00:59:43 compute-0 sudo[81148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:43 compute-0 python3[81150]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 00:59:43 compute-0 sudo[81148]: pam_unix(sudo:session): session closed for user root
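[editor's note] edpm_nftables_from_files then reads every snippet staged under the firewall directory and renders the edpm-*.nft files written in the tasks that follow. Invocation as logged; the osp.edpm collection prefix is assumed from the earlier snippet call:

    - name: Render nftables rules from staged snippets
      osp.edpm.edpm_nftables_from_files:
        src: /var/lib/edpm-config/firewall
      become: true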
Dec 05 00:59:44 compute-0 sudo[81300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilhogokljjgextundjgccemlnfrotkgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896383.7084906-157-257366225876435/AnsiballZ_stat.py'
Dec 05 00:59:44 compute-0 sudo[81300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:44 compute-0 python3.9[81302]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:44 compute-0 sudo[81300]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:44 compute-0 sudo[81425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmzoyyrpeajdukrqjopcuhbffojuwgde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896383.7084906-157-257366225876435/AnsiballZ_copy.py'
Dec 05 00:59:44 compute-0 sudo[81425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:45 compute-0 python3.9[81427]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896383.7084906-157-257366225876435/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:45 compute-0 sudo[81425]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:45 compute-0 sudo[81577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbsqdbghxiasmlrcttrpxxyrexychtsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896385.3914971-172-233421426334572/AnsiballZ_stat.py'
Dec 05 00:59:45 compute-0 sudo[81577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:46 compute-0 python3.9[81579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:46 compute-0 sudo[81577]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:46 compute-0 sudo[81702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygjssfoiknrbmnqrdxgssaveeqyevca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896385.3914971-172-233421426334572/AnsiballZ_copy.py'
Dec 05 00:59:46 compute-0 sudo[81702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:46 compute-0 python3.9[81704]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896385.3914971-172-233421426334572/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:46 compute-0 sudo[81702]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:47 compute-0 sudo[81854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyshcwpuqelvmmmkpiclvqihysstbypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896386.9870028-187-87464829771111/AnsiballZ_stat.py'
Dec 05 00:59:47 compute-0 sudo[81854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:47 compute-0 python3.9[81856]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:47 compute-0 sudo[81854]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:48 compute-0 sudo[81979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpymrrcpjwniklbgpfhseppqvrcbitfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896386.9870028-187-87464829771111/AnsiballZ_copy.py'
Dec 05 00:59:48 compute-0 sudo[81979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:48 compute-0 python3.9[81981]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896386.9870028-187-87464829771111/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:48 compute-0 sudo[81979]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:48 compute-0 sudo[82131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfilhlieclapvfbezyvhfhqrbaqbjzyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896388.5961626-202-254681956390108/AnsiballZ_stat.py'
Dec 05 00:59:48 compute-0 sudo[82131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:49 compute-0 python3.9[82133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:49 compute-0 sudo[82131]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:49 compute-0 sudo[82256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vykqymqefuatehaihrrorlfevnoikkjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896388.5961626-202-254681956390108/AnsiballZ_copy.py'
Dec 05 00:59:49 compute-0 sudo[82256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:49 compute-0 python3.9[82258]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896388.5961626-202-254681956390108/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:49 compute-0 sudo[82256]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:50 compute-0 sudo[82408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwraacafpyypnghnelqnnunjvywrzvlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896390.038504-217-25039120914979/AnsiballZ_stat.py'
Dec 05 00:59:50 compute-0 sudo[82408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:50 compute-0 python3.9[82410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 00:59:50 compute-0 sudo[82408]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:51 compute-0 sudo[82533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clejgdqeyyhhzuynedgotdnjialjbclg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896390.038504-217-25039120914979/AnsiballZ_copy.py'
Dec 05 00:59:51 compute-0 sudo[82533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:51 compute-0 python3.9[82535]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896390.038504-217-25039120914979/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:51 compute-0 sudo[82533]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:51 compute-0 sudo[82685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lknxtyaxqdkibkwqydmxapajolxdppzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896391.6342204-232-55154194042696/AnsiballZ_file.py'
Dec 05 00:59:51 compute-0 sudo[82685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:52 compute-0 python3.9[82687]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:52 compute-0 sudo[82685]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:52 compute-0 sudo[82837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awpazwdskbqvfvhgxgknjhguxcrpuvwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896392.336916-240-9450342719397/AnsiballZ_command.py'
Dec 05 00:59:52 compute-0 sudo[82837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:52 compute-0 python3.9[82839]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:52 compute-0 sudo[82837]: pam_unix(sudo:session): session closed for user root
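[editor's note] The command at 00:59:52 concatenates the five generated files in load order (chains, flushes, rules, update-jumps, jumps) and runs them through nft -c, a check-only parse, so a broken ruleset fails the play before anything is applied. As a task, using the shell module since the log shows _uses_shell=True:

    - name: Validate generated nftables files without applying them
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft \
            /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -
      become: true

The edpm-rules.nft.changed file touched at 00:59:52 and deleted at 00:59:56 brackets the apply step: the stat at 00:59:55 checks for the marker, the flush-and-reload at 00:59:56 runs while it exists, giving handler-like run-once behavior.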
Dec 05 00:59:53 compute-0 sudo[82992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrzhdcqouzfjymariawzxnwdzrxvwwyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896393.0611455-248-255576156873354/AnsiballZ_blockinfile.py'
Dec 05 00:59:53 compute-0 sudo[82992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:53 compute-0 python3.9[82994]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:53 compute-0 sudo[82992]: pam_unix(sudo:session): session closed for user root
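[editor's note] The blockinfile task splices the include lines into /etc/sysconfig/nftables.conf between "ANSIBLE MANAGED BLOCK" markers and, via validate, parses the candidate file with nft -c -f %s before committing it. Reconstructed from the logged parameters:

    - name: Wire EDPM rule files into the persistent nftables config
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"
      become: true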
Dec 05 00:59:54 compute-0 sudo[83144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnthymahjwbdwuvjmpfaaugywynteghe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896394.073356-257-118322286307742/AnsiballZ_command.py'
Dec 05 00:59:54 compute-0 sudo[83144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:54 compute-0 python3.9[83146]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:54 compute-0 sudo[83144]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:55 compute-0 sudo[83297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qurcrqwtldftegkkdpiypafnoairnbls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896394.8735282-265-260336611468713/AnsiballZ_stat.py'
Dec 05 00:59:55 compute-0 sudo[83297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:55 compute-0 python3.9[83299]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 00:59:55 compute-0 sudo[83297]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:55 compute-0 sudo[83451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggpsgcabjsfayrjtpazpabrwxbqzdtsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896395.6134546-273-132942549363142/AnsiballZ_command.py'
Dec 05 00:59:55 compute-0 sudo[83451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:56 compute-0 python3.9[83453]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:56 compute-0 sudo[83451]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:56 compute-0 sudo[83606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrvpzsyqsottaiwhroewohomnnuphct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896396.353553-281-181277530439279/AnsiballZ_file.py'
Dec 05 00:59:56 compute-0 sudo[83606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:56 compute-0 python3.9[83608]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 00:59:56 compute-0 sudo[83606]: pam_unix(sudo:session): session closed for user root
Dec 05 00:59:57 compute-0 python3.9[83758]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 00:59:58 compute-0 sudo[83909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhvyyaoauwbkfddigtbtqenmfxxfvvwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896398.5623314-321-245682918420267/AnsiballZ_command.py'
Dec 05 00:59:58 compute-0 sudo[83909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:59 compute-0 python3.9[83911]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:59 compute-0 ovs-vsctl[83912]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 05 00:59:59 compute-0 sudo[83909]: pam_unix(sudo:session): session closed for user root
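[note] This ovs-vsctl call seeds all chassis-level OVN settings in one transaction: the geneve encap endpoint (172.19.0.100), the bridge mapping (datacentre:br-ex), and the SSL southbound remote. A read-back sketch using the same keys the task just wrote:

    # confirm the values landed in the Open_vSwitch table
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
    ovs-vsctl get Open_vSwitch . external_ids:ovn-bridge-mappings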
Dec 05 00:59:59 compute-0 sudo[84062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvjvkdfkwfrvrxmegfcaasqgphesjlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896399.3882313-330-241108057703579/AnsiballZ_command.py'
Dec 05 00:59:59 compute-0 sudo[84062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 00:59:59 compute-0 python3.9[84064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 00:59:59 compute-0 sudo[84062]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:00 compute-0 sudo[84217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylixnzwescftkmktnzrmhyrhwqarztf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896400.0277576-338-131541759221083/AnsiballZ_command.py'
Dec 05 01:00:00 compute-0 sudo[84217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:00 compute-0 python3.9[84219]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:00:00 compute-0 ovs-vsctl[84220]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 05 01:00:00 compute-0 sudo[84217]: pam_unix(sudo:session): session closed for user root
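[note] The preceding check (`ovs-vsctl show | grep -q "Manager"`) guards this step: the ptcp:6640 Manager is only created when none is registered yet, which keeps the task idempotent. To list the registered manager targets afterwards (a sketch; get-manager is standard ovs-vsctl):

    # prints e.g. ptcp:6640:127.0.0.1
    ovs-vsctl get-manager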
Dec 05 01:00:01 compute-0 python3.9[84370]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:00:01 compute-0 sudo[84522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksdncfnooeuxgtwzhqpnzyxuksayutzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896401.4047792-355-74067256436443/AnsiballZ_file.py'
Dec 05 01:00:01 compute-0 sudo[84522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:01 compute-0 python3.9[84524]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:01 compute-0 sudo[84522]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:02 compute-0 sudo[84674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snfpmlbubwcuhqsydvmojcpjyngvrdwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896402.185082-363-15564602838781/AnsiballZ_stat.py'
Dec 05 01:00:02 compute-0 sudo[84674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:02 compute-0 python3.9[84676]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:02 compute-0 sudo[84674]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:02 compute-0 sudo[84752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcdogxrzsxqbywdzjxbybmdepwvucsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896402.185082-363-15564602838781/AnsiballZ_file.py'
Dec 05 01:00:02 compute-0 sudo[84752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:03 compute-0 python3.9[84754]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:03 compute-0 sudo[84752]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:03 compute-0 sudo[84904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alizwrgwmxvzxltavgrucumwoflqdqfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896403.4088895-363-93303701709043/AnsiballZ_stat.py'
Dec 05 01:00:03 compute-0 sudo[84904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:04 compute-0 python3.9[84906]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:04 compute-0 sudo[84904]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:04 compute-0 sudo[84982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjuntyntcozdlhfjapzfnqvnyatajsdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896403.4088895-363-93303701709043/AnsiballZ_file.py'
Dec 05 01:00:04 compute-0 sudo[84982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:04 compute-0 python3.9[84984]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:04 compute-0 sudo[84982]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:05 compute-0 sudo[85134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtwveupkjlmlkhestprpssksltjsajdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896404.8260896-386-37810872288026/AnsiballZ_file.py'
Dec 05 01:00:05 compute-0 sudo[85134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:05 compute-0 python3.9[85136]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:05 compute-0 sudo[85134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:05 compute-0 sudo[85286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgecjpthybwelhfbumedaawxjbirjdcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896405.553463-394-281309960812391/AnsiballZ_stat.py'
Dec 05 01:00:05 compute-0 sudo[85286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:06 compute-0 python3.9[85288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:06 compute-0 sudo[85286]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:06 compute-0 sudo[85364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmgiweujvofcqvufznynimxkrtehpjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896405.553463-394-281309960812391/AnsiballZ_file.py'
Dec 05 01:00:06 compute-0 sudo[85364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:06 compute-0 python3.9[85366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:06 compute-0 sudo[85364]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:07 compute-0 sudo[85516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsstyfnkkbzgaidmlpjmcbfdwekhizma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896406.763165-406-120192404371258/AnsiballZ_stat.py'
Dec 05 01:00:07 compute-0 sudo[85516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:07 compute-0 python3.9[85518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:07 compute-0 sudo[85516]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:07 compute-0 sudo[85594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrtyosadwrkoltxgwndojhplcfmmsal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896406.763165-406-120192404371258/AnsiballZ_file.py'
Dec 05 01:00:07 compute-0 sudo[85594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:07 compute-0 python3.9[85596]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:07 compute-0 sudo[85594]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:08 compute-0 sudo[85746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snffquibkmoceafbsraddammlbsnordo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896407.9686027-418-86053881582077/AnsiballZ_systemd.py'
Dec 05 01:00:08 compute-0 sudo[85746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:08 compute-0 python3.9[85748]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:08 compute-0 systemd[1]: Reloading.
Dec 05 01:00:08 compute-0 systemd-rc-local-generator[85778]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:08 compute-0 systemd-sysv-generator[85782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:08 compute-0 sudo[85746]: pam_unix(sudo:session): session closed for user root
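[note] edpm-container-shutdown is installed as a unit file plus a 91-* preset, then enabled and started with daemon_reload=True so systemd re-reads the freshly copied unit before acting on it. A verification sketch:

    systemctl is-enabled edpm-container-shutdown.service
    systemctl status edpm-container-shutdown.service --no-pager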
Dec 05 01:00:09 compute-0 sudo[85936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqodgcnrfpahszzqiauwejshvezzxbts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896408.9897146-426-178127953547284/AnsiballZ_stat.py'
Dec 05 01:00:09 compute-0 sudo[85936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:09 compute-0 python3.9[85938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:09 compute-0 sudo[85936]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:09 compute-0 sudo[86014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbgcmnwkchhsxklgnubhkpbdplywayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896408.9897146-426-178127953547284/AnsiballZ_file.py'
Dec 05 01:00:09 compute-0 sudo[86014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:10 compute-0 python3.9[86016]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:10 compute-0 sudo[86014]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:10 compute-0 sudo[86166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chfbrnpmupyhkwnxcuttyrqtzyrqlloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896410.2831388-438-246203765917585/AnsiballZ_stat.py'
Dec 05 01:00:10 compute-0 sudo[86166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:10 compute-0 python3.9[86168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:10 compute-0 sudo[86166]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:11 compute-0 sudo[86244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bksdeqtxmsywusvpcmzdymwzgnbuiulh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896410.2831388-438-246203765917585/AnsiballZ_file.py'
Dec 05 01:00:11 compute-0 sudo[86244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:11 compute-0 python3.9[86246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:11 compute-0 sudo[86244]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:12 compute-0 sudo[86396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrnzjeoeamofwpdgrempuppygmbquvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896411.6469562-450-91467374183662/AnsiballZ_systemd.py'
Dec 05 01:00:12 compute-0 sudo[86396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:12 compute-0 python3.9[86398]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:12 compute-0 systemd[1]: Reloading.
Dec 05 01:00:12 compute-0 systemd-rc-local-generator[86421]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:12 compute-0 systemd-sysv-generator[86428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:12 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 01:00:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 01:00:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 01:00:12 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 01:00:12 compute-0 sudo[86396]: pam_unix(sudo:session): session closed for user root
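[note] netns-placeholder runs once and exits immediately ("Deactivated successfully" is normal for a oneshot unit); judging from the unit and mount names, its job is to make sure /run/netns exists and is mounted before containers need network namespaces. A sketch to confirm the directory and mount:

    ls -ld /run/netns
    findmnt /run/netns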
Dec 05 01:00:13 compute-0 sudo[86589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxgodyhkutkqrtmhmsdfjqxdqbiljju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896413.0479074-460-72582635167006/AnsiballZ_file.py'
Dec 05 01:00:13 compute-0 sudo[86589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:13 compute-0 python3.9[86591]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:13 compute-0 sudo[86589]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:14 compute-0 sudo[86741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oflsgosdewonnytenmhzojvxejrqqytu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896413.8710167-468-58731839497891/AnsiballZ_stat.py'
Dec 05 01:00:14 compute-0 sudo[86741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:14 compute-0 python3.9[86743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:14 compute-0 sudo[86741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:14 compute-0 sudo[86864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtyolvbxmmsddkieitginoektkfmpri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896413.8710167-468-58731839497891/AnsiballZ_copy.py'
Dec 05 01:00:14 compute-0 sudo[86864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:15 compute-0 python3.9[86866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896413.8710167-468-58731839497891/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:15 compute-0 sudo[86864]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:15 compute-0 sudo[87016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhlpxmqaeouookmhxuafnihvyinmcrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896415.4016998-485-202519683804015/AnsiballZ_file.py'
Dec 05 01:00:15 compute-0 sudo[87016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:15 compute-0 python3.9[87018]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:00:15 compute-0 sudo[87016]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:16 compute-0 sudo[87168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevzhyygqwbbctcwbtbbobsadarozttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896416.1009028-493-163454999765026/AnsiballZ_stat.py'
Dec 05 01:00:16 compute-0 sudo[87168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:16 compute-0 python3.9[87170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:00:16 compute-0 sudo[87168]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:16 compute-0 sudo[87291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzzkseuyzrubysvfqqsutheqzzrobwfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896416.1009028-493-163454999765026/AnsiballZ_copy.py'
Dec 05 01:00:16 compute-0 sudo[87291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:17 compute-0 python3.9[87293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896416.1009028-493-163454999765026/.source.json _original_basename=.09y95qyj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:17 compute-0 sudo[87291]: pam_unix(sudo:session): session closed for user root
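[note] ovn_controller.json is the kolla config the container consumes as /var/lib/kolla/config_files/config.json (see the volume list in the podman create further down); the checksum recorded above lets Ansible skip the copy when the file is unchanged. A quick syntax check of what was written (a sketch):

    # fails loudly if the JSON is malformed
    python3 -m json.tool /var/lib/kolla/config_files/ovn_controller.json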
Dec 05 01:00:17 compute-0 sudo[87443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoxxtbmzihgymxlipodxqdjazldapufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896417.4108682-508-38657231823474/AnsiballZ_file.py'
Dec 05 01:00:17 compute-0 sudo[87443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:18 compute-0 python3.9[87445]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:18 compute-0 sudo[87443]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:18 compute-0 sudo[87595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcfthwhguwbgbvoggngayvmfprgblxit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896418.255087-516-134718801425786/AnsiballZ_stat.py'
Dec 05 01:00:18 compute-0 sudo[87595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:18 compute-0 sudo[87595]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:19 compute-0 sudo[87718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkjxkeetfckvvbwrutxjojstsrwmgjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896418.255087-516-134718801425786/AnsiballZ_copy.py'
Dec 05 01:00:19 compute-0 sudo[87718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:19 compute-0 sudo[87718]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:20 compute-0 sudo[87870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxptblmiaianzxpjvyrlrzebwkhodnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896419.8819296-533-263744823189080/AnsiballZ_container_config_data.py'
Dec 05 01:00:20 compute-0 sudo[87870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:20 compute-0 python3.9[87872]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 05 01:00:20 compute-0 sudo[87870]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:21 compute-0 sudo[88022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccaobbwkdgqnqtwoqyjguxewgnmqtxhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896420.863139-542-36751599798827/AnsiballZ_container_config_hash.py'
Dec 05 01:00:21 compute-0 sudo[88022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:21 compute-0 python3.9[88024]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:00:21 compute-0 sudo[88022]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:22 compute-0 sudo[88174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jriqpldhohcevmwzywbpmviswphzumnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896421.8562655-551-176342998210359/AnsiballZ_podman_container_info.py'
Dec 05 01:00:22 compute-0 sudo[88174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:22 compute-0 python3.9[88176]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 01:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 01:00:22 compute-0 sudo[88174]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:23 compute-0 sudo[88337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanabzinanmajveeijywjtdwytgrmwoe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896423.1049616-564-214751654422566/AnsiballZ_edpm_container_manage.py'
Dec 05 01:00:23 compute-0 sudo[88337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:23 compute-0 python3[88339]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 01:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1693855093-lower\x2dmapped.mount: Deactivated successfully.
Dec 05 01:00:29 compute-0 podman[88352]: 2025-12-05 01:00:29.662391392 +0000 UTC m=+5.608239358 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 01:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 01:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 01:00:29 compute-0 podman[88472]: 2025-12-05 01:00:29.883129873 +0000 UTC m=+0.062864896 container create d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:00:29 compute-0 podman[88472]: 2025-12-05 01:00:29.851574633 +0000 UTC m=+0.031309626 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 01:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 05 01:00:29 compute-0 python3[88339]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 01:00:30 compute-0 sudo[88337]: pam_unix(sudo:session): session closed for user root
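[note] The PODMAN-CONTAINER-DEBUG line above is the exact create command edpm_container_manage derived from the JSON config: host networking, privileged, restart=always, with the whole config_data dict stored as a container label. A sketch to read those labels back from the created (not yet started) container:

    podman inspect ovn_controller \
        --format '{{index .Config.Labels "config_id"}} {{index .Config.Labels "managed_by"}}'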
Dec 05 01:00:30 compute-0 sudo[88660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfxginjftewlkcrqvldnsndcezwjnydf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896430.307386-572-156948759691011/AnsiballZ_stat.py'
Dec 05 01:00:30 compute-0 sudo[88660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:30 compute-0 python3.9[88662]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:00:30 compute-0 sudo[88660]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:31 compute-0 sudo[88814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgpgskkpqendbxkgcquxqaurzteiqljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896431.191944-581-246871621986190/AnsiballZ_file.py'
Dec 05 01:00:31 compute-0 sudo[88814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:31 compute-0 python3.9[88816]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:31 compute-0 sudo[88814]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:31 compute-0 sudo[88890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozjaplphyjfleqjcdokcdpjvegckslgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896431.191944-581-246871621986190/AnsiballZ_stat.py'
Dec 05 01:00:31 compute-0 sudo[88890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:32 compute-0 python3.9[88892]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:00:32 compute-0 sudo[88890]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:32 compute-0 sudo[89041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvavtgjovtqhlttdtofsgtxasdrdfezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896432.2564032-581-234032621706389/AnsiballZ_copy.py'
Dec 05 01:00:32 compute-0 sudo[89041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:32 compute-0 python3.9[89043]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896432.2564032-581-234032621706389/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:00:32 compute-0 sudo[89041]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:33 compute-0 sudo[89117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safiskorvbnpntjcvmgtvclezihmrbze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896432.2564032-581-234032621706389/AnsiballZ_systemd.py'
Dec 05 01:00:33 compute-0 sudo[89117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:33 compute-0 python3.9[89119]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:00:33 compute-0 systemd[1]: Reloading.
Dec 05 01:00:33 compute-0 systemd-rc-local-generator[89140]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:33 compute-0 systemd-sysv-generator[89148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:33 compute-0 sudo[89117]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:33 compute-0 sudo[89229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkewbgcvshvponrlsmothistetcukjwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896432.2564032-581-234032621706389/AnsiballZ_systemd.py'
Dec 05 01:00:33 compute-0 sudo[89229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:34 compute-0 python3.9[89231]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:34 compute-0 systemd[1]: Reloading.
Dec 05 01:00:34 compute-0 systemd-rc-local-generator[89260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:34 compute-0 systemd-sysv-generator[89265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:34 compute-0 systemd[1]: Starting ovn_controller container...
Dec 05 01:00:34 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 05 01:00:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:00:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca743cda8be747f1c67d276c4b62aeeadba99090713c7c0f5f1be3652a04951/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 01:00:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.
Dec 05 01:00:34 compute-0 podman[89272]: 2025-12-05 01:00:34.893372299 +0000 UTC m=+0.191138383 container init d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:00:34 compute-0 ovn_controller[89286]: + sudo -E kolla_set_configs
Dec 05 01:00:34 compute-0 podman[89272]: 2025-12-05 01:00:34.92866335 +0000 UTC m=+0.226429334 container start d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 01:00:34 compute-0 edpm-start-podman-container[89272]: ovn_controller
Dec 05 01:00:34 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 05 01:00:34 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 05 01:00:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 05 01:00:35 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 05 01:00:35 compute-0 edpm-start-podman-container[89271]: Creating additional drop-in dependency for "ovn_controller" (d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d)
Dec 05 01:00:35 compute-0 systemd[89327]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 05 01:00:35 compute-0 podman[89293]: 2025-12-05 01:00:35.054310797 +0000 UTC m=+0.099528692 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:00:35 compute-0 systemd[1]: d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d-67d8d571b88bbdcc.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:00:35 compute-0 systemd[1]: d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d-67d8d571b88bbdcc.service: Failed with result 'exit-code'.
Dec 05 01:00:35 compute-0 systemd[1]: Reloading.
Dec 05 01:00:35 compute-0 systemd[89327]: Queued start job for default target Main User Target.
Dec 05 01:00:35 compute-0 systemd[89327]: Created slice User Application Slice.
Dec 05 01:00:35 compute-0 systemd[89327]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 05 01:00:35 compute-0 systemd[89327]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 01:00:35 compute-0 systemd[89327]: Reached target Paths.
Dec 05 01:00:35 compute-0 systemd[89327]: Reached target Timers.
Dec 05 01:00:35 compute-0 systemd-rc-local-generator[89374]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:35 compute-0 systemd[89327]: Starting D-Bus User Message Bus Socket...
Dec 05 01:00:35 compute-0 systemd[89327]: Starting Create User's Volatile Files and Directories...
Dec 05 01:00:35 compute-0 systemd-sysv-generator[89379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:35 compute-0 systemd[89327]: Finished Create User's Volatile Files and Directories.
Dec 05 01:00:35 compute-0 systemd[89327]: Listening on D-Bus User Message Bus Socket.
Dec 05 01:00:35 compute-0 systemd[89327]: Reached target Sockets.
Dec 05 01:00:35 compute-0 systemd[89327]: Reached target Basic System.
Dec 05 01:00:35 compute-0 systemd[89327]: Reached target Main User Target.
Dec 05 01:00:35 compute-0 systemd[89327]: Startup finished in 146ms.
Dec 05 01:00:35 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 05 01:00:35 compute-0 systemd[1]: Started ovn_controller container.
Dec 05 01:00:35 compute-0 systemd[1]: Started Session c1 of User root.
Dec 05 01:00:35 compute-0 sudo[89229]: pam_unix(sudo:session): session closed for user root
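[note] At this point the Ansible task is done: edpm_ovn_controller.service was enabled and (re)started, and systemd reports "Started ovn_controller container". The one-off healthcheck failure at 01:00:35 (status=1/FAILURE) is the first probe firing while the container is still in health_status=starting, so a single failing streak here is expected. A post-start verification sketch:

    systemctl status edpm_ovn_controller.service --no-pager
    podman ps --filter name=ovn_controller
    podman healthcheck run ovn_controller && echo healthy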
Dec 05 01:00:35 compute-0 ovn_controller[89286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:00:35 compute-0 ovn_controller[89286]: INFO:__main__:Validating config file
Dec 05 01:00:35 compute-0 ovn_controller[89286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:00:35 compute-0 ovn_controller[89286]: INFO:__main__:Writing out command to execute
Dec 05 01:00:35 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: ++ cat /run_command
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + ARGS=
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + sudo kolla_copy_cacerts
Dec 05 01:00:35 compute-0 systemd[1]: Started Session c2 of User root.
Dec 05 01:00:35 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + [[ ! -n '' ]]
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + . kolla_extend_start
Dec 05 01:00:35 compute-0 ovn_controller[89286]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + umask 0022
Dec 05 01:00:35 compute-0 ovn_controller[89286]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5464] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5474] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5486] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5493] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5500] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 01:00:35 compute-0 kernel: br-int: entered promiscuous mode
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 01:00:35 compute-0 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.5793] manager: (ovn-f2dffe-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 05 01:00:35 compute-0 systemd-udevd[89430]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:00:35 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 05 01:00:35 compute-0 systemd-udevd[89441]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.6107] device (genev_sys_6081): carrier: link connected
Dec 05 01:00:35 compute-0 NetworkManager[49092]: <info>  [1764896435.6116] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
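[note] The startup sequence above is healthy: ovn-controller connects to the local ovsdb over unix:/run/openvswitch/db.sock, then to the southbound DB over SSL with the mounted ovndb certs, creates br-int, and brings up the geneve tunnel interface (genev_sys_6081). A sketch to confirm the chassis is wired up (ovn-appctl lives inside the container, so go through podman exec; the connection-status subcommand depends on the OVN build):

    podman exec ovn_controller ovn-appctl -t ovn-controller connection-status
    ovs-vsctl list-br    # should include br-int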
Dec 05 01:00:35 compute-0 sudo[89552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmiicnvbirearbaqabmndelhufmwcfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896435.6168354-609-60719654352016/AnsiballZ_command.py'
Dec 05 01:00:35 compute-0 sudo[89552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:36 compute-0 python3.9[89554]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:00:36 compute-0 ovs-vsctl[89555]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 05 01:00:36 compute-0 sudo[89552]: pam_unix(sudo:session): session closed for user root
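[note] Unlike get, `ovs-vsctl remove` is a no-op when the key is already absent, so clearing other_config:hw-offload is safe to re-run. Read-back sketch:

    # hw-offload should no longer appear in the map
    ovs-vsctl get Open_vSwitch . other_config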
Dec 05 01:00:36 compute-0 sudo[89705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqcgbeaduomvzltofkovmwshhcvvcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896436.4875898-617-16868496949811/AnsiballZ_command.py'
Dec 05 01:00:36 compute-0 sudo[89705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:37 compute-0 python3.9[89707]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:00:37 compute-0 ovs-vsctl[89709]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 05 01:00:37 compute-0 sudo[89705]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:37 compute-0 sudo[89860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gajbrulfamuyoylonriacvqffnjpdnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896437.6046777-631-162147821751064/AnsiballZ_command.py'
Dec 05 01:00:37 compute-0 sudo[89860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:38 compute-0 python3.9[89862]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:00:38 compute-0 ovs-vsctl[89863]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 05 01:00:38 compute-0 sudo[89860]: pam_unix(sudo:session): session closed for user root
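[annotation] The ERR record at 01:00:37 is the expected failure mode here: ovs-vsctl get exits non-zero when the requested external_ids key is missing, and the play then falls through to a plain remove, which succeeds. A sketch of reading the same optional key without tripping that error, using get's --if-exists flag (helper name is mine):

    import subprocess

    def get_ovn_cms_options() -> str:
        # with --if-exists a missing key prints nothing instead of raising
        # the db_ctl_base error seen above
        proc = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True,
        )
        # present values come back quoted, hence the sed 's/\"//g' in the task
        return proc.stdout.strip().strip('"')

    print(get_ovn_cms_options())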
Dec 05 01:00:38 compute-0 sshd-session[78695]: Connection closed by 192.168.122.30 port 37580
Dec 05 01:00:38 compute-0 sshd-session[78692]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:00:38 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 05 01:00:38 compute-0 systemd[1]: session-19.scope: Consumed 1min 2.506s CPU time.
Dec 05 01:00:38 compute-0 systemd-logind[792]: Session 19 logged out. Waiting for processes to exit.
Dec 05 01:00:38 compute-0 systemd-logind[792]: Removed session 19.
Dec 05 01:00:43 compute-0 sshd-session[89888]: Accepted publickey for zuul from 192.168.122.30 port 52400 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:00:43 compute-0 systemd-logind[792]: New session 21 of user zuul.
Dec 05 01:00:43 compute-0 systemd[1]: Started Session 21 of User zuul.
Dec 05 01:00:43 compute-0 sshd-session[89888]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:00:44 compute-0 python3.9[90041]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:00:45 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 05 01:00:45 compute-0 systemd[89327]: Activating special unit Exit the Session...
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped target Main User Target.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped target Basic System.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped target Paths.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped target Sockets.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped target Timers.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 01:00:45 compute-0 systemd[89327]: Closed D-Bus User Message Bus Socket.
Dec 05 01:00:45 compute-0 systemd[89327]: Stopped Create User's Volatile Files and Directories.
Dec 05 01:00:45 compute-0 systemd[89327]: Removed slice User Application Slice.
Dec 05 01:00:45 compute-0 systemd[89327]: Reached target Shutdown.
Dec 05 01:00:45 compute-0 systemd[89327]: Finished Exit the Session.
Dec 05 01:00:45 compute-0 systemd[89327]: Reached target Exit the Session.
Dec 05 01:00:45 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 05 01:00:45 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 05 01:00:45 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 05 01:00:45 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 05 01:00:45 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 05 01:00:45 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 05 01:00:45 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 05 01:00:46 compute-0 sudo[90198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnvupjxfrbrnqfxytpvslkmmlxanktfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896445.5018506-34-152246275205757/AnsiballZ_command.py'
Dec 05 01:00:46 compute-0 sudo[90198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:46 compute-0 python3.9[90200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:00:46 compute-0 sudo[90198]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:47 compute-0 sudo[90363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcoyebkzvlunulyccmhlvyvxgebviebn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896446.8029432-45-57402981742684/AnsiballZ_systemd_service.py'
Dec 05 01:00:47 compute-0 sudo[90363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:47 compute-0 python3.9[90365]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:00:47 compute-0 systemd[1]: Reloading.
Dec 05 01:00:47 compute-0 systemd-rc-local-generator[90391]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:00:47 compute-0 systemd-sysv-generator[90395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:00:48 compute-0 sudo[90363]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:48 compute-0 python3.9[90549]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:00:50 compute-0 network[90566]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:00:50 compute-0 network[90567]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:00:50 compute-0 network[90568]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:00:56 compute-0 sudo[90828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvvxmflcdurkeetatpsgpwfdsxddiflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896455.8308544-64-225230002232016/AnsiballZ_systemd_service.py'
Dec 05 01:00:56 compute-0 sudo[90828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:56 compute-0 python3.9[90830]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:56 compute-0 sudo[90828]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:57 compute-0 sudo[90981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oerzbmozqpxjpwcnnxfvnrerbdejscfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896456.72942-64-189542245016575/AnsiballZ_systemd_service.py'
Dec 05 01:00:57 compute-0 sudo[90981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:57 compute-0 python3.9[90983]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:58 compute-0 sudo[90981]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:59 compute-0 sudo[91134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwqcoquezwgodbytbdssmdyggnwelhhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896458.7347543-64-79532572012350/AnsiballZ_systemd_service.py'
Dec 05 01:00:59 compute-0 sudo[91134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:00:59 compute-0 python3.9[91136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:00:59 compute-0 sudo[91134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:00:59 compute-0 sudo[91287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsaadhbqwiczdemljqkitfmfcxolzczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896459.6519084-64-245802860038340/AnsiballZ_systemd_service.py'
Dec 05 01:00:59 compute-0 sudo[91287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:00 compute-0 python3.9[91289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:01:00 compute-0 sudo[91287]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:00 compute-0 sudo[91440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqgdxzpzlqwqmlgncliwqsfrouzhfhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896460.4726396-64-233384966888242/AnsiballZ_systemd_service.py'
Dec 05 01:01:00 compute-0 sudo[91440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:01 compute-0 python3.9[91442]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:01:01 compute-0 sudo[91440]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:01 compute-0 sudo[91593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefoetlphwddvpykzlkhnrrsivirhjdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896461.4688694-64-115390987160185/AnsiballZ_systemd_service.py'
Dec 05 01:01:01 compute-0 sudo[91593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:01 compute-0 CROND[91597]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 01:01:02 compute-0 run-parts[91600]: (/etc/cron.hourly) starting 0anacron
Dec 05 01:01:02 compute-0 anacron[91608]: Anacron started on 2025-12-05
Dec 05 01:01:02 compute-0 anacron[91608]: Will run job `cron.daily' in 14 min.
Dec 05 01:01:02 compute-0 anacron[91608]: Will run job `cron.weekly' in 34 min.
Dec 05 01:01:02 compute-0 anacron[91608]: Will run job `cron.monthly' in 54 min.
Dec 05 01:01:02 compute-0 anacron[91608]: Jobs will be executed sequentially
Dec 05 01:01:02 compute-0 run-parts[91610]: (/etc/cron.hourly) finished 0anacron
Dec 05 01:01:02 compute-0 CROND[91596]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 01:01:02 compute-0 python3.9[91595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:01:02 compute-0 sudo[91593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:02 compute-0 sudo[91761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyjybtnklvwlnnmaagnmdccpohhiets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896462.3412905-64-80287209528201/AnsiballZ_systemd_service.py'
Dec 05 01:01:02 compute-0 sudo[91761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:03 compute-0 python3.9[91763]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:01:03 compute-0 sudo[91761]: pam_unix(sudo:session): session closed for user root
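[annotation] From 01:00:56 through 01:01:03 the play walks a fixed list of TripleO nova units, one systemd_service task each, with enabled=False state=stopped. The same sweep condensed into one loop (a sketch, not the module's actual implementation):

    import subprocess

    SERVICES = ["virtlogd_wrapper", "virtnodedevd", "virtproxyd",
                "virtqemud", "virtsecretd", "virtstoraged"]
    UNITS = ["tripleo_nova_libvirt.target"] + [
        f"tripleo_nova_{name}.service" for name in SERVICES]

    for unit in UNITS:
        # "disable --now" stops the unit and drops its enablement symlinks,
        # matching enabled=False state=stopped; check=False so a unit that
        # is already gone does not abort the sweep
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)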
Dec 05 01:01:04 compute-0 sudo[91914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iojlrjdnghkiodurrddvmkmblatetvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896463.6209433-116-153358996782440/AnsiballZ_file.py'
Dec 05 01:01:04 compute-0 sudo[91914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:04 compute-0 python3.9[91916]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:04 compute-0 sudo[91914]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:05 compute-0 sudo[92066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cekbklnycipvnsvoatovvmhowydqxmwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896464.790288-116-244112835412948/AnsiballZ_file.py'
Dec 05 01:01:05 compute-0 sudo[92066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:05 compute-0 ovn_controller[89286]: 2025-12-05T01:01:05Z|00025|memory|INFO|16000 kB peak resident set size after 29.6 seconds
Dec 05 01:01:05 compute-0 ovn_controller[89286]: 2025-12-05T01:01:05Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 05 01:01:05 compute-0 podman[92068]: 2025-12-05 01:01:05.188667418 +0000 UTC m=+0.089653596 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:01:05 compute-0 python3.9[92069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:05 compute-0 sudo[92066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:05 compute-0 sudo[92245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnlgjytnmvowpkshskiasptfhmsnggl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896465.6045408-116-50500563151984/AnsiballZ_file.py'
Dec 05 01:01:05 compute-0 sudo[92245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:06 compute-0 python3.9[92247]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:06 compute-0 sudo[92245]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:06 compute-0 sudo[92397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqlrebbbjcmhrddnghtxhffpqyuwbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896466.359199-116-269911469033126/AnsiballZ_file.py'
Dec 05 01:01:06 compute-0 sudo[92397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:06 compute-0 python3.9[92399]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:06 compute-0 sudo[92397]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:07 compute-0 sudo[92549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pooxpfmvhzowhskvuynauxritbyduyjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896466.9896462-116-25667101209269/AnsiballZ_file.py'
Dec 05 01:01:07 compute-0 sudo[92549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:07 compute-0 python3.9[92551]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:07 compute-0 sudo[92549]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:07 compute-0 sudo[92701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njibfvmvihrphfxvppslgphounpioyel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896467.6006503-116-144169317944925/AnsiballZ_file.py'
Dec 05 01:01:07 compute-0 sudo[92701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:08 compute-0 python3.9[92703]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:08 compute-0 sudo[92701]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:08 compute-0 sudo[92853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onppmbhartxwlkjlkjkocnnhraufrapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896468.2787256-116-33223537060424/AnsiballZ_file.py'
Dec 05 01:01:08 compute-0 sudo[92853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:08 compute-0 python3.9[92855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:08 compute-0 sudo[92853]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:09 compute-0 sudo[93005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhecnunjwumdggoajbhqvwkfxdszwjvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896469.0781307-166-138865031713866/AnsiballZ_file.py'
Dec 05 01:01:09 compute-0 sudo[93005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:09 compute-0 python3.9[93007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:09 compute-0 sudo[93005]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:10 compute-0 sudo[93157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtwxjsaydmtlxnvfulmqjfdxrpduojs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896469.821071-166-259910636170211/AnsiballZ_file.py'
Dec 05 01:01:10 compute-0 sudo[93157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:10 compute-0 python3.9[93159]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:10 compute-0 sudo[93157]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:11 compute-0 sudo[93309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmczufmiuuojnoiblwddafyhwyfkpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896470.71629-166-120049174825193/AnsiballZ_file.py'
Dec 05 01:01:11 compute-0 sudo[93309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:11 compute-0 python3.9[93311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:11 compute-0 sudo[93309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:11 compute-0 sudo[93461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkcoowwdeyfguhvapfjjwhjfdqudwcim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896471.4402096-166-56659619092649/AnsiballZ_file.py'
Dec 05 01:01:11 compute-0 sudo[93461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:11 compute-0 python3.9[93463]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:11 compute-0 sudo[93461]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:12 compute-0 sudo[93613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjihljzeylgibqoizpbohxrtnqsolbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896472.0409873-166-61415810964718/AnsiballZ_file.py'
Dec 05 01:01:12 compute-0 sudo[93613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:12 compute-0 python3.9[93615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:12 compute-0 sudo[93613]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:12 compute-0 sudo[93765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usmdwzycasbmmwozbnqmshqrammxlwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896472.63626-166-117552599484700/AnsiballZ_file.py'
Dec 05 01:01:12 compute-0 sudo[93765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:13 compute-0 python3.9[93767]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:13 compute-0 sudo[93765]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:13 compute-0 sudo[93917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxwmvtsbwqysztslreltedfcdjevbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896473.3551023-166-192551528506398/AnsiballZ_file.py'
Dec 05 01:01:13 compute-0 sudo[93917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:13 compute-0 python3.9[93919]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:01:13 compute-0 sudo[93917]: pam_unix(sudo:session): session closed for user root
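[annotation] The file tasks above delete each unit file twice, once under /usr/lib/systemd/system (the packaged copy) and once under /etc/systemd/system (local overrides), before the daemon-reload that follows at 01:01:16. A compact sketch of the same removal, reusing the unit list shape from the previous sketch:

    from pathlib import Path
    import subprocess

    SERVICES = ["virtlogd_wrapper", "virtnodedevd", "virtproxyd",
                "virtqemud", "virtsecretd", "virtstoraged"]
    UNITS = ["tripleo_nova_libvirt.target"] + [
        f"tripleo_nova_{name}.service" for name in SERVICES]

    for unit in UNITS:
        for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            # state=absent in the file module: delete if present, silent if not
            Path(unit_dir, unit).unlink(missing_ok=True)

    # picked up by the daemon_reload=True task (the "Reloading." record)
    subprocess.run(["systemctl", "daemon-reload"], check=True)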
Dec 05 01:01:14 compute-0 sudo[94069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netrzmwryatynztuyigbqpfgcvkicjjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896474.2204514-217-62581083284063/AnsiballZ_command.py'
Dec 05 01:01:14 compute-0 sudo[94069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:14 compute-0 python3.9[94071]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:14 compute-0 sudo[94069]: pam_unix(sudo:session): session closed for user root
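[annotation] The shell block logged above quiesces certmonger conditionally: stop and disable it only if it is active, then mask it only when /etc/systemd/system/certmonger.service does not already exist (the test -f ... || mask guard). The same logic as a Python sketch (function name is mine):

    import subprocess
    from pathlib import Path

    def quiesce_certmonger() -> None:
        # "is-active" exits 0 only while the unit is active
        active = subprocess.run(
            ["systemctl", "is-active", "certmonger.service"],
            capture_output=True).returncode == 0
        if active:
            subprocess.run(
                ["systemctl", "disable", "--now", "certmonger.service"],
                check=True)
            # mask only if no local unit file would be shadowed by the
            # /dev/null symlink that masking creates
            if not Path("/etc/systemd/system/certmonger.service").is_file():
                subprocess.run(
                    ["systemctl", "mask", "certmonger.service"], check=False)

    quiesce_certmonger()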
Dec 05 01:01:15 compute-0 python3.9[94223]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:01:16 compute-0 sudo[94373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zskwcgpuqecocjzukgadztdcakatqaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896476.0243943-235-163406646270271/AnsiballZ_systemd_service.py'
Dec 05 01:01:16 compute-0 sudo[94373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:16 compute-0 python3.9[94375]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:01:16 compute-0 systemd[1]: Reloading.
Dec 05 01:01:16 compute-0 systemd-rc-local-generator[94403]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:01:16 compute-0 systemd-sysv-generator[94407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:01:17 compute-0 sudo[94373]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:17 compute-0 sudo[94561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mknidckmzshxlrnptrheyhouxzhjkciz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896477.254412-243-38186000515672/AnsiballZ_command.py'
Dec 05 01:01:17 compute-0 sudo[94561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:17 compute-0 python3.9[94563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:17 compute-0 sudo[94561]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:18 compute-0 sudo[94714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrufozkgdwjufutzramwtivcohxbosuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896478.092088-243-99913607018982/AnsiballZ_command.py'
Dec 05 01:01:18 compute-0 sudo[94714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:18 compute-0 python3.9[94716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:18 compute-0 sudo[94714]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:19 compute-0 sudo[94867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glcbwaqknwoefmnwrmnqirudywjaseiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896478.8234003-243-45419470270828/AnsiballZ_command.py'
Dec 05 01:01:19 compute-0 sudo[94867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:19 compute-0 python3.9[94869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:19 compute-0 sudo[94867]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:19 compute-0 sudo[95020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgozgzjkdzyheodcymdhgcfaefkbbmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896479.479777-243-240622102369959/AnsiballZ_command.py'
Dec 05 01:01:19 compute-0 sudo[95020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:20 compute-0 python3.9[95022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:20 compute-0 sudo[95020]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:20 compute-0 sudo[95173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awzsmsyprmazmipnxhazhczbbxyhphtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896480.2647123-243-217411459016401/AnsiballZ_command.py'
Dec 05 01:01:20 compute-0 sudo[95173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:20 compute-0 python3.9[95175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:20 compute-0 sudo[95173]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:21 compute-0 sudo[95326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfswcavlkonjwvmcpqtnhlfivnoaytrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896481.0484505-243-128487577097748/AnsiballZ_command.py'
Dec 05 01:01:21 compute-0 sudo[95326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:21 compute-0 python3.9[95328]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:21 compute-0 sudo[95326]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:22 compute-0 sudo[95479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqxzljuoseyebxfnrvmhqjpnlbftekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896481.860347-243-145011106242810/AnsiballZ_command.py'
Dec 05 01:01:22 compute-0 sudo[95479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:22 compute-0 python3.9[95481]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:01:22 compute-0 sudo[95479]: pam_unix(sudo:session): session closed for user root
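[annotation] With the unit files gone and the daemon reloaded, the play runs systemctl reset-failed once per removed unit so no stale failed state lingers in the manager. As a loop (check=False because reset-failed exits non-zero for units systemd no longer knows):

    import subprocess

    for unit in ["tripleo_nova_libvirt.target",
                 "tripleo_nova_virtlogd_wrapper.service",
                 "tripleo_nova_virtnodedevd.service",
                 "tripleo_nova_virtproxyd.service",
                 "tripleo_nova_virtqemud.service",
                 "tripleo_nova_virtsecretd.service",
                 "tripleo_nova_virtstoraged.service"]:
        subprocess.run(["systemctl", "reset-failed", unit], check=False)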
Dec 05 01:01:23 compute-0 sudo[95632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkzmvgetvbhahngnsudfvzrcfhgwpiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896482.9004056-297-281245884849194/AnsiballZ_getent.py'
Dec 05 01:01:23 compute-0 sudo[95632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:23 compute-0 python3.9[95634]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 05 01:01:23 compute-0 sudo[95632]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:24 compute-0 sudo[95785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbigdrwgkcvaikixhfsigliqattexga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896483.7857895-305-268394736661607/AnsiballZ_group.py'
Dec 05 01:01:24 compute-0 sudo[95785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:24 compute-0 python3.9[95787]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 01:01:24 compute-0 groupadd[95788]: group added to /etc/group: name=libvirt, GID=42473
Dec 05 01:01:24 compute-0 groupadd[95788]: group added to /etc/gshadow: name=libvirt
Dec 05 01:01:24 compute-0 groupadd[95788]: new group: name=libvirt, GID=42473
Dec 05 01:01:24 compute-0 sudo[95785]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:25 compute-0 sudo[95943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqlfbypusplekuuqgrcntxxatktqemdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896484.8258412-313-270355468323553/AnsiballZ_user.py'
Dec 05 01:01:25 compute-0 sudo[95943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:25 compute-0 python3.9[95945]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 01:01:25 compute-0 useradd[95947]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 01:01:25 compute-0 sudo[95943]: pam_unix(sudo:session): session closed for user root
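[annotation] The getent/group/user records above pin the libvirt account to UID/GID 42473 with a nologin shell before any packages land, presumably so the libvirt packages installed next adopt the pre-created IDs instead of allocating their own. A sketch of that ensure-present sequence (helper name and guards are mine; the play uses the Ansible group and user modules):

    import subprocess

    def ensure_libvirt_account(ugid: int = 42473) -> None:
        if subprocess.run(["getent", "group", "libvirt"],
                          capture_output=True).returncode != 0:
            subprocess.run(["groupadd", "-g", str(ugid), "libvirt"],
                           check=True)
        if subprocess.run(["getent", "passwd", "libvirt"],
                          capture_output=True).returncode != 0:
            subprocess.run(
                ["useradd", "-u", str(ugid), "-g", "libvirt",
                 "-s", "/sbin/nologin", "-c", "libvirt user", "libvirt"],
                check=True)

    ensure_libvirt_account()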
Dec 05 01:01:26 compute-0 sudo[96103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbeljmycxxyfaawxyxcudjsvazpgpar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896486.229023-324-138371186135331/AnsiballZ_setup.py'
Dec 05 01:01:26 compute-0 sudo[96103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:26 compute-0 python3.9[96105]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:01:27 compute-0 sudo[96103]: pam_unix(sudo:session): session closed for user root
Dec 05 01:01:27 compute-0 sudo[96187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhermnblsdfhcemaaswiliuxbyzmgphb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896486.229023-324-138371186135331/AnsiballZ_dnf.py'
Dec 05 01:01:27 compute-0 sudo[96187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:01:27 compute-0 python3.9[96189]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
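[annotation] The dnf task installs the libvirt/qemu stack in one transaction; note the logged package names carry stray trailing spaces ('libvirt ', 'libvirt-admin ', ...) coming from the playbook's list. An equivalent one-shot install with the names trimmed (a sketch, not the dnf module itself):

    import subprocess

    PACKAGES = [
        "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
        "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common",
        "cyrus-sasl-scram",
    ]

    # state=present with default options roughly maps to a plain install
    subprocess.run(["dnf", "-y", "install", *PACKAGES], check=True)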
Dec 05 01:01:35 compute-0 podman[96204]: 2025-12-05 01:01:35.718347841 +0000 UTC m=+0.127896684 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 05 01:01:54 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 01:01:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 01:02:03 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 01:02:06 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 05 01:02:06 compute-0 podman[96419]: 2025-12-05 01:02:06.680440104 +0000 UTC m=+0.098326996 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:02:37 compute-0 podman[108069]: 2025-12-05 01:02:37.69079617 +0000 UTC m=+0.101137216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:02:58 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 05 01:02:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 05 01:02:59 compute-0 groupadd[113289]: group added to /etc/group: name=dnsmasq, GID=992
Dec 05 01:02:59 compute-0 groupadd[113289]: group added to /etc/gshadow: name=dnsmasq
Dec 05 01:02:59 compute-0 groupadd[113289]: new group: name=dnsmasq, GID=992
Dec 05 01:02:59 compute-0 useradd[113296]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 05 01:03:00 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 01:03:00 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 05 01:03:00 compute-0 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec 05 01:03:00 compute-0 groupadd[113309]: group added to /etc/group: name=clevis, GID=991
Dec 05 01:03:00 compute-0 groupadd[113309]: group added to /etc/gshadow: name=clevis
Dec 05 01:03:00 compute-0 groupadd[113309]: new group: name=clevis, GID=991
Dec 05 01:03:00 compute-0 useradd[113316]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 05 01:03:01 compute-0 usermod[113326]: add 'clevis' to group 'tss'
Dec 05 01:03:01 compute-0 usermod[113326]: add 'clevis' to shadow group 'tss'
Dec 05 01:03:03 compute-0 polkitd[43575]: Reloading rules
Dec 05 01:03:03 compute-0 polkitd[43575]: Collecting garbage unconditionally...
Dec 05 01:03:03 compute-0 polkitd[43575]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 01:03:03 compute-0 polkitd[43575]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 01:03:03 compute-0 polkitd[43575]: Finished loading, compiling and executing 3 rules
Dec 05 01:03:03 compute-0 polkitd[43575]: Reloading rules
Dec 05 01:03:03 compute-0 polkitd[43575]: Collecting garbage unconditionally...
Dec 05 01:03:03 compute-0 polkitd[43575]: Loading rules from directory /etc/polkit-1/rules.d
Dec 05 01:03:03 compute-0 polkitd[43575]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 05 01:03:03 compute-0 polkitd[43575]: Finished loading, compiling and executing 3 rules
Dec 05 01:03:04 compute-0 groupadd[113513]: group added to /etc/group: name=ceph, GID=167
Dec 05 01:03:04 compute-0 groupadd[113513]: group added to /etc/gshadow: name=ceph
Dec 05 01:03:04 compute-0 groupadd[113513]: new group: name=ceph, GID=167
Dec 05 01:03:04 compute-0 useradd[113519]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 05 01:03:07 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 05 01:03:07 compute-0 sshd[1009]: Received signal 15; terminating.
Dec 05 01:03:07 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 05 01:03:07 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 05 01:03:07 compute-0 systemd[1]: sshd.service: Consumed 2.109s CPU time, read 32.0K from disk, written 8.0K to disk.
Dec 05 01:03:07 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 05 01:03:07 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 05 01:03:07 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 01:03:07 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 01:03:07 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 05 01:03:07 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 05 01:03:07 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 05 01:03:07 compute-0 sshd[114018]: Server listening on 0.0.0.0 port 22.
Dec 05 01:03:07 compute-0 sshd[114018]: Server listening on :: port 22.
Dec 05 01:03:07 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 05 01:03:07 compute-0 podman[114071]: 2025-12-05 01:03:07.879302259 +0000 UTC m=+0.136082333 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:03:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 01:03:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 01:03:09 compute-0 systemd[1]: Reloading.
Dec 05 01:03:09 compute-0 systemd-rc-local-generator[114303]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:09 compute-0 systemd-sysv-generator[114307]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 01:03:12 compute-0 sudo[96187]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:13 compute-0 sudo[118842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plarfvjaektoffgrzrnhmuppuhrbxmys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896593.041155-336-239469457903652/AnsiballZ_systemd.py'
Dec 05 01:03:13 compute-0 sudo[118842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:14 compute-0 python3.9[118867]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:03:14 compute-0 systemd[1]: Reloading.
Dec 05 01:03:14 compute-0 systemd-rc-local-generator[119318]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:14 compute-0 systemd-sysv-generator[119322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:14 compute-0 sudo[118842]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:14 compute-0 sudo[120184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxomfrcfasctbsfvtylukmeqtistqpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896594.5259316-336-144650077842038/AnsiballZ_systemd.py'
Dec 05 01:03:14 compute-0 sudo[120184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:15 compute-0 python3.9[120198]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:03:15 compute-0 systemd[1]: Reloading.
Dec 05 01:03:15 compute-0 systemd-sysv-generator[120650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:15 compute-0 systemd-rc-local-generator[120646]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:15 compute-0 sudo[120184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:15 compute-0 sudo[121403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujveuvytgwrmuwtylsemuqnisiekeke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896595.5394723-336-77108073831643/AnsiballZ_systemd.py'
Dec 05 01:03:15 compute-0 sudo[121403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:16 compute-0 python3.9[121425]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:03:16 compute-0 systemd[1]: Reloading.
Dec 05 01:03:16 compute-0 systemd-rc-local-generator[121876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:16 compute-0 systemd-sysv-generator[121883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:16 compute-0 sudo[121403]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:16 compute-0 sudo[122730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpebbkfrkkdfbqmqhflmkcrvgoftlsnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896596.5959232-336-68640910770922/AnsiballZ_systemd.py'
Dec 05 01:03:16 compute-0 sudo[122730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:17 compute-0 python3.9[122745]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:03:17 compute-0 systemd[1]: Reloading.
Dec 05 01:03:17 compute-0 systemd-sysv-generator[123099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:17 compute-0 systemd-rc-local-generator[123096]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:17 compute-0 sudo[122730]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 01:03:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 01:03:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.194s CPU time.
Dec 05 01:03:17 compute-0 systemd[1]: run-r0bd431dd1bb94b368c7680f119d79f43.service: Deactivated successfully.
Dec 05 01:03:18 compute-0 sudo[123587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukeeiahfntrkpxyzrhbdclrxgyblmqvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896597.6938913-365-225351247231660/AnsiballZ_systemd.py'
Dec 05 01:03:18 compute-0 sudo[123587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:18 compute-0 python3.9[123589]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:19 compute-0 systemd[1]: Reloading.
Dec 05 01:03:19 compute-0 systemd-rc-local-generator[123618]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:19 compute-0 systemd-sysv-generator[123621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:19 compute-0 sudo[123587]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:20 compute-0 sudo[123776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgwmseemwgwlyurqlgcyogjgbdimpdgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896599.9975796-365-11128095459437/AnsiballZ_systemd.py'
Dec 05 01:03:20 compute-0 sudo[123776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:20 compute-0 python3.9[123778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:20 compute-0 systemd[1]: Reloading.
Dec 05 01:03:20 compute-0 systemd-sysv-generator[123812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:20 compute-0 systemd-rc-local-generator[123809]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:21 compute-0 sudo[123776]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:21 compute-0 sudo[123966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwidjbfmqeraxaifhctpjewsejqnymup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896601.256563-365-271060740740529/AnsiballZ_systemd.py'
Dec 05 01:03:21 compute-0 sudo[123966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:21 compute-0 python3.9[123968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:22 compute-0 systemd[1]: Reloading.
Dec 05 01:03:22 compute-0 systemd-rc-local-generator[123999]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:22 compute-0 systemd-sysv-generator[124003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:22 compute-0 sudo[123966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:22 compute-0 sudo[124156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqnxbzgdyhkfyvycmszfpcazdymfuoaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896602.52803-365-184605399823156/AnsiballZ_systemd.py'
Dec 05 01:03:22 compute-0 sudo[124156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:23 compute-0 python3.9[124158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:24 compute-0 sudo[124156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:24 compute-0 sudo[124311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvthuwhdcwibhzwdpvlljfdzbbjmybqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896604.4589784-365-87044930808284/AnsiballZ_systemd.py'
Dec 05 01:03:24 compute-0 sudo[124311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:25 compute-0 python3.9[124313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:25 compute-0 systemd[1]: Reloading.
Dec 05 01:03:25 compute-0 systemd-rc-local-generator[124345]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:25 compute-0 systemd-sysv-generator[124350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:25 compute-0 sudo[124311]: pam_unix(sudo:session): session closed for user root
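
The five invocations from 01:03:18 through 01:03:25 enable and unmask the modular libvirt daemons one by one (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd), each with enabled=True, masked=False and no state change. The journal only shows five separate module calls, so the loop form below is an assumption; the unit names and parameters are from the logged arguments:

    - name: Enable modular libvirt daemons  # task name and loop form assumed
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.service
        - virtnodedevd.service
        - virtproxyd.service
        - virtqemud.service
        - virtsecretd.service
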
Dec 05 01:03:26 compute-0 sudo[124501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadxltlkqljobvgihaakwqmiyaxtsihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896605.6722202-401-218823723914231/AnsiballZ_systemd.py'
Dec 05 01:03:26 compute-0 sudo[124501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:26 compute-0 python3.9[124503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:03:26 compute-0 systemd[1]: Reloading.
Dec 05 01:03:26 compute-0 systemd-sysv-generator[124536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:03:26 compute-0 systemd-rc-local-generator[124532]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:03:26 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 05 01:03:26 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 05 01:03:26 compute-0 sudo[124501]: pam_unix(sudo:session): session closed for user root
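
Unlike the service units above, virtproxyd-tls.socket is both enabled and started (state=started), and systemd immediately confirms socket activation with the two "Listening on" lines. A sketch of the corresponding task (name assumed, parameters from the logged arguments):

    - name: Enable and start the libvirt proxy TLS socket  # task name assumed
      ansible.builtin.systemd:
        name: virtproxyd-tls.socket
        enabled: true
        masked: false
        state: started
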
Dec 05 01:03:27 compute-0 sudo[124694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejyvwkvrujkxookscgrtobhqbqectqip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896607.0634248-409-240956586762933/AnsiballZ_systemd.py'
Dec 05 01:03:27 compute-0 sudo[124694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:27 compute-0 python3.9[124696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:27 compute-0 sudo[124694]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:28 compute-0 sudo[124849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpjshileqqrbbeoyhgbxybisxwwpfpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896607.9415705-409-210149725359610/AnsiballZ_systemd.py'
Dec 05 01:03:28 compute-0 sudo[124849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:28 compute-0 python3.9[124851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:29 compute-0 sudo[124849]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:30 compute-0 sudo[125004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdmwngfrhquavaanxhrlfrbfqyhoxnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896609.9811687-409-227574749811630/AnsiballZ_systemd.py'
Dec 05 01:03:30 compute-0 sudo[125004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:30 compute-0 python3.9[125006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:30 compute-0 sudo[125004]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:31 compute-0 sudo[125159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoutjsvmfkewkrepnkhmmwufhlzdhnsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896610.9169295-409-270498841274891/AnsiballZ_systemd.py'
Dec 05 01:03:31 compute-0 sudo[125159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:31 compute-0 python3.9[125161]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:31 compute-0 sudo[125159]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:32 compute-0 sudo[125314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgkwnaavagrabeenxxlqxfgckbkoymcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896611.8054695-409-33733887106838/AnsiballZ_systemd.py'
Dec 05 01:03:32 compute-0 sudo[125314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:32 compute-0 python3.9[125316]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:32 compute-0 sudo[125314]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:33 compute-0 sudo[125469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjrybkzctcanfudxjwrckfagwdjtiud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896612.7288966-409-8594760278860/AnsiballZ_systemd.py'
Dec 05 01:03:33 compute-0 sudo[125469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:33 compute-0 python3.9[125471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:33 compute-0 sudo[125469]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:33 compute-0 sudo[125624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhtnuyhlcmdwqsiecqwcqqazghefrjxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896613.6554737-409-51535520330562/AnsiballZ_systemd.py'
Dec 05 01:03:33 compute-0 sudo[125624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:34 compute-0 python3.9[125626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:34 compute-0 sudo[125624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:34 compute-0 sudo[125779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejcjqmpaurdpiqrjqrjtuffnuasitdwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896614.5291407-409-124086563637775/AnsiballZ_systemd.py'
Dec 05 01:03:34 compute-0 sudo[125779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:35 compute-0 python3.9[125781]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:35 compute-0 sudo[125779]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:35 compute-0 sudo[125934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwutzwegxgktlzrpssnicylxlvzwmofi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896615.4221315-409-158384005046475/AnsiballZ_systemd.py'
Dec 05 01:03:35 compute-0 sudo[125934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:36 compute-0 python3.9[125936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:36 compute-0 sudo[125934]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:36 compute-0 sudo[126089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehuxegxolmmwpfotnpabqksxigykpomo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896616.347763-409-89759492894527/AnsiballZ_systemd.py'
Dec 05 01:03:36 compute-0 sudo[126089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:37 compute-0 python3.9[126091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:37 compute-0 sudo[126089]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:37 compute-0 sudo[126244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqigewpfwxvnwfqtosrctkmqyzybukdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896617.3118105-409-20274697599410/AnsiballZ_systemd.py'
Dec 05 01:03:37 compute-0 sudo[126244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:37 compute-0 python3.9[126246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:38 compute-0 sudo[126244]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:38 compute-0 podman[126248]: 2025-12-05 01:03:38.189401166 +0000 UTC m=+0.138567978 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 01:03:38 compute-0 sudo[126427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaotfolgonjhzeotcedqozycvfkgnonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896618.3351152-409-41579194913953/AnsiballZ_systemd.py'
Dec 05 01:03:38 compute-0 sudo[126427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:39 compute-0 python3.9[126429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:40 compute-0 sudo[126427]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:40 compute-0 sudo[126582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purmejnetrtpjpibergcmalkrsdaqtmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896620.3620036-409-125645333023966/AnsiballZ_systemd.py'
Dec 05 01:03:40 compute-0 sudo[126582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:41 compute-0 python3.9[126584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:41 compute-0 sudo[126582]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:41 compute-0 sudo[126737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldjaxahzuwqsksqsqsgqebmzlssnsoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896621.2810397-409-203121118920908/AnsiballZ_systemd.py'
Dec 05 01:03:41 compute-0 sudo[126737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:41 compute-0 python3.9[126739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:03:42 compute-0 sudo[126737]: pam_unix(sudo:session): session closed for user root
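
The run from 01:03:27 to 01:03:42 enables the remaining fourteen libvirt activation sockets (the main, -ro and -admin sockets of each daemon, plus virtlogd-admin.socket, which has no -ro variant) with enabled=True, masked=False and no state change, so they begin listening only on the next boot or an explicit start. As above, the loop form is an assumption; the unit list is taken from the logged invocations:

    - name: Enable libvirt activation sockets  # task name and loop form assumed
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.socket
        - virtlogd-admin.socket
        - virtnodedevd.socket
        - virtnodedevd-ro.socket
        - virtnodedevd-admin.socket
        - virtproxyd.socket
        - virtproxyd-ro.socket
        - virtproxyd-admin.socket
        - virtqemud.socket
        - virtqemud-ro.socket
        - virtqemud-admin.socket
        - virtsecretd.socket
        - virtsecretd-ro.socket
        - virtsecretd-admin.socket
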
Dec 05 01:03:42 compute-0 sudo[126892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvexqjbxsaehaphuwtzckikryullzevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896622.415895-511-108294102250170/AnsiballZ_file.py'
Dec 05 01:03:42 compute-0 sudo[126892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:42 compute-0 python3.9[126894]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:42 compute-0 sudo[126892]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:43 compute-0 sudo[127044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olkpszrhlknpyjblienxiyrmnjqwywgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896623.0169353-511-45602945264932/AnsiballZ_file.py'
Dec 05 01:03:43 compute-0 sudo[127044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:43 compute-0 python3.9[127046]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:43 compute-0 sudo[127044]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:43 compute-0 sudo[127196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxdscjxpojxdfskmhobxnxubxkkpxsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896623.583131-511-73596778343491/AnsiballZ_file.py'
Dec 05 01:03:43 compute-0 sudo[127196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:43 compute-0 python3.9[127198]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:44 compute-0 sudo[127196]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:44 compute-0 sudo[127348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjtwljenscksymqsffbwtxczugarcpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896624.1324584-511-14863834608073/AnsiballZ_file.py'
Dec 05 01:03:44 compute-0 sudo[127348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:44 compute-0 python3.9[127350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:44 compute-0 sudo[127348]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:45 compute-0 sudo[127500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxbkrjumyvlxnthrrsqlyatqvokqknu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896624.7427762-511-21186993468750/AnsiballZ_file.py'
Dec 05 01:03:45 compute-0 sudo[127500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:45 compute-0 python3.9[127502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:45 compute-0 sudo[127500]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:45 compute-0 sudo[127652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvwvcdsrnsmgsyqkyqspzkzvlwqpjna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896625.4950047-511-274788977249490/AnsiballZ_file.py'
Dec 05 01:03:45 compute-0 sudo[127652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:46 compute-0 python3.9[127654]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:03:46 compute-0 sudo[127652]: pam_unix(sudo:session): session closed for user root
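
The six ansible.builtin.file calls above create the directories the libvirt TLS setup will populate: /etc/tmpfiles.d/, /var/lib/edpm-config/firewall, /etc/pki/libvirt, /etc/pki/libvirt/private, /etc/pki/CA and /etc/pki/qemu, all labeled container_file_t so the containerized services can read them. The parameters differ slightly per path (the PKI directories get mode 0755; /etc/pki/qemu is group qemu), so only one representative task is sketched here (name assumed, parameters from the last logged call):

    - name: Create QEMU PKI directory  # task name assumed
      ansible.builtin.file:
        path: /etc/pki/qemu
        state: directory
        owner: root
        group: qemu
        setype: container_file_t
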
Dec 05 01:03:46 compute-0 sudo[127804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwxytvurvgxbdgpnnpaycohvtkyqdbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896626.346985-554-99597404895868/AnsiballZ_stat.py'
Dec 05 01:03:46 compute-0 sudo[127804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:47 compute-0 python3.9[127806]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:47 compute-0 sudo[127804]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:47 compute-0 sudo[127929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmkcsnbqxpzknmrmlfjdnnmwamgrles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896626.346985-554-99597404895868/AnsiballZ_copy.py'
Dec 05 01:03:47 compute-0 sudo[127929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:47 compute-0 python3.9[127931]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896626.346985-554-99597404895868/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:47 compute-0 sudo[127929]: pam_unix(sudo:session): session closed for user root
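
The stat/copy pair above is the normal idempotence pattern of ansible.builtin.copy: the action plugin first checksums the destination (ansible.legacy.stat with checksum_algorithm=sha1) and only transfers the file when the checksum differs. A reconstruction of the virtlogd.conf task (name assumed; destination, owner, group and mode from the logged arguments; _original_basename=virtlogd.conf suggests a static file rather than a template):

    - name: Deploy virtlogd.conf  # task name assumed
      ansible.builtin.copy:
        src: virtlogd.conf
        dest: /etc/libvirt/virtlogd.conf
        owner: libvirt
        group: libvirt
        mode: "0640"

The same pattern repeats below for virtnodedevd.conf, virtproxyd.conf, virtqemud.conf, qemu.conf (rendered from qemu.conf.j2), virtsecretd.conf, auth.conf (mode 0600) and /etc/sasl2/libvirt.conf.
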
Dec 05 01:03:48 compute-0 sudo[128081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtsqirrrtruyyykojyshvijsmttzeghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896628.0172966-554-279389015720054/AnsiballZ_stat.py'
Dec 05 01:03:48 compute-0 sudo[128081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:48 compute-0 python3.9[128083]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:48 compute-0 sudo[128081]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:48 compute-0 sudo[128206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ageoxzopzvikparcohsvwufbgrhlausk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896628.0172966-554-279389015720054/AnsiballZ_copy.py'
Dec 05 01:03:48 compute-0 sudo[128206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:49 compute-0 python3.9[128208]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896628.0172966-554-279389015720054/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:49 compute-0 sudo[128206]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:49 compute-0 sudo[128358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quljextefagbsojlpxdwsuapuwgsgnqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896629.2994168-554-281457487101718/AnsiballZ_stat.py'
Dec 05 01:03:49 compute-0 sudo[128358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:49 compute-0 python3.9[128360]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:49 compute-0 sudo[128358]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:50 compute-0 sudo[128483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nucfztuyjhwghrzwpopruryphdknbhun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896629.2994168-554-281457487101718/AnsiballZ_copy.py'
Dec 05 01:03:50 compute-0 sudo[128483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:50 compute-0 python3.9[128485]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896629.2994168-554-281457487101718/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:50 compute-0 sudo[128483]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:50 compute-0 sudo[128635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmxdxbenhnautiofiisrvzmmsjeoqtvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896630.55554-554-2717490696401/AnsiballZ_stat.py'
Dec 05 01:03:50 compute-0 sudo[128635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:51 compute-0 python3.9[128637]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:51 compute-0 sudo[128635]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:51 compute-0 sudo[128760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogeyraqleibztbejqupxbwpntwvzkafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896630.55554-554-2717490696401/AnsiballZ_copy.py'
Dec 05 01:03:51 compute-0 sudo[128760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:51 compute-0 python3.9[128762]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896630.55554-554-2717490696401/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:51 compute-0 sudo[128760]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:52 compute-0 sudo[128912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctsgkqbtmsdqhbaxglioqnozyfrtpyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896631.845243-554-80891701137782/AnsiballZ_stat.py'
Dec 05 01:03:52 compute-0 sudo[128912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:52 compute-0 python3.9[128914]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:52 compute-0 sudo[128912]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:52 compute-0 sudo[129037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xemigqvowttoswzgnnliuxpmulowueew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896631.845243-554-80891701137782/AnsiballZ_copy.py'
Dec 05 01:03:52 compute-0 sudo[129037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:52 compute-0 python3.9[129039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896631.845243-554-80891701137782/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:52 compute-0 sudo[129037]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:53 compute-0 sudo[129189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giasbhhhscbmycjllsbupcwgvlowndma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896633.0404656-554-13431186608292/AnsiballZ_stat.py'
Dec 05 01:03:53 compute-0 sudo[129189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:53 compute-0 python3.9[129191]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:53 compute-0 sudo[129189]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:54 compute-0 sudo[129314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhfuejjcgdmokzdtduvudbhrvxehldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896633.0404656-554-13431186608292/AnsiballZ_copy.py'
Dec 05 01:03:54 compute-0 sudo[129314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:54 compute-0 python3.9[129316]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896633.0404656-554-13431186608292/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:54 compute-0 sudo[129314]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:54 compute-0 sudo[129466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbvbszzvyodeedrrbcuuojqctkakpit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896634.432735-554-202042377749202/AnsiballZ_stat.py'
Dec 05 01:03:54 compute-0 sudo[129466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:55 compute-0 python3.9[129468]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:55 compute-0 sudo[129466]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:55 compute-0 sudo[129589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvlijozbsykiabeubnuxvctrcjrmmwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896634.432735-554-202042377749202/AnsiballZ_copy.py'
Dec 05 01:03:55 compute-0 sudo[129589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:55 compute-0 python3.9[129591]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896634.432735-554-202042377749202/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:55 compute-0 sudo[129589]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:56 compute-0 sudo[129741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pupsxmmihoucydypydtybrwiidphnzlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896635.9371138-554-276378709331709/AnsiballZ_stat.py'
Dec 05 01:03:56 compute-0 sudo[129741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:56 compute-0 python3.9[129743]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:03:56 compute-0 sudo[129741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:56 compute-0 sudo[129866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybfmlzbtrumxxgpdezzfeudwlqrbhwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896635.9371138-554-276378709331709/AnsiballZ_copy.py'
Dec 05 01:03:56 compute-0 sudo[129866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:57 compute-0 python3.9[129868]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896635.9371138-554-276378709331709/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:57 compute-0 sudo[129866]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:57 compute-0 sudo[130018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvenbcvwojpfwuuapyymzhpxtljobljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896637.286268-667-227439866895254/AnsiballZ_command.py'
Dec 05 01:03:57 compute-0 sudo[130018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:57 compute-0 python3.9[130020]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 05 01:03:57 compute-0 sudo[130018]: pam_unix(sudo:session): session closed for user root
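
The command at 01:03:57 seeds the SASL database used for authenticated live migration: saslpasswd2 adds a password for user "migration" in realm "openstack" to /etc/libvirt/passwd.db, reading the password from stdin (-p). Note that the password (stdin=12345678) is visible in the journal; setting no_log on the task would keep it out of the logs. A sketch of the task (name and the no_log suggestion are additions; command and stdin are from the logged arguments):

    - name: Create SASL credentials for live migration  # task name assumed
      ansible.builtin.command:
        cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
        stdin: "12345678"
      no_log: true  # suggested hardening; the logged run clearly did not set this
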
Dec 05 01:03:58 compute-0 sudo[130171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvkontvcwywmtkxrswahdtlwzqmkegf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896638.0547812-676-241436190419460/AnsiballZ_file.py'
Dec 05 01:03:58 compute-0 sudo[130171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:58 compute-0 python3.9[130173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:58 compute-0 sudo[130171]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:59 compute-0 sudo[130323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhrrputafimesakzrbwxkdivnvgyizu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896638.7512956-676-151514620908846/AnsiballZ_file.py'
Dec 05 01:03:59 compute-0 sudo[130323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:59 compute-0 python3.9[130325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:59 compute-0 sudo[130323]: pam_unix(sudo:session): session closed for user root
Dec 05 01:03:59 compute-0 sudo[130475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elndgefpwwgutoaigqqbjxfmrxpfkivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896639.4034684-676-44110116015522/AnsiballZ_file.py'
Dec 05 01:03:59 compute-0 sudo[130475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:03:59 compute-0 python3.9[130477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:03:59 compute-0 sudo[130475]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:00 compute-0 sudo[130627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mngmbxxjysqppoagksmhbhdtygmfxmbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896640.0024683-676-136543120621381/AnsiballZ_file.py'
Dec 05 01:04:00 compute-0 sudo[130627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:00 compute-0 python3.9[130629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:00 compute-0 sudo[130627]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:00 compute-0 sudo[130779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zszmqwytmzxjifobtarqkqtrzcwfuwty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896640.711554-676-80837071113943/AnsiballZ_file.py'
Dec 05 01:04:00 compute-0 sudo[130779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:01 compute-0 python3.9[130781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:01 compute-0 sudo[130779]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:01 compute-0 sudo[130931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obcbpbyingccrebmbpppnaktxeuyvggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896641.3092756-676-38531665427058/AnsiballZ_file.py'
Dec 05 01:04:01 compute-0 sudo[130931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:01 compute-0 python3.9[130933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:01 compute-0 sudo[130931]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:02 compute-0 sudo[131083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubjkezyovpwnnhcsqerveeaffxexxtsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896641.9327025-676-86012086990261/AnsiballZ_file.py'
Dec 05 01:04:02 compute-0 sudo[131083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:02 compute-0 python3.9[131085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:02 compute-0 sudo[131083]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:02 compute-0 sudo[131235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covodbtlvypgeunbtfzsdaeshnbqhqxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896642.485296-676-271885902939498/AnsiballZ_file.py'
Dec 05 01:04:02 compute-0 sudo[131235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:02 compute-0 python3.9[131237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:02 compute-0 sudo[131235]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:03 compute-0 sudo[131387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatdrioavolnbicbtltaeydssmaieawh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896643.050761-676-248614626935437/AnsiballZ_file.py'
Dec 05 01:04:03 compute-0 sudo[131387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:03 compute-0 python3.9[131389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:03 compute-0 sudo[131387]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:04 compute-0 sudo[131539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnchcvodfjzpostongijflngvvzmgjtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896643.7407389-676-264219636607817/AnsiballZ_file.py'
Dec 05 01:04:04 compute-0 sudo[131539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:04 compute-0 python3.9[131541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:04 compute-0 sudo[131539]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:04 compute-0 sudo[131691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqupppoofirakcfhibjwpovmxpngpdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896644.392693-676-271087851351893/AnsiballZ_file.py'
Dec 05 01:04:04 compute-0 sudo[131691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:04 compute-0 python3.9[131693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:04 compute-0 sudo[131691]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:05 compute-0 sudo[131843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrzscyjkzvmjncpwfyvpglmdgfzaifjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896645.0143363-676-50431953084150/AnsiballZ_file.py'
Dec 05 01:04:05 compute-0 sudo[131843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:05 compute-0 python3.9[131845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:05 compute-0 sudo[131843]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:05 compute-0 sudo[131995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqihmdycvzfiywggghkuxgfcgwbrelnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896645.5898125-676-60819148106431/AnsiballZ_file.py'
Dec 05 01:04:05 compute-0 sudo[131995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:06 compute-0 python3.9[131997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:06 compute-0 sudo[131995]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:06 compute-0 sudo[132147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-houxhitbrfjewvavwrywadftjmkndsov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896646.2439468-676-13158414604404/AnsiballZ_file.py'
Dec 05 01:04:06 compute-0 sudo[132147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:06 compute-0 python3.9[132149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:06 compute-0 sudo[132147]: pam_unix(sudo:session): session closed for user root
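
Between 01:03:58 and 01:04:06 the play creates a systemd drop-in directory (/etc/systemd/system/<unit>.d, mode 0755) for each of the fourteen socket units enabled earlier, ready to receive override.conf files. The loop form is again assumed; the paths are from the logged calls:

    - name: Create socket drop-in directories  # task name and loop form assumed
      ansible.builtin.file:
        path: "/etc/systemd/system/{{ item }}.d"
        state: directory
        owner: root
        group: root
        mode: "0755"
      loop: "{{ libvirt_socket_units }}"  # hypothetical variable holding the same fourteen units listed in the socket-enable sketch above
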
Dec 05 01:04:07 compute-0 sudo[132299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abhtabsmdemrthkczlkslhugauylpokj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896646.9896336-775-12580375816305/AnsiballZ_stat.py'
Dec 05 01:04:07 compute-0 sudo[132299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:07 compute-0 python3.9[132301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:07 compute-0 sudo[132299]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:07 compute-0 sudo[132422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvpqysqeeekgpldogchizacxzwmcjmnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896646.9896336-775-12580375816305/AnsiballZ_copy.py'
Dec 05 01:04:07 compute-0 sudo[132422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:08 compute-0 python3.9[132424]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896646.9896336-775-12580375816305/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:08 compute-0 sudo[132422]: pam_unix(sudo:session): session closed for user root
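[Annotation] Each stat-then-copy pair like the one above is the remote half of a template deployment: Ansible renders the file on the control node, stats the destination, and ships the result only when checksums differ. A sketch of the likely task, assuming nothing beyond what the log shows (the template basename libvirt-socket.unit.j2, destination, and mode are logged; the src path and task name are illustrative):

- name: Install socket override for virtlogd (sketch)
  ansible.builtin.template:
    src: libvirt-socket.unit.j2
    dest: /etc/systemd/system/virtlogd.socket.d/override.conf
    owner: root
    group: root
    mode: "0644"

Every override.conf below reports the same checksum (0bad41f4...), so the rendered drop-in is evidently identical across all the socket units.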
Dec 05 01:04:08 compute-0 sudo[132586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjcztpdpklaqafhsgeowlmwovgeyqil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896648.1929936-775-235011796221774/AnsiballZ_stat.py'
Dec 05 01:04:08 compute-0 sudo[132586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:08 compute-0 podman[132548]: 2025-12-05 01:04:08.606164854 +0000 UTC m=+0.081620709 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:04:08 compute-0 python3.9[132594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:08 compute-0 sudo[132586]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:09 compute-0 sudo[132722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgcavftzrbezpqpbzxoppwpymnalxobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896648.1929936-775-235011796221774/AnsiballZ_copy.py'
Dec 05 01:04:09 compute-0 sudo[132722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:09 compute-0 python3.9[132724]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896648.1929936-775-235011796221774/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:09 compute-0 sudo[132722]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:09 compute-0 sudo[132874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwwvovwzaidtefwjkteoascererysicx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896649.5615401-775-33584170055738/AnsiballZ_stat.py'
Dec 05 01:04:09 compute-0 sudo[132874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:10 compute-0 python3.9[132876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:10 compute-0 sudo[132874]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:10 compute-0 sudo[132997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkfumzjlaaibeznzeekrwmjewztiaff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896649.5615401-775-33584170055738/AnsiballZ_copy.py'
Dec 05 01:04:10 compute-0 sudo[132997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:10 compute-0 python3.9[132999]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896649.5615401-775-33584170055738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:10 compute-0 sudo[132997]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:11 compute-0 sudo[133149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqisthhwzalqblywoqkcxmljbzynbebj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896650.7667232-775-184861549337845/AnsiballZ_stat.py'
Dec 05 01:04:11 compute-0 sudo[133149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:11 compute-0 python3.9[133151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:11 compute-0 sudo[133149]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:11 compute-0 sudo[133272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcctclluyesgfxhyjwnxqqtnsqkoxjtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896650.7667232-775-184861549337845/AnsiballZ_copy.py'
Dec 05 01:04:11 compute-0 sudo[133272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:11 compute-0 python3.9[133274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896650.7667232-775-184861549337845/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:11 compute-0 sudo[133272]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:12 compute-0 sudo[133424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhcfqrhnnfnxuochflokcjgkpznbgri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896651.9649706-775-12252156789663/AnsiballZ_stat.py'
Dec 05 01:04:12 compute-0 sudo[133424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:12 compute-0 python3.9[133426]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:12 compute-0 sudo[133424]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:12 compute-0 sudo[133547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tavulodhucyucldrrorjiimhvoldmyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896651.9649706-775-12252156789663/AnsiballZ_copy.py'
Dec 05 01:04:12 compute-0 sudo[133547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:13 compute-0 python3.9[133549]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896651.9649706-775-12252156789663/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:13 compute-0 sudo[133547]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:13 compute-0 sudo[133699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcnwwfpjqwigzhkuqsbqicdpkqgfhgpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896653.2135928-775-238228153391353/AnsiballZ_stat.py'
Dec 05 01:04:13 compute-0 sudo[133699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:13 compute-0 python3.9[133701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:13 compute-0 sudo[133699]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:14 compute-0 sudo[133822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbvankajpocxhkrxqlbgbibnjcsfwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896653.2135928-775-238228153391353/AnsiballZ_copy.py'
Dec 05 01:04:14 compute-0 sudo[133822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:14 compute-0 python3.9[133824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896653.2135928-775-238228153391353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:14 compute-0 sudo[133822]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:14 compute-0 sudo[133974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuwzhifkyimvadtnvzbytpcivonfgupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896654.451138-775-94665213687039/AnsiballZ_stat.py'
Dec 05 01:04:14 compute-0 sudo[133974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:15 compute-0 python3.9[133976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:15 compute-0 sudo[133974]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:15 compute-0 sudo[134097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryaslzaoqxtxoapcsdlmluetmompgvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896654.451138-775-94665213687039/AnsiballZ_copy.py'
Dec 05 01:04:15 compute-0 sudo[134097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:15 compute-0 python3.9[134099]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896654.451138-775-94665213687039/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:15 compute-0 sudo[134097]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:16 compute-0 sudo[134249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loaxpymgtikrhomgvlmbnoqifyzeyfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896655.8195562-775-50173359051458/AnsiballZ_stat.py'
Dec 05 01:04:16 compute-0 sudo[134249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:16 compute-0 python3.9[134251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:16 compute-0 sudo[134249]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:16 compute-0 sudo[134372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seaztnfstwueegqyoxuosrmoszdyfeof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896655.8195562-775-50173359051458/AnsiballZ_copy.py'
Dec 05 01:04:16 compute-0 sudo[134372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:16 compute-0 python3.9[134374]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896655.8195562-775-50173359051458/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:16 compute-0 sudo[134372]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:17 compute-0 sudo[134524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oysejgdxwasetjrirutmajotulrxgrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896656.9842217-775-40833753809001/AnsiballZ_stat.py'
Dec 05 01:04:17 compute-0 sudo[134524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:17 compute-0 python3.9[134526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:17 compute-0 sudo[134524]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:17 compute-0 sudo[134647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjwiethdubxvttocfcrwnqckyfzqymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896656.9842217-775-40833753809001/AnsiballZ_copy.py'
Dec 05 01:04:17 compute-0 sudo[134647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:18 compute-0 python3.9[134649]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896656.9842217-775-40833753809001/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:18 compute-0 sudo[134647]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:18 compute-0 sudo[134799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rawfrmtfjzsllocmgprmrgyknatmdnct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896658.2251844-775-132716557087409/AnsiballZ_stat.py'
Dec 05 01:04:18 compute-0 sudo[134799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:18 compute-0 python3.9[134801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:18 compute-0 sudo[134799]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:19 compute-0 sudo[134922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctngcpdmjaqevfgkbnhidknqygnmbroh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896658.2251844-775-132716557087409/AnsiballZ_copy.py'
Dec 05 01:04:19 compute-0 sudo[134922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:19 compute-0 python3.9[134924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896658.2251844-775-132716557087409/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:19 compute-0 sudo[134922]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:19 compute-0 sudo[135074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygzmpkwzdzfrjsixhpcqrvdanrxbveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896659.549394-775-3087490862360/AnsiballZ_stat.py'
Dec 05 01:04:19 compute-0 sudo[135074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:20 compute-0 python3.9[135076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:20 compute-0 sudo[135074]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:20 compute-0 sudo[135197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvsyglrvdjlghpdxztjuwrfhuaqyiwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896659.549394-775-3087490862360/AnsiballZ_copy.py'
Dec 05 01:04:20 compute-0 sudo[135197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:21 compute-0 python3.9[135199]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896659.549394-775-3087490862360/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:21 compute-0 sudo[135197]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:21 compute-0 sudo[135349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjbkgoqidmirfkxcufhpzgvfjbeixgnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896661.3103588-775-199062854097710/AnsiballZ_stat.py'
Dec 05 01:04:21 compute-0 sudo[135349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:21 compute-0 python3.9[135351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:21 compute-0 sudo[135349]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:22 compute-0 sudo[135472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exrkisbfvwlqycugqphksynoxatvfvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896661.3103588-775-199062854097710/AnsiballZ_copy.py'
Dec 05 01:04:22 compute-0 sudo[135472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:22 compute-0 python3.9[135474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896661.3103588-775-199062854097710/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:22 compute-0 sudo[135472]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:22 compute-0 sudo[135624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdmsdgxzilkycfqowcirwjxrsmjubxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896662.586354-775-49232989099221/AnsiballZ_stat.py'
Dec 05 01:04:22 compute-0 sudo[135624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:23 compute-0 python3.9[135626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:23 compute-0 sudo[135624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:23 compute-0 sudo[135747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzijplyscfvdjpwohxzpcwjqswkxydnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896662.586354-775-49232989099221/AnsiballZ_copy.py'
Dec 05 01:04:23 compute-0 sudo[135747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:23 compute-0 python3.9[135749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896662.586354-775-49232989099221/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:23 compute-0 sudo[135747]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:24 compute-0 sudo[135899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafcndfvusciyuzlaoqhsfjkjdqggiml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896663.9311082-775-47238445569365/AnsiballZ_stat.py'
Dec 05 01:04:24 compute-0 sudo[135899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:24 compute-0 python3.9[135901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:24 compute-0 sudo[135899]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:25 compute-0 sudo[136022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xueawuywsuxeuzmjsslucdxipdrzokva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896663.9311082-775-47238445569365/AnsiballZ_copy.py'
Dec 05 01:04:25 compute-0 sudo[136022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:25 compute-0 python3.9[136024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896663.9311082-775-47238445569365/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:25 compute-0 sudo[136022]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:26 compute-0 python3.9[136174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
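[Annotation] The raw command above verifies that everything under /run/libvirt carries a container SELinux type. Reconstructed as a task, with the command taken verbatim from the log and only the task framing assumed:

- name: Check SELinux labels under /run/libvirt (sketch)
  ansible.builtin.shell: |
    set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'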
Dec 05 01:04:26 compute-0 sudo[136327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvnmxhqzlruqfifgiejvjfojoagtnmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896666.4311006-981-158994184267251/AnsiballZ_seboolean.py'
Dec 05 01:04:26 compute-0 sudo[136327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:27 compute-0 python3.9[136329]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 05 01:04:28 compute-0 sudo[136327]: pam_unix(sudo:session): session closed for user root
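[Annotation] The seboolean call persistently enables os_enable_vtpm, letting SELinux permit emulated TPM devices for instances; the dbus-broker-launch avc: op=load_policy line just below is the resulting policy reload. Equivalent task, directly mirroring the logged parameters:

- name: Enable vTPM support in SELinux policy (mirrors the logged call)
  ansible.posix.seboolean:
    name: os_enable_vtpm
    state: true
    persistent: true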
Dec 05 01:04:28 compute-0 sudo[136483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqagckjgryekxzischntgwmprxrqvilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896668.5476012-989-185969615861109/AnsiballZ_copy.py'
Dec 05 01:04:28 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 05 01:04:28 compute-0 sudo[136483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:29 compute-0 python3.9[136485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:29 compute-0 sudo[136483]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:29 compute-0 sudo[136635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfdxuchflxeqnrdfbgzpvwhxguskwsko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896669.336063-989-49232292027199/AnsiballZ_copy.py'
Dec 05 01:04:29 compute-0 sudo[136635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:29 compute-0 python3.9[136637]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:29 compute-0 sudo[136635]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:30 compute-0 sudo[136787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjwsqchtcgtpdzjxaroxvqqefubqqds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896670.0891316-989-254270511338253/AnsiballZ_copy.py'
Dec 05 01:04:30 compute-0 sudo[136787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:30 compute-0 python3.9[136789]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:30 compute-0 sudo[136787]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:31 compute-0 sudo[136939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nijbpzemgrrerltaqpjthancapwhtdgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896670.8283715-989-65028530505984/AnsiballZ_copy.py'
Dec 05 01:04:31 compute-0 sudo[136939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:31 compute-0 python3.9[136941]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:31 compute-0 sudo[136939]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:31 compute-0 sudo[137091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlsxdcgyrvdemndkmqqbebohqkoguuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896671.5273783-989-68971921466868/AnsiballZ_copy.py'
Dec 05 01:04:31 compute-0 sudo[137091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:32 compute-0 python3.9[137093]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:32 compute-0 sudo[137091]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:32 compute-0 sudo[137243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrfzigfqateysfxbaauoztloqkcvvfwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896672.3693502-1025-173486206245612/AnsiballZ_copy.py'
Dec 05 01:04:32 compute-0 sudo[137243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:32 compute-0 python3.9[137245]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:32 compute-0 sudo[137243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:33 compute-0 sudo[137395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitmmwanykybaoeajdjygkncibsgbcvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896673.1050591-1025-65121227929843/AnsiballZ_copy.py'
Dec 05 01:04:33 compute-0 sudo[137395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:33 compute-0 python3.9[137397]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:33 compute-0 sudo[137395]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:34 compute-0 sudo[137547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojphklbjhbiulhoudhxlqdfmirbxzdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896673.9827535-1025-9575638992226/AnsiballZ_copy.py'
Dec 05 01:04:34 compute-0 sudo[137547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:34 compute-0 python3.9[137549]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:34 compute-0 sudo[137547]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:35 compute-0 sudo[137699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svrqtwhitgfczjtizcwyjaqpmaahuewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896674.7929182-1025-162118498826589/AnsiballZ_copy.py'
Dec 05 01:04:35 compute-0 sudo[137699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:35 compute-0 python3.9[137701]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:35 compute-0 sudo[137699]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:35 compute-0 sudo[137851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvdgmouvgrrtawkhjzgggauvejvlvvje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896675.4476829-1025-145213166935913/AnsiballZ_copy.py'
Dec 05 01:04:35 compute-0 sudo[137851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:35 compute-0 python3.9[137853]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:35 compute-0 sudo[137851]: pam_unix(sudo:session): session closed for user root
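[Annotation] The copy tasks above fan one TLS certificate/key pair plus the CA out to libvirt's and QEMU's expected locations, using remote_src=True because the PEMs already sit under /var/lib/openstack/certs on the host. A condensed sketch; the loop structure is an assumption, while sources, destinations, groups, and modes are from the logged calls (abbreviated to a few representative entries):

- name: Distribute libvirt/QEMU TLS material (sketch)
  ansible.builtin.copy:
    remote_src: true
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: "{{ item.group | default('root') }}"
    mode: "{{ item.mode }}"
  loop:
    - { src: /var/lib/openstack/certs/libvirt/default/tls.crt, dest: /etc/pki/libvirt/servercert.pem, mode: "0644" }
    - { src: /var/lib/openstack/certs/libvirt/default/tls.key, dest: /etc/pki/libvirt/private/serverkey.pem, mode: "0600" }
    - { src: /var/lib/openstack/certs/libvirt/default/ca.crt, dest: /etc/pki/CA/cacert.pem, mode: "0644" }
    - { src: /var/lib/openstack/certs/libvirt/default/tls.crt, dest: /etc/pki/qemu/server-cert.pem, group: qemu, mode: "0640" }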
Dec 05 01:04:36 compute-0 sudo[138003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfigettxsxohyfeqjlrkogstlmcvbwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896676.204848-1061-227694259598767/AnsiballZ_systemd.py'
Dec 05 01:04:36 compute-0 sudo[138003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:36 compute-0 python3.9[138005]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:04:36 compute-0 systemd[1]: Reloading.
Dec 05 01:04:36 compute-0 systemd-rc-local-generator[138029]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:04:36 compute-0 systemd-sysv-generator[138035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:04:37 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 05 01:04:37 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 05 01:04:37 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 05 01:04:37 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 05 01:04:37 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 05 01:04:37 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 05 01:04:37 compute-0 sudo[138003]: pam_unix(sudo:session): session closed for user root
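[Annotation] Each restart task combines daemon_reload=True with state=restarted, so the freshly installed socket drop-ins are re-read before the unit starts; systemd's "Reloading." lines confirm this. Sketch mirroring the logged parameters (the same shape repeats below for virtnodedevd, virtproxyd, virtqemud, and virtsecretd):

- name: Restart virtlogd after installing socket drop-ins (sketch)
  ansible.builtin.systemd:
    name: virtlogd.service
    state: restarted
    daemon_reload: true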
Dec 05 01:04:37 compute-0 sudo[138196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itwbxwblzymdhqmzoeyrtecgkxjqdvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896677.4922457-1061-51152088550294/AnsiballZ_systemd.py'
Dec 05 01:04:37 compute-0 sudo[138196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:38 compute-0 python3.9[138198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:04:38 compute-0 systemd[1]: Reloading.
Dec 05 01:04:38 compute-0 systemd-sysv-generator[138225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:04:38 compute-0 systemd-rc-local-generator[138222]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:04:38 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 05 01:04:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 05 01:04:38 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 05 01:04:38 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 05 01:04:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 05 01:04:38 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 05 01:04:38 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 05 01:04:38 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 05 01:04:38 compute-0 sudo[138196]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:39 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 05 01:04:39 compute-0 podman[138362]: 2025-12-05 01:04:39.232164672 +0000 UTC m=+0.127924047 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:04:39 compute-0 sudo[138438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffgmtlixgnfgiryvuslbvyumarbtjpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896678.8088386-1061-81849136314734/AnsiballZ_systemd.py'
Dec 05 01:04:39 compute-0 sudo[138438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:39 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 05 01:04:39 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 05 01:04:39 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 05 01:04:39 compute-0 python3.9[138441]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:04:39 compute-0 systemd[1]: Reloading.
Dec 05 01:04:39 compute-0 systemd-sysv-generator[138474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:04:39 compute-0 systemd-rc-local-generator[138467]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:04:39 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 05 01:04:39 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 05 01:04:39 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 05 01:04:39 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 05 01:04:39 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 01:04:39 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 01:04:40 compute-0 sudo[138438]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:40 compute-0 setroubleshoot[138363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2ebc3f3e-823e-438e-8e55-7092ceae60db
Dec 05 01:04:40 compute-0 sudo[138659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoqrpapzpyiuqetlrjlcrhsahacavhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896680.167358-1061-207855445453856/AnsiballZ_systemd.py'
Dec 05 01:04:40 compute-0 sudo[138659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:40 compute-0 setroubleshoot[138363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
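[Annotation] If the dac_read_search denial above is ever worked around locally rather than reported, the catchall plugin's commands could be wrapped in a task. The two shell commands are verbatim from the setroubleshoot output; the task framing and the module name my-virtlogd are illustrative only:

- name: Build and load a local policy module for virtlogd (illustrative workaround)
  ansible.builtin.shell: |
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp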
Dec 05 01:04:40 compute-0 python3.9[138661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:04:40 compute-0 systemd[1]: Reloading.
Dec 05 01:04:40 compute-0 systemd-sysv-generator[138691]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:04:40 compute-0 systemd-rc-local-generator[138688]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:04:41 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 05 01:04:41 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 05 01:04:41 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 05 01:04:41 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 05 01:04:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 05 01:04:41 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 05 01:04:41 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 05 01:04:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 05 01:04:41 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 05 01:04:41 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 05 01:04:41 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 05 01:04:41 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 05 01:04:41 compute-0 sudo[138659]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:41 compute-0 sudo[138873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhjnchxirwdcbfrfqaecojzmfhjqitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896681.4148102-1061-221665458479434/AnsiballZ_systemd.py'
Dec 05 01:04:41 compute-0 sudo[138873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:42 compute-0 python3.9[138875]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:04:42 compute-0 systemd[1]: Reloading.
Dec 05 01:04:42 compute-0 systemd-rc-local-generator[138904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:04:42 compute-0 systemd-sysv-generator[138907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:04:42 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 05 01:04:42 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 05 01:04:42 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 05 01:04:42 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 05 01:04:42 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 05 01:04:42 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 05 01:04:42 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 01:04:42 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 01:04:42 compute-0 sudo[138873]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:43 compute-0 sudo[139085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdpuljazonispuxleegxlxejyoeuocxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896682.8406105-1098-200769991891341/AnsiballZ_file.py'
Dec 05 01:04:43 compute-0 sudo[139085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:43 compute-0 python3.9[139087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:43 compute-0 sudo[139085]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:43 compute-0 sudo[139237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzmlxhaihwjpcymjjuahhtwequbgfca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896683.6066062-1106-99142469824432/AnsiballZ_find.py'
Dec 05 01:04:43 compute-0 sudo[139237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:44 compute-0 python3.9[139239]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:04:44 compute-0 sudo[139237]: pam_unix(sudo:session): session closed for user root
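[Annotation] The find task scans /var/lib/openstack/config/ceph for *.conf files, typically to decide whether ceph client configuration needs to be wired into libvirt. Equivalent task built from the logged parameters (task name assumed):

- name: Look for ceph configuration files (mirrors the logged call)
  ansible.builtin.find:
    paths: /var/lib/openstack/config/ceph
    patterns: "*.conf"
    file_type: file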
Dec 05 01:04:45 compute-0 sudo[139389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfwdhkyjugxuogsfpciqrwecflwurjmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896684.8031545-1120-234277022385105/AnsiballZ_stat.py'
Dec 05 01:04:45 compute-0 sudo[139389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:45 compute-0 python3.9[139391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:45 compute-0 sudo[139389]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:45 compute-0 sudo[139512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcoxnhqtfxsrvltnwvyxijiqspsavhdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896684.8031545-1120-234277022385105/AnsiballZ_copy.py'
Dec 05 01:04:45 compute-0 sudo[139512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:45 compute-0 python3.9[139514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896684.8031545-1120-234277022385105/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:45 compute-0 sudo[139512]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:46 compute-0 sudo[139664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azlfwaszeodheoaerkvkjbiqxygtuqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896686.310252-1136-144446637834573/AnsiballZ_file.py'
Dec 05 01:04:46 compute-0 sudo[139664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:46 compute-0 python3.9[139666]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:46 compute-0 sudo[139664]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:47 compute-0 sudo[139816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdolydkfacqgppzudmtmerptqjmmaqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896687.0575912-1144-89597178960102/AnsiballZ_stat.py'
Dec 05 01:04:47 compute-0 sudo[139816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:47 compute-0 python3.9[139818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:47 compute-0 sudo[139816]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:47 compute-0 sudo[139894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrawlorwpjgtgvcbiyyglsoyjlswkusx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896687.0575912-1144-89597178960102/AnsiballZ_file.py'
Dec 05 01:04:47 compute-0 sudo[139894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:48 compute-0 python3.9[139896]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:48 compute-0 sudo[139894]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:48 compute-0 sudo[140046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysvnwewodichpkovawzmsvettqvyrmvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896688.2863553-1156-88198375903945/AnsiballZ_stat.py'
Dec 05 01:04:48 compute-0 sudo[140046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:48 compute-0 python3.9[140048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:48 compute-0 sudo[140046]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:49 compute-0 sudo[140124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhoqsemsmhoibsbuepofcmbxjgoadzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896688.2863553-1156-88198375903945/AnsiballZ_file.py'
Dec 05 01:04:49 compute-0 sudo[140124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:49 compute-0 python3.9[140126]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g3sagl8j recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:49 compute-0 sudo[140124]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:49 compute-0 sudo[140276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkghgyftsfypqchmkmcidmpptuxktxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896689.3742657-1168-204722995838628/AnsiballZ_stat.py'
Dec 05 01:04:49 compute-0 sudo[140276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:49 compute-0 python3.9[140278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:49 compute-0 sudo[140276]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:50 compute-0 sudo[140354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outqbmunuydvgsmopjunibuqakguwjjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896689.3742657-1168-204722995838628/AnsiballZ_file.py'
Dec 05 01:04:50 compute-0 sudo[140354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:50 compute-0 python3.9[140356]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:50 compute-0 sudo[140354]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:50 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 05 01:04:50 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 05 01:04:50 compute-0 sudo[140506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eacuxmlwjapzjkzxkuwhjmrseezyusmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896690.5970302-1181-173125319086691/AnsiballZ_command.py'
Dec 05 01:04:50 compute-0 sudo[140506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:51 compute-0 python3.9[140508]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:04:51 compute-0 sudo[140506]: pam_unix(sudo:session): session closed for user root
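The `nft -j list ruleset` call above dumps the live ruleset as JSON, presumably so the play can inspect it structurally instead of scraping text output. A minimal way to eyeball the same data by hand (json.tool is used here only for pretty-printing):

    # Dump the active ruleset as JSON and pretty-print the first lines.
    nft -j list ruleset | python3 -m json.tool | head -n 20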
Dec 05 01:04:51 compute-0 sudo[140659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbzxbezpgctpxohukddkjqbwqpguvur ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896691.3579643-1189-61564248219367/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:04:51 compute-0 sudo[140659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:52 compute-0 python3[140661]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:04:52 compute-0 sudo[140659]: pam_unix(sudo:session): session closed for user root
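The custom edpm_nftables_from_files module reads the YAML fragments under /var/lib/edpm-config/firewall, presumably into facts for the template tasks that follow; the rendered snippets then land under /etc/nftables, as the stat and copy invocations below show. To see what was produced:

    # The generated snippets this play assembles (names taken from the log).
    ls -l /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
          /etc/nftables/edpm-rules.nft  /etc/nftables/edpm-update-jumps.nft \
          /etc/nftables/edpm-jumps.nft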
Dec 05 01:04:52 compute-0 sudo[140811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cafnqsiohoxvyjrwmhuurunctpltlvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896692.334628-1197-246764280795299/AnsiballZ_stat.py'
Dec 05 01:04:52 compute-0 sudo[140811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:52 compute-0 python3.9[140813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:52 compute-0 sudo[140811]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:53 compute-0 sudo[140889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fppxcmqoujodpiqoequqasajiaoasuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896692.334628-1197-246764280795299/AnsiballZ_file.py'
Dec 05 01:04:53 compute-0 sudo[140889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:53 compute-0 python3.9[140891]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:53 compute-0 sudo[140889]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:54 compute-0 sudo[141041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswufxtcsfsqyyvefygmuitbgrlqpkhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896693.7224689-1209-74773450364831/AnsiballZ_stat.py'
Dec 05 01:04:54 compute-0 sudo[141041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:54 compute-0 python3.9[141043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:54 compute-0 sudo[141041]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:54 compute-0 sudo[141119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spsnpgpjpofpicoyzrmtxcpjlmlmijuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896693.7224689-1209-74773450364831/AnsiballZ_file.py'
Dec 05 01:04:54 compute-0 sudo[141119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:54 compute-0 python3.9[141121]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:54 compute-0 sudo[141119]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:55 compute-0 sudo[141271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mimgwimwxqvdalafkymybeagcanaagub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896694.995596-1221-245340854318569/AnsiballZ_stat.py'
Dec 05 01:04:55 compute-0 sudo[141271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:55 compute-0 python3.9[141273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:55 compute-0 sudo[141271]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:55 compute-0 sudo[141349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heodnllsrftzefosqlmoijtdtqhywrqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896694.995596-1221-245340854318569/AnsiballZ_file.py'
Dec 05 01:04:55 compute-0 sudo[141349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:56 compute-0 python3.9[141351]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:56 compute-0 sudo[141349]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:56 compute-0 sudo[141501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqzhihiinrzfyllvbclgtznceaiilwno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896696.390583-1233-70509084451155/AnsiballZ_stat.py'
Dec 05 01:04:56 compute-0 sudo[141501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:57 compute-0 python3.9[141503]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:57 compute-0 sudo[141501]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:57 compute-0 sudo[141579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwgvklhjkstjbuuihgpnfguanwvmqctn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896696.390583-1233-70509084451155/AnsiballZ_file.py'
Dec 05 01:04:57 compute-0 sudo[141579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:57 compute-0 python3.9[141581]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:57 compute-0 sudo[141579]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:58 compute-0 sudo[141731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wudybpzjdnxvdjneasurlkmhgihpstiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896697.7921166-1245-268524551476562/AnsiballZ_stat.py'
Dec 05 01:04:58 compute-0 sudo[141731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:58 compute-0 python3.9[141733]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:04:58 compute-0 sudo[141731]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:58 compute-0 sudo[141856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljqgsgmrzaxsgnzzkmqhpsijuufpsnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896697.7921166-1245-268524551476562/AnsiballZ_copy.py'
Dec 05 01:04:58 compute-0 sudo[141856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:59 compute-0 python3.9[141858]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896697.7921166-1245-268524551476562/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:59 compute-0 sudo[141856]: pam_unix(sudo:session): session closed for user root
Dec 05 01:04:59 compute-0 sudo[142008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmwpmvpkhtgywseswqssnqodbcpodsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896699.221969-1260-240397773861219/AnsiballZ_file.py'
Dec 05 01:04:59 compute-0 sudo[142008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:04:59 compute-0 python3.9[142010]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:04:59 compute-0 sudo[142008]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:00 compute-0 sudo[142160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shovrgchofhosfshmjdgmzjbqymrvpeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896699.901997-1268-79652481762858/AnsiballZ_command.py'
Dec 05 01:05:00 compute-0 sudo[142160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:00 compute-0 python3.9[142162]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:05:00 compute-0 sudo[142160]: pam_unix(sudo:session): session closed for user root
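The command above is a dry run: `nft -c -f -` parses the concatenated snippets without installing them, so a syntax error fails the play before anything touches the live ruleset. Reproduced from the log:

    # Validate the assembled ruleset without applying it (-c = check only).
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -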
Dec 05 01:05:01 compute-0 sudo[142315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-banchgdaahlwobkulnmxfuoslnqttzes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896700.6735973-1276-198337871154919/AnsiballZ_blockinfile.py'
Dec 05 01:05:01 compute-0 sudo[142315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:01 compute-0 python3.9[142317]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:01 compute-0 sudo[142315]: pam_unix(sudo:session): session closed for user root
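Given the marker and include parameters logged above, the blockinfile task should leave /etc/sysconfig/nftables.conf with a managed block like the following (reconstructed from the invocation; the task itself validates the result with `nft -c -f %s`):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK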
Dec 05 01:05:01 compute-0 sudo[142467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ookuufsbegwtsamwfbhwqctrrhcbrioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896701.627047-1285-263884632141520/AnsiballZ_command.py'
Dec 05 01:05:01 compute-0 sudo[142467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:02 compute-0 python3.9[142469]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:05:02 compute-0 sudo[142467]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:02 compute-0 sudo[142620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyjpwasxjpxnwxwaozelqzrhrfoqfqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896702.3635778-1293-91627482517941/AnsiballZ_stat.py'
Dec 05 01:05:02 compute-0 sudo[142620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:02 compute-0 python3.9[142622]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:05:02 compute-0 sudo[142620]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:03 compute-0 sudo[142774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjkmywlhidroqtlafdpumblcoruelaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896703.061283-1301-3688228325502/AnsiballZ_command.py'
Dec 05 01:05:03 compute-0 sudo[142774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:03 compute-0 python3.9[142776]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:05:03 compute-0 sudo[142774]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:04 compute-0 sudo[142929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkgzurhvgaqmbpxfwpfrwstjzckpjjas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896703.9105577-1309-162697015968583/AnsiballZ_file.py'
Dec 05 01:05:04 compute-0 sudo[142929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:04 compute-0 python3.9[142931]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:04 compute-0 sudo[142929]: pam_unix(sudo:session): session closed for user root
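The last few tasks show a changed-file sentinel pattern: edpm-rules.nft.changed is touched when the rules are rewritten, checked with stat, and removed once the flush/rules/update-jumps bundle has been applied live with `nft -f -`. The equivalent shell logic, with paths taken from the log:

    # Apply the new rules only when the sentinel says they changed.
    if [ -f /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi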
Dec 05 01:05:05 compute-0 sudo[143081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbsbjtznslfskivigkzbbxhbdxxmgzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896704.7191954-1317-86518957866666/AnsiballZ_stat.py'
Dec 05 01:05:05 compute-0 sudo[143081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:05 compute-0 python3.9[143083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:05 compute-0 sudo[143081]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:05 compute-0 sudo[143204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvdbonflvifzzvmhhaeoudylqsrzesyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896704.7191954-1317-86518957866666/AnsiballZ_copy.py'
Dec 05 01:05:05 compute-0 sudo[143204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:05 compute-0 python3.9[143206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896704.7191954-1317-86518957866666/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:05 compute-0 sudo[143204]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:06 compute-0 sudo[143356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prblezpaomxibkwyklxmdwocdlpmzzqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896706.1512437-1332-202513347411991/AnsiballZ_stat.py'
Dec 05 01:05:06 compute-0 sudo[143356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:06 compute-0 python3.9[143358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:06 compute-0 sudo[143356]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:07 compute-0 sudo[143479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndgwldfhxeqrkurfjxpzoxszysbkjsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896706.1512437-1332-202513347411991/AnsiballZ_copy.py'
Dec 05 01:05:07 compute-0 sudo[143479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:07 compute-0 python3.9[143481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896706.1512437-1332-202513347411991/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:07 compute-0 sudo[143479]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:07 compute-0 sudo[143631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmasljpyyzvciterlttstebdozlzmror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896707.5948699-1347-147789048625061/AnsiballZ_stat.py'
Dec 05 01:05:07 compute-0 sudo[143631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:08 compute-0 python3.9[143633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:08 compute-0 sudo[143631]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:08 compute-0 sudo[143754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujelmxfubxwipmaqiilkrrbckpigstbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896707.5948699-1347-147789048625061/AnsiballZ_copy.py'
Dec 05 01:05:08 compute-0 sudo[143754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:08 compute-0 python3.9[143756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896707.5948699-1347-147789048625061/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:08 compute-0 sudo[143754]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:09 compute-0 sudo[143906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmxfeunvuxehrjcfbhsixlqgrjhxptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896708.9549644-1362-92730340406722/AnsiballZ_systemd.py'
Dec 05 01:05:09 compute-0 sudo[143906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:09 compute-0 podman[143908]: 2025-12-05 01:05:09.44576415 +0000 UTC m=+0.126521338 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:05:09 compute-0 python3.9[143909]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:05:09 compute-0 systemd[1]: Reloading.
Dec 05 01:05:09 compute-0 systemd-rc-local-generator[143961]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:05:09 compute-0 systemd-sysv-generator[143967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Dec 05 01:05:11 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 05 01:05:11 compute-0 sudo[143906]: pam_unix(sudo:session): session closed for user root
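The systemd module call above (daemon_reload=True, enabled=True, state=restarted) maps onto plain systemctl operations, after which systemd logs "Reached target edpm_libvirt.target":

    # Shell equivalent of the ansible.builtin.systemd invocation above.
    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target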
Dec 05 01:05:11 compute-0 sudo[144125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpnanhrrauwptadafcurpxbxttxjuzbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896711.255962-1370-158071864975838/AnsiballZ_systemd.py'
Dec 05 01:05:11 compute-0 sudo[144125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:11 compute-0 python3.9[144127]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 01:05:11 compute-0 systemd[1]: Reloading.
Dec 05 01:05:11 compute-0 systemd-sysv-generator[144154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Dec 05 01:05:11 compute-0 systemd-rc-local-generator[144148]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:05:12 compute-0 systemd[1]: Reloading.
Dec 05 01:05:12 compute-0 systemd-rc-local-generator[144189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:05:12 compute-0 systemd-sysv-generator[144193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Dec 05 01:05:12 compute-0 sudo[144125]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:12 compute-0 sshd-session[89891]: Connection closed by 192.168.122.30 port 52400
Dec 05 01:05:12 compute-0 sshd-session[89888]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:05:12 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Dec 05 01:05:12 compute-0 systemd[1]: session-21.scope: Consumed 3min 31.189s CPU time.
Dec 05 01:05:12 compute-0 systemd-logind[792]: Session 21 logged out. Waiting for processes to exit.
Dec 05 01:05:12 compute-0 sshd-session[144074]: Invalid user user from 45.135.232.92 port 36608
Dec 05 01:05:12 compute-0 systemd-logind[792]: Removed session 21.
Dec 05 01:05:13 compute-0 sshd-session[144074]: Connection reset by invalid user user 45.135.232.92 port 36608 [preauth]
Dec 05 01:05:15 compute-0 sshd-session[144225]: Connection reset by authenticating user root 45.135.232.92 port 36612 [preauth]
Dec 05 01:05:17 compute-0 sshd-session[144227]: Connection reset by authenticating user root 45.135.232.92 port 49808 [preauth]
Dec 05 01:05:18 compute-0 sshd-session[144231]: Accepted publickey for zuul from 192.168.122.30 port 40404 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:05:18 compute-0 systemd-logind[792]: New session 22 of user zuul.
Dec 05 01:05:18 compute-0 systemd[1]: Started Session 22 of User zuul.
Dec 05 01:05:18 compute-0 sshd-session[144231]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:05:18 compute-0 sshd-session[144229]: Invalid user Admin from 45.135.232.92 port 49814
Dec 05 01:05:19 compute-0 sshd-session[144229]: Connection reset by invalid user Admin 45.135.232.92 port 49814 [preauth]
Dec 05 01:05:20 compute-0 python3.9[144386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:05:21 compute-0 sshd-session[144358]: Connection reset by authenticating user root 45.135.232.92 port 49842 [preauth]
Dec 05 01:05:21 compute-0 sudo[144540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyzdhsbxbbhljmjzmcamtuxormqxglka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896720.907678-36-83980747793132/AnsiballZ_systemd_service.py'
Dec 05 01:05:21 compute-0 sudo[144540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:21 compute-0 python3.9[144542]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:05:21 compute-0 systemd[1]: Reloading.
Dec 05 01:05:21 compute-0 systemd-rc-local-generator[144565]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:05:21 compute-0 systemd-sysv-generator[144570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Dec 05 01:05:22 compute-0 sudo[144540]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:23 compute-0 python3.9[144726]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:05:23 compute-0 network[144743]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec 05 01:05:23 compute-0 network[144744]: 'network-scripts' will be removed from the distribution in the near future.
Dec 05 01:05:23 compute-0 network[144745]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:05:29 compute-0 sudo[145014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udaaygkrvscnxvvjbcjiliqbakxgqsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896729.0921838-55-204392965191285/AnsiballZ_systemd_service.py'
Dec 05 01:05:29 compute-0 sudo[145014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:29 compute-0 python3.9[145016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:05:29 compute-0 sudo[145014]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:30 compute-0 sudo[145167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpozdzobkjwwlfpkykvqgtydpymmidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896730.062989-65-3987909705075/AnsiballZ_file.py'
Dec 05 01:05:30 compute-0 sudo[145167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:30 compute-0 python3.9[145169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:30 compute-0 sudo[145167]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:31 compute-0 sudo[145319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfogolijsbwccexwirgscpnzlwqbxaxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896730.861214-73-124329211136500/AnsiballZ_file.py'
Dec 05 01:05:31 compute-0 sudo[145319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:31 compute-0 python3.9[145321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:31 compute-0 sudo[145319]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:32 compute-0 sudo[145471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjmczempmsmkazbtdcpysnovmpjjcgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896731.5639122-82-100207332996645/AnsiballZ_command.py'
Dec 05 01:05:32 compute-0 sudo[145471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:32 compute-0 python3.9[145473]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:05:32 compute-0 sudo[145471]: pam_unix(sudo:session): session closed for user root
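The guarded shell above stops and disables certmonger, then masks it only when no local override unit exists in /etc/systemd/system (masking would otherwise shadow a deliberately installed local unit). Whether the mask took effect can be checked with:

    # "masked" in the output confirms the unit is now linked to /dev/null.
    systemctl is-enabled certmonger.service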
Dec 05 01:05:33 compute-0 python3.9[145625]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:05:33 compute-0 sudo[145775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihtatoyuijwmzgjmalfgilqjeshueqmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896733.4043822-100-234676482095913/AnsiballZ_systemd_service.py'
Dec 05 01:05:33 compute-0 sudo[145775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:34 compute-0 python3.9[145777]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:05:34 compute-0 systemd[1]: Reloading.
Dec 05 01:05:34 compute-0 systemd-sysv-generator[145807]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Dec 05 01:05:34 compute-0 systemd-rc-local-generator[145802]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:05:34 compute-0 sudo[145775]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:34 compute-0 sudo[145962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfxupeuwdtoumwgjdmrnjzqdrrwguowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896734.5352697-108-71257080280486/AnsiballZ_command.py'
Dec 05 01:05:34 compute-0 sudo[145962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:35 compute-0 python3.9[145964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:05:35 compute-0 sudo[145962]: pam_unix(sudo:session): session closed for user root
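reset-failed clears any remembered failed state for the unit whose files were deleted a few tasks earlier, so the stale name stops showing up in failure listings:

    # After reset-failed the removed unit should no longer be listed.
    systemctl reset-failed tripleo_ceilometer_agent_compute.service
    systemctl --failed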
Dec 05 01:05:35 compute-0 sudo[146115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcpsrixdrhhjhxxahhpgljpkzfuqzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896735.286633-117-169376153086424/AnsiballZ_file.py'
Dec 05 01:05:35 compute-0 sudo[146115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:35 compute-0 python3.9[146117]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:05:35 compute-0 sudo[146115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:36 compute-0 python3.9[146267]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:05:37 compute-0 python3.9[146419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:38 compute-0 python3.9[146540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896736.7981956-133-90194017109416/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:05:38 compute-0 sudo[146690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feyohyvimdkprznashkitsbxvwyahofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896738.309938-148-57666241014638/AnsiballZ_group.py'
Dec 05 01:05:38 compute-0 sudo[146690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:39 compute-0 python3.9[146692]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 05 01:05:39 compute-0 sudo[146690]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:39 compute-0 podman[146769]: 2025-12-05 01:05:39.705129721 +0000 UTC m=+0.117314037 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:05:39 compute-0 sudo[146871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syutqzbgamntichsweeadvvpadjrtwyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896739.468253-159-8888071130344/AnsiballZ_getent.py'
Dec 05 01:05:39 compute-0 sudo[146871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:40 compute-0 python3.9[146873]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 05 01:05:40 compute-0 sudo[146871]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:40 compute-0 sudo[147024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnvymcsmvtxwicdjirblyiuiyniiouy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896740.3418307-167-248811159825811/AnsiballZ_group.py'
Dec 05 01:05:40 compute-0 sudo[147024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:40 compute-0 python3.9[147026]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 01:05:40 compute-0 groupadd[147027]: group added to /etc/group: name=ceilometer, GID=42405
Dec 05 01:05:40 compute-0 groupadd[147027]: group added to /etc/gshadow: name=ceilometer
Dec 05 01:05:40 compute-0 groupadd[147027]: new group: name=ceilometer, GID=42405
Dec 05 01:05:40 compute-0 sudo[147024]: pam_unix(sudo:session): session closed for user root
Dec 05 01:05:41 compute-0 sudo[147182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgdvratrchdcqquahserbuhxwjqbxkme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896741.0565763-175-15274321354943/AnsiballZ_user.py'
Dec 05 01:05:41 compute-0 sudo[147182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:05:41 compute-0 python3.9[147184]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 01:05:41 compute-0 useradd[147186]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 05 01:05:41 compute-0 useradd[147186]: add 'ceilometer' to group 'libvirt'
Dec 05 01:05:41 compute-0 useradd[147186]: add 'ceilometer' to shadow group 'libvirt'
Dec 05 01:05:41 compute-0 sudo[147182]: pam_unix(sudo:session): session closed for user root
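The user module call resolves to an ordinary group and user creation, matching the groupadd/useradd journal entries above; the shell equivalent, using the values from the log:

    # Equivalent of the ansible.builtin.group/user tasks logged above.
    groupadd --gid 42405 ceilometer
    useradd --uid 42405 --gid ceilometer --groups libvirt \
            --comment 'ceilometer user' --shell /sbin/nologin ceilometer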
Dec 05 01:05:43 compute-0 python3.9[147342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:43 compute-0 python3.9[147463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896742.661243-201-85843281529925/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:44 compute-0 python3.9[147613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:44 compute-0 python3.9[147734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896743.8280778-201-76061545703720/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:45 compute-0 python3.9[147884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:46 compute-0 python3.9[148005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896744.991766-201-62915567657064/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:46 compute-0 python3.9[148155]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:05:47 compute-0 python3.9[148307]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:05:48 compute-0 python3.9[148459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:48 compute-0 python3.9[148580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896747.6889687-260-97483718524400/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
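The mode=420 in this and the following copy invocations is not a typo: Ansible logs the mode as a decimal integer, and 420 is 0644 in octal (most likely an unquoted `0644` in the playbook, which YAML parses as an octal literal). Quick check:

    # 420 decimal is the familiar rw-r--r-- permission set.
    python3 -c 'print(oct(420))'   # prints 0o644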
Dec 05 01:05:49 compute-0 python3.9[148730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:49 compute-0 python3.9[148806]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:50 compute-0 python3.9[148956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:51 compute-0 python3.9[149077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896750.103119-260-115629858909633/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:51 compute-0 python3.9[149227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:52 compute-0 python3.9[149348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896751.396997-260-245763008668448/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:53 compute-0 python3.9[149498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:53 compute-0 python3.9[149619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896752.6221862-260-148524536485348/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:54 compute-0 python3.9[149769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:55 compute-0 python3.9[149890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896753.8618016-260-17095427333205/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:55 compute-0 python3.9[150040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:56 compute-0 python3.9[150161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896755.2900317-260-250805069559045/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:57 compute-0 python3.9[150311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:57 compute-0 python3.9[150432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896756.647258-260-21738875230583/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:05:58 compute-0 python3.9[150582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:05:59 compute-0 python3.9[150703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896757.9909909-260-113179732565968/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:00 compute-0 python3.9[150853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:00 compute-0 python3.9[150974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896759.4269624-260-192098596907389/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:01 compute-0 python3.9[151124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:02 compute-0 python3.9[151245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896761.0721328-260-233894767635941/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:03 compute-0 python3.9[151395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:03 compute-0 python3.9[151471]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:04 compute-0 python3.9[151621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:04 compute-0 python3.9[151697]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:05 compute-0 python3.9[151847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:06 compute-0 python3.9[151923]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
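A note on the file modes logged above: the hand-written configs are copied with mode=0640, while the templated files show mode=420. 420 is just octal 0644 rendered in decimal, which is what Ansible logs when a playbook likely passed mode unquoted in YAML (YAML parses a bare 0644 as an octal integer). A quick Python check:

# mode=420 in the copy log above is octal 0644 printed as a decimal integer:
assert 0o644 == 420
assert oct(420) == "0o644"
# the quoted form used by the first tasks stays octal in the log:
assert int("0640", 8) == 416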
Dec 05 01:06:06 compute-0 sudo[152073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgqecqkgeftmvmjczmuqgllkxqmegdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896766.2373242-449-117758396049661/AnsiballZ_file.py'
Dec 05 01:06:06 compute-0 sudo[152073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:06 compute-0 python3.9[152075]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:06 compute-0 sudo[152073]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:07 compute-0 sudo[152225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seviyppaxypqalmeryoiazpczkmizjeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896767.1292267-457-119532360265010/AnsiballZ_file.py'
Dec 05 01:06:07 compute-0 sudo[152225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:07 compute-0 python3.9[152227]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:07 compute-0 sudo[152225]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:08 compute-0 sudo[152377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cehbrefrluabcbsnwrmoflnofgdahihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896767.9325888-465-181063355523563/AnsiballZ_file.py'
Dec 05 01:06:08 compute-0 sudo[152377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:08 compute-0 python3.9[152379]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:06:08 compute-0 sudo[152377]: pam_unix(sudo:session): session closed for user root
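The file task above creates /var/lib/openstack/healthchecks with setype=container_file_t so podman may bind-mount it into containers. On an SELinux-enabled host the resulting label can be read back through the security.selinux extended attribute; a minimal sketch, assuming the directory exists as logged:

import os

path = "/var/lib/openstack/healthchecks"  # directory created in the log above
# The SELinux label is exposed as a NUL-terminated extended-attribute value,
# e.g. "system_u:object_r:container_file_t:s0".
label = os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()
print(label)
assert "container_file_t" in label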
Dec 05 01:06:09 compute-0 sudo[152529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqsbjzmlrzcjhjtduofbdvudxporipxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896768.6256816-473-84628720967173/AnsiballZ_systemd_service.py'
Dec 05 01:06:09 compute-0 sudo[152529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:09 compute-0 python3.9[152531]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:06:09 compute-0 systemd[1]: Reloading.
Dec 05 01:06:09 compute-0 systemd-rc-local-generator[152561]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:06:09 compute-0 systemd-sysv-generator[152565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:06:09 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 05 01:06:09 compute-0 sudo[152529]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:09 compute-0 podman[152570]: 2025-12-05 01:06:09.886829477 +0000 UTC m=+0.095001678 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
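The podman line above is a periodic healthcheck event for ovn_controller (health_status=healthy, health_failing_streak=0). The same state can be read back with podman inspect; a minimal sketch driving the CLI from Python, using the container name from the log (the JSON field is State.Health on current podman, while older releases used State.Healthcheck, so both are tried):

import json
import subprocess

name = "ovn_controller"  # container name taken from the log above
out = subprocess.run(
    ["podman", "inspect", name],
    check=True, capture_output=True, text=True,
).stdout
state = json.loads(out)[0]["State"]
# Field name varies by podman version.
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), health.get("FailingStreak"))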
Dec 05 01:06:10 compute-0 sudo[152744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqdsdfvkdjzcukkroskmlojxdqbiwkzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/AnsiballZ_stat.py'
Dec 05 01:06:10 compute-0 sudo[152744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:10 compute-0 python3.9[152746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:10 compute-0 sudo[152744]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:11 compute-0 sudo[152867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzidgkrtxrlqampbpimzfrxvgedpcyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/AnsiballZ_copy.py'
Dec 05 01:06:11 compute-0 sudo[152867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:11 compute-0 python3.9[152869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:06:11 compute-0 sudo[152867]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:11 compute-0 sudo[152943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncqlhadqimksfutvuyowjcwacqaajym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/AnsiballZ_stat.py'
Dec 05 01:06:11 compute-0 sudo[152943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:11 compute-0 python3.9[152945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:11 compute-0 sudo[152943]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:12 compute-0 sudo[153066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvltxqtowzikqrwmioeiqrhkmtznzwki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/AnsiballZ_copy.py'
Dec 05 01:06:12 compute-0 sudo[153066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:12 compute-0 python3.9[153068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:06:12 compute-0 sudo[153066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:13 compute-0 sudo[153218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcnricstwspbyaspnchaqdyknukmoiod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896772.775688-510-163689010078773/AnsiballZ_container_config_data.py'
Dec 05 01:06:13 compute-0 sudo[153218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:13 compute-0 python3.9[153220]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 05 01:06:13 compute-0 sudo[153218]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:14 compute-0 sudo[153370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffbcfakknztppxwjdplghiqivzpwsse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896773.7396452-519-115254670919316/AnsiballZ_container_config_hash.py'
Dec 05 01:06:14 compute-0 sudo[153370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:14 compute-0 python3.9[153372]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:06:14 compute-0 sudo[153370]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:15 compute-0 sudo[153522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezaloqfydmsyxysuxesyxydhwarhiuks ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896774.8159442-529-65261596547511/AnsiballZ_edpm_container_manage.py'
Dec 05 01:06:15 compute-0 sudo[153522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:15 compute-0 python3[153524]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
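ansible-edpm_container_manage is invoked above with config_dir, config_patterns and config_id, and manages one container per matching JSON definition (concurrency=1 here). A rough sketch of just the selection step, assuming the module globs config_patterns under config_dir and that each file holds the container definition seen later in the create event (the real module in the edpm-ansible collection does considerably more):

import glob
import json
import os

config_dir = "/var/lib/openstack/config/telemetry"   # from the log above
config_patterns = "ceilometer_agent_compute.json"    # from the log above

for path in sorted(glob.glob(os.path.join(config_dir, config_patterns))):
    with open(path) as f:
        definition = json.load(f)
    name = os.path.splitext(os.path.basename(path))[0]
    print(name, "->", definition.get("image"))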
Dec 05 01:06:31 compute-0 podman[153538]: 2025-12-05 01:06:31.03494709 +0000 UTC m=+15.282830028 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 05 01:06:31 compute-0 podman[153679]: 2025-12-05 01:06:31.170939389 +0000 UTC m=+0.046878695 container create 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:06:31 compute-0 podman[153679]: 2025-12-05 01:06:31.143006739 +0000 UTC m=+0.018946065 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 05 01:06:31 compute-0 python3[153524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec 05 01:06:31 compute-0 sudo[153522]: pam_unix(sudo:session): session closed for user root
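The PODMAN-CONTAINER-DEBUG line above shows how the logged config_data dict maps onto podman create flags: net becomes --network, environment becomes --env pairs, healthcheck.test becomes --healthcheck-command, and each volumes entry becomes a --volume argument. A sketch of that mapping for an illustrative subset of the data (not the module's code; the volume list is truncated, the full set is in the log):

config_data = {
    "image": "quay.rdoproject.org/podified-master-centos10/"
             "openstack-ceilometer-compute:current-tested",
    "user": "ceilometer",
    "net": "host",
    "security_opt": "label:type:ceilometer_polling_t",
    "command": "kolla_start",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS",
                    "OS_ENDPOINT_TYPE": "internal"},
    "healthcheck": {"test": "/openstack/healthcheck compute"},
    "volumes": ["/dev/log:/dev/log"],  # truncated for the example
}

argv = ["podman", "create", "--name", "ceilometer_agent_compute"]
for key, value in config_data["environment"].items():
    argv += ["--env", f"{key}={value}"]
argv += ["--healthcheck-command", config_data["healthcheck"]["test"]]
argv += ["--network", config_data["net"],
         "--security-opt", config_data["security_opt"],
         "--user", config_data["user"]]
for volume in config_data["volumes"]:
    argv += ["--volume", volume]
argv += [config_data["image"], config_data["command"]]
print(" ".join(argv))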
Dec 05 01:06:31 compute-0 sudo[153865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kldlaovcdkhofrwobgotuclpnpvbcpin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896791.4610937-537-21460270425/AnsiballZ_stat.py'
Dec 05 01:06:31 compute-0 sudo[153865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:31 compute-0 python3.9[153867]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:06:31 compute-0 sudo[153865]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:32 compute-0 sudo[154019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqwykngrralmcrpwpjvzvulkzashxsif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896792.237254-546-44637538003307/AnsiballZ_file.py'
Dec 05 01:06:32 compute-0 sudo[154019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:32 compute-0 python3.9[154021]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:32 compute-0 sudo[154019]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:33 compute-0 sudo[154170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixsigsentwrptccvfijetistwldufpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896792.7826326-546-15650274504505/AnsiballZ_copy.py'
Dec 05 01:06:33 compute-0 sudo[154170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:33 compute-0 python3.9[154172]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896792.7826326-546-15650274504505/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:33 compute-0 sudo[154170]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:34 compute-0 sudo[154246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsicpeqfvcbnonkqavprfaxiywpqfgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896792.7826326-546-15650274504505/AnsiballZ_systemd.py'
Dec 05 01:06:34 compute-0 sudo[154246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:34 compute-0 python3.9[154248]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:06:34 compute-0 systemd[1]: Reloading.
Dec 05 01:06:34 compute-0 systemd-sysv-generator[154279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:06:34 compute-0 systemd-rc-local-generator[154275]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:06:34 compute-0 sudo[154246]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:34 compute-0 sudo[154357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqqhyhsnebmxjextypuydmksfyvgdwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896792.7826326-546-15650274504505/AnsiballZ_systemd.py'
Dec 05 01:06:34 compute-0 sudo[154357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:35 compute-0 python3.9[154359]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:06:35 compute-0 systemd[1]: Reloading.
Dec 05 01:06:35 compute-0 systemd-rc-local-generator[154389]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:06:35 compute-0 systemd-sysv-generator[154392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:06:35 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 01:06:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
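The "supports timestamps until 2038 (0x7fffffff)" kernel notices above refer to the classic 32-bit time_t limit on these XFS inodes (the v4 on-disk timestamp format). The hex value decodes as expected:

import time
# 0x7fffffff from the kernel message is the 32-bit time_t maximum.
print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(0x7FFFFFFF)))
# -> 2038-01-19 03:14:07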
Dec 05 01:06:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec 05 01:06:35 compute-0 podman[154399]: 2025-12-05 01:06:35.798492236 +0000 UTC m=+0.145941617 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + sudo -E kolla_set_configs
Dec 05 01:06:35 compute-0 sudo[154420]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:06:35 compute-0 sudo[154420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:06:35 compute-0 podman[154399]: 2025-12-05 01:06:35.833152734 +0000 UTC m=+0.180602075 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:06:35 compute-0 podman[154399]: ceilometer_agent_compute
Dec 05 01:06:35 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 05 01:06:35 compute-0 sudo[154357]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Validating config file
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Copying service configuration files
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:06:35 compute-0 podman[154421]: 2025-12-05 01:06:35.903685027 +0000 UTC m=+0.054924653 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: INFO:__main__:Writing out command to execute
Dec 05 01:06:35 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:06:35 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.service: Failed with result 'exit-code'.
Dec 05 01:06:35 compute-0 sudo[154420]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: ++ cat /run_command
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + ARGS=
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + sudo kolla_copy_cacerts
Dec 05 01:06:35 compute-0 sudo[154447]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:06:35 compute-0 sudo[154447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:06:35 compute-0 sudo[154447]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + [[ ! -n '' ]]
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + . kolla_extend_start
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + umask 0022
Dec 05 01:06:35 compute-0 ceilometer_agent_compute[154414]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
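Reading the kolla_set_configs output above together with the volume list: the host directory /var/lib/openstack/config/telemetry is mounted at /var/lib/openstack/config/ inside the container, which is why the copy sources lose the telemetry component, and the command in /run_command is the ceilometer-polling invocation that is finally exec'd. The usual kolla-style config.json layout reconstructed from those messages would look roughly like the dict below; the owner/perm values are assumptions, only the source/dest pairs and the command are taken from the log:

config = {
    "command": "/usr/bin/ceilometer-polling "
               "--polling-namespaces compute --logfile /dev/stdout",
    "config_files": [
        {"source": "/var/lib/openstack/config/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/openstack/config/polling.yaml",
         "dest": "/etc/ceilometer/polling.yaml",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/openstack/config/custom.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/"
                 "01-ceilometer-custom.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/"
                 "02-ceilometer-host-specific.conf",
         "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
    ],
}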
Dec 05 01:06:36 compute-0 sudo[154595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgisikdwqkhuhzwajddjgqefocovizcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896796.2677326-570-6310918617404/AnsiballZ_systemd.py'
Dec 05 01:06:36 compute-0 sudo[154595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.713 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
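The banner-delimited block above is cotyledon's startup dump of the effective oslo.config values for this worker: options without a group prefix are logged from cfg.py:2817 and dotted group.option pairs from cfg.py:2824, and options declared secret (coordination.backend_url, publisher.telemetry_secret, service_credentials.password, the rgw keys) are masked as ****. A minimal sketch of how such a dump is produced, assuming a standalone script with hypothetical option names rather than the full set ceilometer registers:

    # Minimal sketch of the oslo.config dump seen above. Assumptions: a
    # standalone script and hypothetical option names; the real options are
    # registered by ceilometer and cotyledon at service start.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt('batch_size', default=50),
        # secret=True is what renders a value as **** in the dump
        cfg.StrOpt('password', secret=True),
    ])
    CONF(['--config-file', '/etc/ceilometer/ceilometer.conf'])
    # Logs every registered option and its effective value, masking secrets
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)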
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.729 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.732 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.733 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 05 01:06:36 compute-0 python3.9[154597]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.942 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.951 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.952 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:06:36 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.952 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
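Two things happen in the entries above: the agent opens its hypervisor connection to qemu:///system (consistent with hypervisor_inspector = libvirt and libvirt_type = kvm in the dump), and it scans pollsters_definitions_dirs for dynamic pollster definitions, finding none, which is harmless when /etc/ceilometer/pollsters.d is empty. A minimal sketch of a read-only libvirt connection of the kind the compute inspector relies on, assuming the libvirt-python bindings are installed:

    # Minimal sketch of the libvirt connection logged above. Assumption:
    # libvirt-python is available; a read-only connection is sufficient for
    # polling, so openReadOnly is used here rather than open.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # Enumerating domains is the kind of call compute pollsters build on
        for dom in conn.listAllDomains():
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()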
Dec 05 01:06:36 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.051 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.083 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.094 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
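The warning at 01:06:37.094 follows directly from polling.threads_to_process_pollsters = 1 in the dump: the 'pollsters' source (interval 120, meters power.state, cpu, memory.usage, disk.*, network.*) expands to more pollsters than worker threads, so they queue on a single-worker executor and a polling cycle can run longer than expected. A minimal sketch of that single-worker pattern, with hypothetical pollster callables standing in for the registered extensions:

    # Minimal sketch of the single-worker execution implied by
    # polling.threads_to_process_pollsters = 1. The pollster callables are
    # hypothetical stand-ins for the stevedore extensions registered below.
    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            return f'{name}: sampled'
        return poll

    pollsters = [make_pollster(n) for n in
                 ('power.state', 'cpu', 'memory.usage')]

    # With max_workers=1 the submitted pollsters run strictly one after
    # another, so a slow pollster delays every one queued behind it.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(p) for p in pollsters]
        for f in futures:
            print(f.result())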
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe6f6a92390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a938c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91940>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9cc3170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a939e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9949a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9db02f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a902f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a923c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
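
Each "Registering pollster" line above pairs one stevedore Extension (one per meter) with the single shared ThreadPoolExecutor, starting from empty cache, history, and discovery-cache dictionaries. A hedged sketch of that load-and-submit pattern; the entry-point namespace 'ceilometer.poll.compute' is an assumption based on ceilometer's packaging and is not printed in the log:

    # Load pollster plugins from entry points and hand each one to a shared
    # executor, mirroring the register_pollster_execution lines above.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    def run_pollster(ext, cache, history, discovery_cache):
        # Stand-in for _internal_pollster_run: discover, then poll or skip.
        ...

    executor = ThreadPoolExecutor()
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',   # assumed namespace
        invoke_on_load=True,
    )
    cache, history, discovery_cache = {}, {}, {}
    futures = [
        executor.submit(run_pollster, ext, cache, history, discovery_cache)
        for ext in mgr
    ]
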
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a90590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe6f6a93800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe6f6a93890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe6f6a91af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe6f6a938f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe6f6a91a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe6fa43adb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe6f6a93950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe6f6a939b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe6f7bb14f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe6f6a91c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe6f6a91a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe6f6a93a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe6f6a93aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe6f6a93d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe6f6a902c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe6f6a93b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe6f6a91b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe6f92cf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe6f6a91be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe6f6a919d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe6f6a91cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe6f6a93dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe6f6a93d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe6f6a90560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe6f91da720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
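
The cycle above repeats one pattern per meter: run the local_instances discovery (its result is cached, so later pollsters in the same cycle reuse the same empty list), then skip the pollster, evidently because no instances were discovered on this host yet. An illustrative reduction of that logic, with names taken from the log but function bodies that are stand-ins rather than ceilometer's code:

    # Discover once per cycle, then skip pollsters whose resource list is empty.
    def poll_cycle(pollsters, discover):
        discovery_cache = {}
        for name, pollster in pollsters.items():
            if 'local_instances' not in discovery_cache:
                # e.g. list domains on the local hypervisor
                discovery_cache['local_instances'] = discover('local_instances')
            resources = discovery_cache['local_instances']
            if not resources:
                print(f'Skip pollster {name}, no resources found this cycle')
                continue
            pollster.get_samples(resources)   # simplified signature
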
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 05 01:06:37 compute-0 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.159 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
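
The SIGTERM sequence above is cotyledon's supervisor at work: the master process (pid 2) signals its child services, AgentManager (pid 14) and AgentHeartBeatManager (pid 12) log a graceful exit, and the master reports "Shutdown finish". A minimal cotyledon program exhibiting the same lifecycle, following the library's documented service pattern; DemoAgent is a stand-in for the real service classes:

    # Master/worker supervision as in the log: SIGTERM to the master is
    # forwarded to each child service, which then exits gracefully.
    import threading
    import cotyledon

    class DemoAgent(cotyledon.Service):
        def __init__(self, worker_id):
            super().__init__(worker_id)
            self._shutdown = threading.Event()

        def run(self):
            while not self._shutdown.wait(1):   # poll until told to stop
                pass

        def terminate(self):                    # called on SIGTERM
            self._shutdown.set()

    sm = cotyledon.ServiceManager()
    sm.add(DemoAgent, workers=1)
    sm.run()   # blocks; SIGTERM triggers the shutdown sequence above
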
Dec 05 01:06:37 compute-0 virtqemud[138703]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 05 01:06:37 compute-0 virtqemud[138703]: hostname: compute-0
Dec 05 01:06:37 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec 05 01:06:37 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec 05 01:06:37 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:06:37 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Consumed 1.470s CPU time.
Dec 05 01:06:37 compute-0 podman[154609]: 2025-12-05 01:06:37.327359655 +0000 UTC m=+0.315920623 container died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute)
Dec 05 01:06:37 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.timer: Deactivated successfully.
Dec 05 01:06:37 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec 05 01:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-userdata-shm.mount: Deactivated successfully.
Dec 05 01:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1-merged.mount: Deactivated successfully.
Dec 05 01:06:38 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 05 01:06:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 01:06:40 compute-0 podman[154647]: 2025-12-05 01:06:40.168713827 +0000 UTC m=+0.122565946 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:06:41 compute-0 podman[154609]: 2025-12-05 01:06:41.10237904 +0000 UTC m=+4.090939938 container cleanup 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm)
Dec 05 01:06:41 compute-0 podman[154609]: ceilometer_agent_compute
Dec 05 01:06:41 compute-0 podman[154675]: ceilometer_agent_compute
Dec 05 01:06:41 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 05 01:06:41 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 05 01:06:41 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 01:06:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec 05 01:06:41 compute-0 podman[154687]: 2025-12-05 01:06:41.354511407 +0000 UTC m=+0.135858346 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + sudo -E kolla_set_configs
Dec 05 01:06:41 compute-0 sudo[154708]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:06:41 compute-0 sudo[154708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:06:41 compute-0 podman[154687]: 2025-12-05 01:06:41.391993421 +0000 UTC m=+0.173340330 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:06:41 compute-0 podman[154687]: ceilometer_agent_compute
Dec 05 01:06:41 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 05 01:06:41 compute-0 sudo[154595]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Validating config file
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Copying service configuration files
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: INFO:__main__:Writing out command to execute
Dec 05 01:06:41 compute-0 sudo[154708]: pam_unix(sudo:session): session closed for user root
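
The copy steps above are driven by the kolla config file mounted at /var/lib/kolla/config_files/config.json. A hedged reconstruction of its content, written here as a Python dict: the source and dest paths are the ones the log prints and the command matches the one echoed further below, but the owner/perm fields are assumptions following kolla's documented schema.

    # Plausible shape of the config.json consumed by kolla_set_configs.
    KOLLA_CONFIG = {
        "command": ("/usr/bin/ceilometer-polling --polling-namespaces compute "
                    "--logfile /dev/stdout"),
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},      # perm assumed
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},      # perm assumed
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},      # perm assumed
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"},      # perm assumed
        ],
    }
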
Dec 05 01:06:41 compute-0 podman[154709]: 2025-12-05 01:06:41.468172828 +0000 UTC m=+0.064171518 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: ++ cat /run_command
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + ARGS=
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + sudo kolla_copy_cacerts
Dec 05 01:06:41 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:06:41 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed with result 'exit-code'.
Dec 05 01:06:41 compute-0 sudo[154731]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:06:41 compute-0 sudo[154731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:06:41 compute-0 sudo[154731]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + [[ ! -n '' ]]
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + . kolla_extend_start
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + umask 0022
Dec 05 01:06:41 compute-0 ceilometer_agent_compute[154702]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 05 01:06:42 compute-0 sudo[154883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcnpftjjbetrjifusldztzpnqsnjukov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896801.6723359-578-202301564311480/AnsiballZ_stat.py'
Dec 05 01:06:42 compute-0 sudo[154883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:42 compute-0 python3.9[154885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:42 compute-0 sudo[154883]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.358 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.358 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.375 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.377 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.378 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.380 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 05 01:06:42 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
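The row of asterisks closes the option dump that cotyledon's oslo_config_glue emits at service start via oslo.config's log_opt_values(); options registered with secret=True, such as service_credentials.password above, are printed as ****. A minimal sketch of the call that produces this block:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Dump every registered option and its effective value at DEBUG level;
    # secret options are masked in the output.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)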
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.524 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
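The dict above is the parsed polling definition. The on-disk YAML that load_config read (conventionally /etc/ceilometer/polling.yaml; the path itself is not logged) would look like:

    sources:
        - name: pollsters
          interval: 120
          meters:
              - power.state
              - cpu
              - memory.usage
              - disk.*
              - network.*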
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.538 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
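The two messages above describe the fan-out used for the rest of this cycle: all 26 pollsters registered from the [pollsters] source are submitted to one shared ThreadPoolExecutor with a single worker, so they run serially. A minimal sketch of that pattern (run_one is a hypothetical stand-in for ceilometer's _internal_pollster_run):

    from concurrent.futures import ThreadPoolExecutor, wait

    def run_polling_cycle(pollsters, run_one, threads=1):
        # With fewer threads than pollsters (1 vs. 26 here), submitted tasks
        # queue up and the cycle takes correspondingly longer, as warned above.
        with ThreadPoolExecutor(max_workers=threads) as executor:
            wait([executor.submit(run_one, p) for p in pollsters])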
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
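The connection logged above is what the local_instances discovery uses to enumerate guests on this host (the helper lives in ceilometer/compute/virt/libvirt/utils.py). A minimal sketch with the libvirt-python binding; opening read-only is an assumption for illustration:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # Every pollster below is skipped because this list is empty:
        # no instances are running on compute-0 at this point.
        print([dom.name() for dom in conn.listAllDomains()])
    finally:
        conn.close()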
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:06:42 compute-0 sudo[155020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zblzakyutaapdaggvlozxxndenldpmbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896801.6723359-578-202301564311480/AnsiballZ_copy.py'
Dec 05 01:06:42 compute-0 sudo[155020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:42 compute-0 python3.9[155022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896801.6723359-578-202301564311480/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:06:42 compute-0 sudo[155020]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:43 compute-0 sudo[155172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfrxgfjguvrzrhuczgqtohwqzgrcjmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896803.1698523-595-264405987494950/AnsiballZ_container_config_data.py'
Dec 05 01:06:43 compute-0 sudo[155172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:43 compute-0 python3.9[155174]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 05 01:06:43 compute-0 sudo[155172]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:44 compute-0 sudo[155324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnexnlvjzjnkgymhajbuscntyqnntdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896803.9929192-604-49279477576607/AnsiballZ_container_config_hash.py'
Dec 05 01:06:44 compute-0 sudo[155324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:44 compute-0 python3.9[155326]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:06:44 compute-0 sudo[155324]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:45 compute-0 sudo[155476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqaedibkdyvyvwwgkjygsjgbtdrysqr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896805.0447285-614-29768220178402/AnsiballZ_edpm_container_manage.py'
Dec 05 01:06:45 compute-0 sudo[155476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:45 compute-0 python3[155478]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:06:46 compute-0 podman[155491]: 2025-12-05 01:06:46.992875053 +0000 UTC m=+1.219950093 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 05 01:06:47 compute-0 podman[155587]: 2025-12-05 01:06:47.13104152 +0000 UTC m=+0.043271064 container create 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec 05 01:06:47 compute-0 podman[155587]: 2025-12-05 01:06:47.106728891 +0000 UTC m=+0.018958395 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 05 01:06:47 compute-0 python3[155478]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
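The --web.config.file flag above points node_exporter at the file mounted from /var/lib/openstack/config/telemetry/node_exporter.yaml. In the Prometheus exporter-toolkit web-config format, a minimal version serving TLS from the mounted /etc/node_exporter/tls directory would be (the certificate file names are assumptions; they are not shown in the log):

    tls_server_config:
        cert_file: /etc/node_exporter/tls/tls.crt   # assumed file name
        key_file: /etc/node_exporter/tls/tls.key    # assumed file name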
Dec 05 01:06:47 compute-0 sudo[155476]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:47 compute-0 sudo[155775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axcmymmjuzsnqxaxacejsxvkgazgengq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896807.4990022-622-185972368890967/AnsiballZ_stat.py'
Dec 05 01:06:47 compute-0 sudo[155775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:47 compute-0 python3.9[155777]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:06:47 compute-0 sudo[155775]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:48 compute-0 sudo[155929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubxgymjeprezmejihwwkygzopviokata ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896808.2853017-631-15756275477031/AnsiballZ_file.py'
Dec 05 01:06:48 compute-0 sudo[155929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:48 compute-0 python3.9[155931]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:48 compute-0 sudo[155929]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:49 compute-0 sudo[156080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pedpozapjqicsdxormdwwdrtwczffbfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/AnsiballZ_copy.py'
Dec 05 01:06:49 compute-0 sudo[156080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:49 compute-0 python3.9[156082]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:06:49 compute-0 sudo[156080]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:49 compute-0 sudo[156156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbxrmnajpiupteerncizbwffyudqfnlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/AnsiballZ_systemd.py'
Dec 05 01:06:49 compute-0 sudo[156156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:50 compute-0 python3.9[156158]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:06:50 compute-0 systemd[1]: Reloading.
Dec 05 01:06:50 compute-0 systemd-rc-local-generator[156187]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:06:50 compute-0 systemd-sysv-generator[156191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:06:50 compute-0 sudo[156156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:50 compute-0 sudo[156268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonvvsaikpzbgfkewbmkwlcmgzpcydjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/AnsiballZ_systemd.py'
Dec 05 01:06:50 compute-0 sudo[156268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:51 compute-0 python3.9[156270]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
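Taken together, the four tasks logged since 01:06:48 remove the unit's stale .requires directory, install the rendered unit file, reload systemd, and enable-and-restart the service. A condensed Python sketch of the equivalent sequence (paths taken from the log; the Ansible modules do this with far more checking):

    import shutil, subprocess
    unit = "edpm_node_exporter.service"
    # ansible-file state=absent
    shutil.rmtree("/etc/systemd/system/edpm_node_exporter.requires",
                  ignore_errors=True)
    # ansible-copy: temp source path as shown in the copy task above
    shutil.copy("/home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/source",
                f"/etc/systemd/system/{unit}")
    subprocess.run(["systemctl", "daemon-reload"], check=True)  # daemon_reload=True
    subprocess.run(["systemctl", "enable", unit], check=True)   # enabled=True
    subprocess.run(["systemctl", "restart", unit], check=True)  # state=restarted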
Dec 05 01:06:51 compute-0 systemd[1]: Reloading.
Dec 05 01:06:51 compute-0 systemd-rc-local-generator[156297]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:06:51 compute-0 systemd-sysv-generator[156301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:06:51 compute-0 systemd[1]: Starting node_exporter container...
Dec 05 01:06:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec 05 01:06:51 compute-0 podman[156311]: 2025-12-05 01:06:51.811723054 +0000 UTC m=+0.156042898 container init 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
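The two flags parsed above fully determine which units the systemd collector exports: a unit is scraped only if it matches the include pattern and does not match the exclude pattern. A quick check of that filter against sample unit names (plain Python re as an illustration; node_exporter itself anchors both patterns the same way in its Go code):

    import re
    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    exclude = re.compile(r".+\.(automount|device|mount|scope|slice)")

    def selected(unit):
        # include must match, exclude must not
        return bool(include.fullmatch(unit)) and not exclude.fullmatch(unit)

    for u in ("edpm_node_exporter.service", "sshd.service", "ovs-vswitchd.service"):
        print(u, selected(u))   # True, False, True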
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.828Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 01:06:51 compute-0 node_exporter[156326]: ts=2025-12-05T01:06:51.828Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
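With web.config.file pointing at the mounted node_exporter.yaml, the exporter serves metrics over TLS on :9100, so any scrape must speak HTTPS. A minimal probe, assuming the CA bundle lives in the cert volume mounted above (the ca.crt filename is an assumption, and the web config may additionally require a client certificate):

    import ssl, urllib.request
    # CA filename is an assumption about the mounted cert directory
    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")
    ctx.check_hostname = False  # the cert's SAN may not cover this hostname
    with urllib.request.urlopen("https://compute-0:9100/metrics",
                                context=ctx) as resp:
        print(resp.status)
        print(resp.read(300).decode())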
Dec 05 01:06:51 compute-0 podman[156311]: 2025-12-05 01:06:51.845001669 +0000 UTC m=+0.189321453 container start 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:06:51 compute-0 podman[156311]: node_exporter
Dec 05 01:06:51 compute-0 systemd[1]: Started node_exporter container.
Dec 05 01:06:51 compute-0 sudo[156268]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:51 compute-0 podman[156335]: 2025-12-05 01:06:51.909812935 +0000 UTC m=+0.050961320 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:06:52 compute-0 sudo[156508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzzflrbpfmmrzzppnobguosreuzapwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896812.0628934-655-42831300674951/AnsiballZ_systemd.py'
Dec 05 01:06:52 compute-0 sudo[156508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:52 compute-0 python3.9[156510]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:06:53 compute-0 systemd[1]: Stopping node_exporter container...
Dec 05 01:06:53 compute-0 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:06:53 compute-0 podman[156514]: 2025-12-05 01:06:53.859101825 +0000 UTC m=+0.053616573 container died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:06:53 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-631ee002f4e24dfa.timer: Deactivated successfully.
Dec 05 01:06:53 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec 05 01:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-userdata-shm.mount: Deactivated successfully.
Dec 05 01:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717-merged.mount: Deactivated successfully.
Dec 05 01:06:54 compute-0 podman[156514]: 2025-12-05 01:06:54.031545148 +0000 UTC m=+0.226059886 container cleanup 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:06:54 compute-0 podman[156514]: node_exporter
Dec 05 01:06:54 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:06:54 compute-0 podman[156546]: node_exporter
Dec 05 01:06:54 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 05 01:06:54 compute-0 systemd[1]: Stopped node_exporter container.
Dec 05 01:06:54 compute-0 systemd[1]: Starting node_exporter container...
Dec 05 01:06:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:06:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec 05 01:06:54 compute-0 podman[156559]: 2025-12-05 01:06:54.269981513 +0000 UTC m=+0.129353766 container init 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.292Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 01:06:54 compute-0 node_exporter[156575]: ts=2025-12-05T01:06:54.292Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 05 01:06:54 compute-0 podman[156559]: 2025-12-05 01:06:54.304240128 +0000 UTC m=+0.163612401 container start 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:06:54 compute-0 podman[156559]: node_exporter
Dec 05 01:06:54 compute-0 systemd[1]: Started node_exporter container.
Dec 05 01:06:54 compute-0 sudo[156508]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:54 compute-0 podman[156584]: 2025-12-05 01:06:54.403114774 +0000 UTC m=+0.078818579 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:06:54 compute-0 sudo[156756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinfvqfcskvqskldxvobvnaijktsafee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896814.5393744-663-258535666563393/AnsiballZ_stat.py'
Dec 05 01:06:54 compute-0 sudo[156756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:55 compute-0 python3.9[156758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:06:55 compute-0 sudo[156756]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:55 compute-0 sudo[156879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwokbjpbqymefqdmxnlanbyfxidgaoja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896814.5393744-663-258535666563393/AnsiballZ_copy.py'
Dec 05 01:06:55 compute-0 sudo[156879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:55 compute-0 python3.9[156881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896814.5393744-663-258535666563393/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:06:55 compute-0 sudo[156879]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:56 compute-0 sudo[157031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxodixeyjskoqyroaxjiplhrixkxxwyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896816.2330258-680-172208711916875/AnsiballZ_container_config_data.py'
Dec 05 01:06:56 compute-0 sudo[157031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:56 compute-0 python3.9[157033]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 05 01:06:56 compute-0 sudo[157031]: pam_unix(sudo:session): session closed for user root
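The container_config_data parameters above imply the module scans config_path for files matching config_pattern and parses each as a JSON container definition; the config_data dicts logged elsewhere in this journal are the parsed result. A sketch of that load step under those assumptions (not the module's actual source):

    import glob, json, os
    def load_container_configs(config_path, config_pattern):
        # one JSON definition per matching file, keyed by container name
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as fh:
                configs[name] = json.load(fh)
        return configs

    print(load_container_configs("/var/lib/openstack/config/telemetry",
                                 "podman_exporter.json"))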
Dec 05 01:06:57 compute-0 sudo[157183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypakprduhwmuasxeiyqijyvakcaxlblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896817.0360837-689-131608057861375/AnsiballZ_container_config_hash.py'
Dec 05 01:06:57 compute-0 sudo[157183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:57 compute-0 python3.9[157185]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:06:57 compute-0 sudo[157183]: pam_unix(sudo:session): session closed for user root
Dec 05 01:06:58 compute-0 sudo[157335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwkxtbtaqedqcnybrgliaprldanhyzs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896817.9228761-699-129567208637079/AnsiballZ_edpm_container_manage.py'
Dec 05 01:06:58 compute-0 sudo[157335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:06:58 compute-0 python3[157337]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:07:00 compute-0 podman[157350]: 2025-12-05 01:07:00.068777021 +0000 UTC m=+1.387408642 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 05 01:07:00 compute-0 podman[157447]: 2025-12-05 01:07:00.181975248 +0000 UTC m=+0.040117087 container create 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Dec 05 01:07:00 compute-0 podman[157447]: 2025-12-05 01:07:00.160420834 +0000 UTC m=+0.018562673 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 05 01:07:00 compute-0 python3[157337]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec 05 01:07:00 compute-0 sudo[157335]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:00 compute-0 sudo[157636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baycpqkbmgeicycfbxpdfphhdpwrfsum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896820.4927847-707-10374644784688/AnsiballZ_stat.py'
Dec 05 01:07:00 compute-0 sudo[157636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:01 compute-0 python3.9[157638]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:07:01 compute-0 sudo[157636]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:01 compute-0 sudo[157790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itesojtbwdinkmxxektothrusugznasy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896821.3681717-716-124195892548250/AnsiballZ_file.py'
Dec 05 01:07:01 compute-0 sudo[157790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:01 compute-0 python3.9[157792]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:01 compute-0 sudo[157790]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:02 compute-0 sudo[157941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obzfocrkiidszttlsrzzpdioappmqozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896822.005405-716-253758520666698/AnsiballZ_copy.py'
Dec 05 01:07:02 compute-0 sudo[157941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:02 compute-0 python3.9[157943]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896822.005405-716-253758520666698/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:02 compute-0 sudo[157941]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:03 compute-0 sudo[158017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiwrrrafapdyxcgmonldrrbstbqujlyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896822.005405-716-253758520666698/AnsiballZ_systemd.py'
Dec 05 01:07:03 compute-0 sudo[158017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:03 compute-0 python3.9[158019]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:07:03 compute-0 systemd[1]: Reloading.
Dec 05 01:07:03 compute-0 systemd-rc-local-generator[158040]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:07:03 compute-0 systemd-sysv-generator[158045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:07:03 compute-0 sudo[158017]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:04 compute-0 sudo[158128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fggcbqshyjvrazizqtqtybapvxetvamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896822.005405-716-253758520666698/AnsiballZ_systemd.py'
Dec 05 01:07:04 compute-0 sudo[158128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:04 compute-0 python3.9[158130]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:07:04 compute-0 systemd[1]: Reloading.
Dec 05 01:07:04 compute-0 systemd-sysv-generator[158165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:07:04 compute-0 systemd-rc-local-generator[158160]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:07:04 compute-0 systemd[1]: Starting podman_exporter container...
Dec 05 01:07:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:04 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec 05 01:07:04 compute-0 podman[158171]: 2025-12-05 01:07:04.943193373 +0000 UTC m=+0.161237708 container init 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:07:04 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 01:07:04 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 01:07:04 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 01:07:04 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=handler.go:105 level=info collector=container
Dec 05 01:07:04 compute-0 podman[158171]: 2025-12-05 01:07:04.985229478 +0000 UTC m=+0.203273763 container start 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:07:04 compute-0 podman[158171]: podman_exporter
Dec 05 01:07:04 compute-0 systemd[1]: Starting Podman API Service...
Dec 05 01:07:04 compute-0 systemd[1]: Started Podman API Service.
Dec 05 01:07:05 compute-0 systemd[1]: Started podman_exporter container.
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Setting parallel job count to 25"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Using sqlite as database backend"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 05 01:07:05 compute-0 sudo[158128]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:05 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 01:07:05 compute-0 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:07:05 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
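The access-log lines above show podman_exporter talking to the libpod API over the unix socket the API service announced at 01:07:05. The same _ping request can be reproduced with nothing but the Python standard library (run as root on the host; socket path and URL are taken from the log):

    import socket
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/podman/podman.sock")
    s.sendall(b"GET /v4.9.3/libpod/_ping HTTP/1.1\r\n"
              b"Host: d\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    s.close()
    print(reply.decode(errors="replace"))  # expect an HTTP/1.1 200 with body "OK"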
Dec 05 01:07:05 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:05.082Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 05 01:07:05 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:05.083Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 05 01:07:05 compute-0 podman_exporter[158186]: ts=2025-12-05T01:07:05.084Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 05 01:07:05 compute-0 podman[158196]: 2025-12-05 01:07:05.090434329 +0000 UTC m=+0.084970589 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:07:05 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:07:05 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.service: Failed with result 'exit-code'.
Dec 05 01:07:05 compute-0 sudo[158382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhrkjwhmlcpijdakayoxjghpbbkjlcjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896825.3417532-740-8740051902720/AnsiballZ_systemd.py'
Dec 05 01:07:05 compute-0 sudo[158382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:05 compute-0 python3.9[158384]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:07:06 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 05 01:07:06 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 05 01:07:06 compute-0 systemd[1]: libpod-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:07:06 compute-0 podman[158388]: 2025-12-05 01:07:06.0996875 +0000 UTC m=+0.066652134 container died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:07:06 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.timer: Deactivated successfully.
Dec 05 01:07:06 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec 05 01:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-userdata-shm.mount: Deactivated successfully.
Dec 05 01:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a-merged.mount: Deactivated successfully.
Dec 05 01:07:06 compute-0 podman[158388]: 2025-12-05 01:07:06.384451532 +0000 UTC m=+0.351416136 container cleanup 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:07:06 compute-0 podman[158388]: podman_exporter
Dec 05 01:07:06 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:07:06 compute-0 podman[158417]: podman_exporter
Dec 05 01:07:06 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 05 01:07:06 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 05 01:07:06 compute-0 systemd[1]: Starting podman_exporter container...
Dec 05 01:07:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec 05 01:07:06 compute-0 podman[158430]: 2025-12-05 01:07:06.688245871 +0000 UTC m=+0.172842186 container init 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=handler.go:105 level=info collector=container
Dec 05 01:07:06 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:06 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 01:07:06 compute-0 podman[158197]: time="2025-12-05T01:07:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:07:06 compute-0 podman[158430]: 2025-12-05 01:07:06.730532924 +0000 UTC m=+0.215129179 container start 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:07:06 compute-0 podman[158430]: podman_exporter
Dec 05 01:07:06 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.740Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.741Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 05 01:07:06 compute-0 podman_exporter[158445]: ts=2025-12-05T01:07:06.741Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 05 01:07:06 compute-0 systemd[1]: Started podman_exporter container.
Dec 05 01:07:06 compute-0 sudo[158382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:06 compute-0 podman[158454]: 2025-12-05 01:07:06.850486209 +0000 UTC m=+0.103791789 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:07:07 compute-0 sudo[158631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvrpzwanqygecsvemnotsolqqtgspkkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896827.0250788-748-227243265095570/AnsiballZ_stat.py'
Dec 05 01:07:07 compute-0 sudo[158631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:07 compute-0 python3.9[158633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:07 compute-0 sudo[158631]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:07 compute-0 sudo[158754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfrscwjozkknjltqtoivoyoejliuejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896827.0250788-748-227243265095570/AnsiballZ_copy.py'
Dec 05 01:07:07 compute-0 sudo[158754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:08 compute-0 python3.9[158756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896827.0250788-748-227243265095570/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:07:08 compute-0 sudo[158754]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:08 compute-0 sudo[158906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqjmcqjkymffzraxuhtwzstsygszkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896828.5995424-765-48686440042334/AnsiballZ_container_config_data.py'
Dec 05 01:07:08 compute-0 sudo[158906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:09 compute-0 python3.9[158908]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 05 01:07:09 compute-0 sudo[158906]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:09 compute-0 sudo[159058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnvpgbcfcoictcvkxjxfsiitnqmzlfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896829.4911785-774-49029000088732/AnsiballZ_container_config_hash.py'
Dec 05 01:07:09 compute-0 sudo[159058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:10 compute-0 python3.9[159060]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:07:10 compute-0 sudo[159058]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:10 compute-0 sudo[159210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxfjwvvlnhqamvzesncrmyoqajqiyge ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896830.440561-784-82024301082028/AnsiballZ_edpm_container_manage.py'
Dec 05 01:07:10 compute-0 sudo[159210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:11 compute-0 python3[159212]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:07:11 compute-0 podman[159238]: 2025-12-05 01:07:11.669796943 +0000 UTC m=+0.076790517 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125)
Dec 05 01:07:11 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:07:11 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed with result 'exit-code'.
Dec 05 01:07:11 compute-0 podman[159239]: 2025-12-05 01:07:11.7326963 +0000 UTC m=+0.139104016 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 01:07:13 compute-0 podman[159225]: 2025-12-05 01:07:13.630138903 +0000 UTC m=+2.459739805 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 05 01:07:13 compute-0 podman[159367]: 2025-12-05 01:07:13.76639071 +0000 UTC m=+0.048892547 container create 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Dec 05 01:07:13 compute-0 podman[159367]: 2025-12-05 01:07:13.73909883 +0000 UTC m=+0.021600667 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 05 01:07:13 compute-0 python3[159212]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 05 01:07:13 compute-0 sudo[159210]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:14 compute-0 sudo[159557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jebrghxoifqpwckyhxowcgkwsaegkdyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896834.0890121-792-272160255215921/AnsiballZ_stat.py'
Dec 05 01:07:14 compute-0 sudo[159557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:14 compute-0 python3.9[159559]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:07:14 compute-0 sudo[159557]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:15 compute-0 sudo[159711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqqaexuiuiiotrocmtnbipkjfinafwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896834.8295097-801-11197473010930/AnsiballZ_file.py'
Dec 05 01:07:15 compute-0 sudo[159711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:15 compute-0 python3.9[159713]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:15 compute-0 sudo[159711]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:15 compute-0 sudo[159862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfbbwhhlgbrxxuoemwfcineywqtxxzdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896835.4069624-801-173414154476593/AnsiballZ_copy.py'
Dec 05 01:07:15 compute-0 sudo[159862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:16 compute-0 python3.9[159864]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896835.4069624-801-173414154476593/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:16 compute-0 sudo[159862]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:16 compute-0 sudo[159938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdaaizfqslriquvibjfrbtklubyuxkee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896835.4069624-801-173414154476593/AnsiballZ_systemd.py'
Dec 05 01:07:16 compute-0 sudo[159938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:16 compute-0 python3.9[159940]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:07:16 compute-0 systemd[1]: Reloading.
Dec 05 01:07:16 compute-0 systemd-rc-local-generator[159967]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:07:16 compute-0 systemd-sysv-generator[159971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:07:17 compute-0 sudo[159938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:17 compute-0 sudo[160049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuflefycnqgfkiqarreqfwslverqwdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896835.4069624-801-173414154476593/AnsiballZ_systemd.py'
Dec 05 01:07:17 compute-0 sudo[160049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:17 compute-0 python3.9[160051]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:07:17 compute-0 systemd[1]: Reloading.
Dec 05 01:07:17 compute-0 systemd-rc-local-generator[160079]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:07:17 compute-0 systemd-sysv-generator[160083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:07:17 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 05 01:07:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec 05 01:07:18 compute-0 podman[160091]: 2025-12-05 01:07:18.115495829 +0000 UTC m=+0.126020983 container init 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *bridge.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *coverage.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *datapath.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *iface.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *memory.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovnnorthd.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovn.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovsdbserver.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *pmd_perf.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *pmd_rxq.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *vswitch.Collector
Dec 05 01:07:18 compute-0 openstack_network_exporter[160106]: NOTICE  01:07:18 main.go:76: listening on https://:9105/metrics
Dec 05 01:07:18 compute-0 podman[160091]: 2025-12-05 01:07:18.14666715 +0000 UTC m=+0.157192284 container start 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:07:18 compute-0 podman[160091]: openstack_network_exporter
Dec 05 01:07:18 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 05 01:07:18 compute-0 sudo[160049]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:18 compute-0 podman[160117]: 2025-12-05 01:07:18.298513587 +0000 UTC m=+0.144173822 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 05 01:07:18 compute-0 sudo[160288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezqeoigbtididdjehoyvasrosapklpci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896838.4997134-825-200225326254094/AnsiballZ_systemd.py'
Dec 05 01:07:18 compute-0 sudo[160288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:19 compute-0 python3.9[160290]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:07:19 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 05 01:07:19 compute-0 systemd[1]: libpod-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:07:19 compute-0 podman[160294]: 2025-12-05 01:07:19.27081464 +0000 UTC m=+0.066825409 container died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:07:19 compute-0 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-2a2a4f50eb063d7d.timer: Deactivated successfully.
Dec 05 01:07:19 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec 05 01:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-userdata-shm.mount: Deactivated successfully.
Dec 05 01:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e-merged.mount: Deactivated successfully.
Dec 05 01:07:20 compute-0 podman[160294]: 2025-12-05 01:07:20.114278425 +0000 UTC m=+0.910289234 container cleanup 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 05 01:07:20 compute-0 podman[160294]: openstack_network_exporter
Dec 05 01:07:20 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:07:20 compute-0 podman[160321]: openstack_network_exporter
Dec 05 01:07:20 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 05 01:07:20 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 05 01:07:20 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 05 01:07:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:07:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec 05 01:07:20 compute-0 podman[160334]: 2025-12-05 01:07:20.339507033 +0000 UTC m=+0.136427203 container init 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *bridge.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *coverage.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *datapath.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *iface.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *memory.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovnnorthd.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovn.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovsdbserver.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *pmd_perf.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *pmd_rxq.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *vswitch.Collector
Dec 05 01:07:20 compute-0 openstack_network_exporter[160350]: NOTICE  01:07:20 main.go:76: listening on https://:9105/metrics
Dec 05 01:07:20 compute-0 podman[160334]: 2025-12-05 01:07:20.382926131 +0000 UTC m=+0.179846361 container start 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=)
Dec 05 01:07:20 compute-0 podman[160334]: openstack_network_exporter
Dec 05 01:07:20 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 05 01:07:20 compute-0 sudo[160288]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:20 compute-0 podman[160360]: 2025-12-05 01:07:20.504248868 +0000 UTC m=+0.097595587 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_id=edpm, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 05 01:07:21 compute-0 sudo[160527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbuupjkbevuhzmsveawtnlvpbrqxidmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896840.6818194-833-146728253659587/AnsiballZ_find.py'
Dec 05 01:07:21 compute-0 sudo[160527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:21 compute-0 python3.9[160529]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:07:21 compute-0 sudo[160527]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:22 compute-0 sudo[160679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncbcruuywblaqgtpwgnkmepfucaxutf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896841.758096-843-268933528766002/AnsiballZ_podman_container_info.py'
Dec 05 01:07:22 compute-0 sudo[160679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:22 compute-0 python3.9[160681]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 05 01:07:22 compute-0 sudo[160679]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:23 compute-0 sudo[160844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uecvyegscaslxmyzcjdpzixvwafgojyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896842.8546293-851-104636406194061/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:23 compute-0 sudo[160844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:23 compute-0 python3.9[160846]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:23 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:07:23 compute-0 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:07:23 compute-0 podman[160847]: 2025-12-05 01:07:23.745978433 +0000 UTC m=+0.113842528 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 01:07:23 compute-0 podman[160868]: 2025-12-05 01:07:23.850168173 +0000 UTC m=+0.080552242 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:07:23 compute-0 podman[160847]: 2025-12-05 01:07:23.920312654 +0000 UTC m=+0.288176719 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:07:23 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:07:23 compute-0 sudo[160844]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:24 compute-0 sudo[161037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfkjnobxgylsckoknrmhnlouzcbrdfls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896844.3635182-859-52741590873096/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:24 compute-0 sudo[161037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:24 compute-0 podman[161002]: 2025-12-05 01:07:24.626539559 +0000 UTC m=+0.049167585 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:07:24 compute-0 python3.9[161053]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:24 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:07:24 compute-0 podman[161054]: 2025-12-05 01:07:24.903385748 +0000 UTC m=+0.086141265 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:07:24 compute-0 podman[161054]: 2025-12-05 01:07:24.936095786 +0000 UTC m=+0.118851283 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:07:24 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:07:24 compute-0 sudo[161037]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:25 compute-0 sudo[161235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffpchcxmwzcdxrrujbyhsrqrszeoossz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896845.156735-867-154979370517488/AnsiballZ_file.py'
Dec 05 01:07:25 compute-0 sudo[161235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:25 compute-0 python3.9[161237]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:25 compute-0 sudo[161235]: pam_unix(sudo:session): session closed for user root
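The block above shows the per-container pattern this play repeats: podman_container_info on the container, podman_container_exec running id -u and id -g inside it, then a recursive ansible.builtin.file task that chowns the healthcheck directory on the host to the reported uid/gid (0:0 for ovn_controller, which runs as root). A condensed sketch of the same steps, assuming podman on PATH and root privileges:

    import subprocess

    def container_id(name, flag):
        # Equivalent of podman_container_exec running "id -u" / "id -g".
        out = subprocess.run(["podman", "exec", name, "id", flag],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    name = "ovn_controller"
    uid, gid = container_id(name, "-u"), container_id(name, "-g")
    hc_dir = f"/var/lib/openstack/healthchecks/{name}"
    # Equivalent of the file task: owner/group/mode with recurse=True.
    subprocess.run(["chown", "-R", f"{uid}:{gid}", hc_dir], check=True)
    subprocess.run(["chmod", "-R", "0700", hc_dir], check=True)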
Dec 05 01:07:26 compute-0 sudo[161387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eirqqljgmygmmontfvqfkaoeuuqtzkod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896845.9585073-876-204085416286696/AnsiballZ_podman_container_info.py'
Dec 05 01:07:26 compute-0 sudo[161387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:26 compute-0 python3.9[161389]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 05 01:07:26 compute-0 sudo[161387]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:27 compute-0 sudo[161552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-husszvvspgaqnvhmqyeymosrjczgaixb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896846.836057-884-223773749413626/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:27 compute-0 sudo[161552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:27 compute-0 python3.9[161554]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:27 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:07:27 compute-0 podman[161555]: 2025-12-05 01:07:27.456910896 +0000 UTC m=+0.105542124 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:07:27 compute-0 podman[161555]: 2025-12-05 01:07:27.465155906 +0000 UTC m=+0.113787104 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:07:27 compute-0 sudo[161552]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:27 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:07:28 compute-0 sudo[161737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpzzduelzhfmvwujrwqmhlgkypqqbaff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896847.7111104-892-86459524724751/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:28 compute-0 sudo[161737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:28 compute-0 python3.9[161739]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:28 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:07:28 compute-0 podman[161740]: 2025-12-05 01:07:28.407682983 +0000 UTC m=+0.109465223 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:07:28 compute-0 podman[161740]: 2025-12-05 01:07:28.448296515 +0000 UTC m=+0.150078655 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:07:28 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:07:28 compute-0 sudo[161737]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:29 compute-0 sudo[161920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezhhocraxcfqpuqevzpszqsbuffcjemk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896848.6998422-900-267661438978450/AnsiballZ_file.py'
Dec 05 01:07:29 compute-0 sudo[161920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:29 compute-0 python3.9[161922]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:29 compute-0 sudo[161920]: pam_unix(sudo:session): session closed for user root
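For ceilometer_agent_compute the same pattern lands on uid/gid 42405: the container runs as its 'ceilometer' user, so that is simply what the id calls inside it reported (Kolla-built images conventionally use the 42xxx uid range), not a host account. A pure-stdlib sketch of the recursive file task applied here:

    import os

    uid = gid = 42405  # values reported by the id -u / id -g execs above
    top = "/var/lib/openstack/healthchecks/ceilometer_agent_compute"
    # recurse=True in the file task: apply owner/group/mode to the whole tree.
    for root, _dirs, files in os.walk(top):
        for p in [root] + [os.path.join(root, f) for f in files]:
            os.chown(p, uid, gid)
            os.chmod(p, 0o700)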
Dec 05 01:07:30 compute-0 sudo[162072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qofnfjvjzperphhyoxikitmhgrprixzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896849.7749949-909-224131674999387/AnsiballZ_podman_container_info.py'
Dec 05 01:07:30 compute-0 sudo[162072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:30 compute-0 python3.9[162074]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 05 01:07:30 compute-0 sudo[162072]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:31 compute-0 sudo[162237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksixejhqpfkqxymvkgsfdzjeronoqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896850.7427988-917-143730189681606/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:31 compute-0 sudo[162237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:31 compute-0 python3.9[162239]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:31 compute-0 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec 05 01:07:31 compute-0 podman[162240]: 2025-12-05 01:07:31.345524141 +0000 UTC m=+0.078613643 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:07:31 compute-0 podman[162259]: 2025-12-05 01:07:31.408107655 +0000 UTC m=+0.051594139 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:07:31 compute-0 podman[162240]: 2025-12-05 01:07:31.414990106 +0000 UTC m=+0.148079578 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:07:31 compute-0 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:07:31 compute-0 sudo[162237]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:31 compute-0 sudo[162421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgndmxzkggoebijvyxckoxcakyszgadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896851.6751153-925-117524968071871/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:31 compute-0 sudo[162421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:32 compute-0 python3.9[162423]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:32 compute-0 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec 05 01:07:32 compute-0 podman[162424]: 2025-12-05 01:07:32.297544383 +0000 UTC m=+0.099197437 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:07:32 compute-0 podman[162424]: 2025-12-05 01:07:32.332363203 +0000 UTC m=+0.134016277 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:07:32 compute-0 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:07:32 compute-0 sudo[162421]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:32 compute-0 sudo[162601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykdiiepnqbteslnikxdykbwuztzkbyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896852.6351342-933-169585319665606/AnsiballZ_file.py'
Dec 05 01:07:32 compute-0 sudo[162601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:33 compute-0 python3.9[162603]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:33 compute-0 sudo[162601]: pam_unix(sudo:session): session closed for user root
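node_exporter above runs with host networking on port 9100 and a --web.config.file, so its /metrics endpoint is TLS-protected by the certs mounted from /var/lib/openstack/certs/telemetry/default. A quick reachability probe might look like the following sketch; it assumes the default CA store can verify the exporter's certificate for localhost, otherwise the handshake (not the exporter) is what fails:

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()  # swap in the deployment CA if needed
    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx, timeout=5) as resp:
        body = resp.read().decode()
    # --collector.systemd is enabled above, so systemd unit metrics appear.
    print([l for l in body.splitlines() if l.startswith("node_systemd")][:5])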
Dec 05 01:07:33 compute-0 sudo[162753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywketlxkkhfkmydipbxvkfxaiufmveme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896853.346459-942-206766134149739/AnsiballZ_podman_container_info.py'
Dec 05 01:07:33 compute-0 sudo[162753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:34 compute-0 python3.9[162755]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 05 01:07:34 compute-0 sudo[162753]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:34 compute-0 sudo[162918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbeuavhiswkyrsjxhnaknyjlbdkdirib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896854.2720287-950-164976252193964/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:34 compute-0 sudo[162918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:34 compute-0 python3.9[162920]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:34 compute-0 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec 05 01:07:34 compute-0 podman[162921]: 2025-12-05 01:07:34.8060262 +0000 UTC m=+0.067226725 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:07:34 compute-0 podman[162921]: 2025-12-05 01:07:34.839168864 +0000 UTC m=+0.100369359 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:07:34 compute-0 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:07:34 compute-0 sudo[162918]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:35 compute-0 sudo[163100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlflxcssaedueedohnejpenffwvehfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896855.0708177-958-233294903229870/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:35 compute-0 sudo[163100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:35 compute-0 python3.9[163102]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:35 compute-0 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec 05 01:07:35 compute-0 podman[163103]: 2025-12-05 01:07:35.608040419 +0000 UTC m=+0.054480410 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:07:35 compute-0 podman[163122]: 2025-12-05 01:07:35.667148937 +0000 UTC m=+0.048665807 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:07:35 compute-0 podman[163103]: 2025-12-05 01:07:35.673079863 +0000 UTC m=+0.119519664 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:07:35 compute-0 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:07:35 compute-0 sudo[163100]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:36 compute-0 sudo[163284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjesfocqkdiyiclzuefepjskskpygpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896855.86274-966-181841719671039/AnsiballZ_file.py'
Dec 05 01:07:36 compute-0 sudo[163284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:36 compute-0 python3.9[163286]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:36 compute-0 sudo[163284]: pam_unix(sudo:session): session closed for user root
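podman_exporter reaches the host's podman through CONTAINER_HOST=unix:///run/podman/podman.sock, the libpod REST socket mounted into the container above. A minimal liveness check of that socket using only the stdlib, as a sketch (the _ping endpoint is part of podman's compat API and should answer OK when the service is up):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # stdlib HTTPConnection pointed at a unix socket instead of TCP.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' when the socket is serving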
Dec 05 01:07:36 compute-0 sudo[163436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcupydhdewxaavjjitapyrpjigrqsrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896856.5379102-975-129625257253511/AnsiballZ_podman_container_info.py'
Dec 05 01:07:36 compute-0 sudo[163436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:37 compute-0 python3.9[163438]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 05 01:07:37 compute-0 sudo[163436]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:37 compute-0 sudo[163613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icehbbggwomyozqnqafrfabtvrgjysle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896857.2860487-983-267007163343227/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:37 compute-0 sudo[163613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:37 compute-0 podman[163575]: 2025-12-05 01:07:37.594802881 +0000 UTC m=+0.062883184 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:07:37 compute-0 python3.9[163624]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:37 compute-0 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec 05 01:07:37 compute-0 podman[163627]: 2025-12-05 01:07:37.894094645 +0000 UTC m=+0.087951673 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec 05 01:07:37 compute-0 podman[163627]: 2025-12-05 01:07:37.923404722 +0000 UTC m=+0.117261720 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec 05 01:07:37 compute-0 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:07:37 compute-0 sudo[163613]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:38 compute-0 sudo[163808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkitwawnskvqumuxbnuxtwndnlnanjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896858.162863-991-102311795698364/AnsiballZ_podman_container_exec.py'
Dec 05 01:07:38 compute-0 sudo[163808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:38 compute-0 python3.9[163810]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:07:38 compute-0 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec 05 01:07:38 compute-0 podman[163811]: 2025-12-05 01:07:38.789364255 +0000 UTC m=+0.080809274 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 05 01:07:38 compute-0 podman[163811]: 2025-12-05 01:07:38.821270044 +0000 UTC m=+0.112715033 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:07:38 compute-0 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:07:38 compute-0 sudo[163808]: pam_unix(sudo:session): session closed for user root
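[annotation] The four entries above trace one complete podman exec round-trip: systemd starts a transient conmon scope, the exec session runs `id -g` inside the openstack_network_exporter container, the exec dies, and the scope is deactivated. Outside Ansible the same probe is roughly the following (a minimal sketch; only the container name and command come from the log):

    # run a one-off command in the running container, as the
    # containers.podman.podman_container_exec task above does
    podman exec openstack_network_exporter id -g   # prints the gid of the container's user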
Dec 05 01:07:39 compute-0 sudo[163994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxfueyfpqutqclobwvgluyxugzsiqqvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896859.066242-999-158876424039591/AnsiballZ_file.py'
Dec 05 01:07:39 compute-0 sudo[163994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:39 compute-0 python3.9[163996]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:39 compute-0 sudo[163994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:40 compute-0 sudo[164146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayyupzngnbsxpwilxrvkhgzsdqnwrpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896859.796788-1008-26606116178856/AnsiballZ_file.py'
Dec 05 01:07:40 compute-0 sudo[164146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:40 compute-0 python3.9[164148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:40 compute-0 sudo[164146]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:40 compute-0 sudo[164298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykkvhmwquovzpzwnofzhlmnvjlaeqyzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896860.4550586-1016-203472479103166/AnsiballZ_stat.py'
Dec 05 01:07:40 compute-0 sudo[164298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:40 compute-0 python3.9[164300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:40 compute-0 sudo[164298]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:41 compute-0 sudo[164421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukafsmjtvbmhlnsxnbbnrbmkytguifsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896860.4550586-1016-203472479103166/AnsiballZ_copy.py'
Dec 05 01:07:41 compute-0 sudo[164421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:41 compute-0 python3.9[164423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896860.4550586-1016-203472479103166/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:41 compute-0 sudo[164421]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:42 compute-0 sudo[164593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhsgvfccmayqjopaokohpolutuvkguma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896861.7037697-1032-18837480568654/AnsiballZ_file.py'
Dec 05 01:07:42 compute-0 sudo[164593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:42 compute-0 podman[164547]: 2025-12-05 01:07:42.018211316 +0000 UTC m=+0.059582402 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 05 01:07:42 compute-0 podman[164548]: 2025-12-05 01:07:42.074108065 +0000 UTC m=+0.113095024 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:07:42 compute-0 python3.9[164612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:42 compute-0 sudo[164593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:42 compute-0 sudo[164772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjhqopikltcljruxbovblqmdkyylrmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896862.3563337-1040-128949938587354/AnsiballZ_stat.py'
Dec 05 01:07:42 compute-0 sudo[164772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:42 compute-0 python3.9[164774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:42 compute-0 sudo[164772]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:42 compute-0 sudo[164850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehwthsjsqvclxidnchlwuyrzqjbijqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896862.3563337-1040-128949938587354/AnsiballZ_file.py'
Dec 05 01:07:42 compute-0 sudo[164850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:43 compute-0 python3.9[164852]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:43 compute-0 sudo[164850]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:43 compute-0 sudo[165002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwnolbcmwxnoyoyftdzuhfiiatdlbrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896863.3935583-1052-90539106981235/AnsiballZ_stat.py'
Dec 05 01:07:43 compute-0 sudo[165002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:43 compute-0 python3.9[165004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:43 compute-0 sudo[165002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:44 compute-0 sudo[165080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nimxiebdfhrfsitjkfclbfaaebtuepdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896863.3935583-1052-90539106981235/AnsiballZ_file.py'
Dec 05 01:07:44 compute-0 sudo[165080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:44 compute-0 python3.9[165082]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.c2522dd1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:44 compute-0 sudo[165080]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:44 compute-0 sudo[165232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-copvcwehlijoufgjemsfbkfyqexingea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896864.4199362-1064-60293383171036/AnsiballZ_stat.py'
Dec 05 01:07:44 compute-0 sudo[165232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:44 compute-0 python3.9[165234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:44 compute-0 sudo[165232]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:45 compute-0 sudo[165310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmobtzlahajacxdtfegnmmagfapcsqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896864.4199362-1064-60293383171036/AnsiballZ_file.py'
Dec 05 01:07:45 compute-0 sudo[165310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:45 compute-0 python3.9[165312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:45 compute-0 sudo[165310]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:45 compute-0 sudo[165462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdcjdelysffntcxmhfbrcyagdjhekxkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896865.6372144-1077-30576638035804/AnsiballZ_command.py'
Dec 05 01:07:45 compute-0 sudo[165462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:46 compute-0 python3.9[165464]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:07:46 compute-0 sudo[165462]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:46 compute-0 sudo[165615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtflcxkeymsrkrrdtgpnlwknlxkiebvb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896866.251446-1085-80074671893841/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:07:46 compute-0 sudo[165615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:46 compute-0 python3[165617]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:07:46 compute-0 sudo[165615]: pam_unix(sudo:session): session closed for user root
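[annotation] The edpm_nftables_from_files module gathers the per-service rule snippets that earlier tasks staged under /var/lib/edpm-config/firewall and turns them into the nft fragments written below. At this point in the run that directory holds at least the three files created above (a sketch; the YAML contents themselves are not shown in the log):

    # snippets staged earlier in this run (see the copy/file tasks above)
    ls /var/lib/edpm-config/firewall/
    # edpm-nftables-base.yaml  edpm-nftables-user-rules.yaml  telemetry.yaml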
Dec 05 01:07:47 compute-0 sudo[165767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vevltwrasklunrweehmhfraepdpocrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896867.0731716-1093-79569740598117/AnsiballZ_stat.py'
Dec 05 01:07:47 compute-0 sudo[165767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:47 compute-0 python3.9[165769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:47 compute-0 sudo[165767]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:47 compute-0 sudo[165845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyclflxnhfogrtayzbofmsvgmuwtohln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896867.0731716-1093-79569740598117/AnsiballZ_file.py'
Dec 05 01:07:47 compute-0 sudo[165845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:47 compute-0 python3.9[165847]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:48 compute-0 sudo[165845]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:48 compute-0 sudo[165997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovizfcccmnvvnvtzthjydtuecmgnumho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896868.1856363-1105-28745496004041/AnsiballZ_stat.py'
Dec 05 01:07:48 compute-0 sudo[165997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:48 compute-0 python3.9[165999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:48 compute-0 sudo[165997]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:49 compute-0 sudo[166075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvxtrjviuxnpnavabmgkpmreaeszpxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896868.1856363-1105-28745496004041/AnsiballZ_file.py'
Dec 05 01:07:49 compute-0 sudo[166075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:49 compute-0 python3.9[166077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:49 compute-0 sudo[166075]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:49 compute-0 sudo[166227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypkylygxlgsarbkbzkpohzvzsuqihlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896869.3870602-1117-48875884599538/AnsiballZ_stat.py'
Dec 05 01:07:49 compute-0 sudo[166227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:49 compute-0 python3.9[166229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:49 compute-0 sudo[166227]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:50 compute-0 sudo[166305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubvdniqbooccucxvhbvopgdsvzkamug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896869.3870602-1117-48875884599538/AnsiballZ_file.py'
Dec 05 01:07:50 compute-0 sudo[166305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:50 compute-0 python3.9[166307]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:50 compute-0 sudo[166305]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:50 compute-0 podman[166384]: 2025-12-05 01:07:50.653840398 +0000 UTC m=+0.077273615 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:07:50 compute-0 sudo[166479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsifanwqbbnxfdhgwcfhfsnjcuazkniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896870.4289584-1129-113747979989675/AnsiballZ_stat.py'
Dec 05 01:07:50 compute-0 sudo[166479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:50 compute-0 python3.9[166481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:50 compute-0 sudo[166479]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:51 compute-0 sudo[166557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqyzktwmllztxakoxelkwjupegqaeax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896870.4289584-1129-113747979989675/AnsiballZ_file.py'
Dec 05 01:07:51 compute-0 sudo[166557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:51 compute-0 python3.9[166559]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:51 compute-0 sudo[166557]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:51 compute-0 sudo[166709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahoxmtoelzydatlorroglyidrienqkmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896871.5155437-1141-40659162752585/AnsiballZ_stat.py'
Dec 05 01:07:51 compute-0 sudo[166709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:52 compute-0 python3.9[166711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:07:52 compute-0 sudo[166709]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:52 compute-0 sudo[166834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-welukxoapcvunyovfewjhopxqlpzlbyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896871.5155437-1141-40659162752585/AnsiballZ_copy.py'
Dec 05 01:07:52 compute-0 sudo[166834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:52 compute-0 python3.9[166836]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896871.5155437-1141-40659162752585/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:52 compute-0 sudo[166834]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:53 compute-0 sudo[166986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxbouzaxfjrnjmmulaadrwcfyxhebxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896872.9369538-1156-252607588856485/AnsiballZ_file.py'
Dec 05 01:07:53 compute-0 sudo[166986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:53 compute-0 python3.9[166988]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:53 compute-0 sudo[166986]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:54 compute-0 sudo[167138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnemeanuaptfepfyyylxovexoiunokrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896873.7117646-1164-70183683465755/AnsiballZ_command.py'
Dec 05 01:07:54 compute-0 sudo[167138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:54 compute-0 python3.9[167140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:07:54 compute-0 sudo[167138]: pam_unix(sudo:session): session closed for user root
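[annotation] The command above validates the assembled ruleset without touching the kernel: the five fragments are concatenated in load order and piped to nft in check-only mode. As a standalone shell step (taken from the logged invocation, reflowed for readability):

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -   # -c: parse and check only, do not commit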
Dec 05 01:07:54 compute-0 sudo[167304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmpwyvugdzfcqfsjypeysdncnxsvpsjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896874.3839486-1172-44922275223604/AnsiballZ_blockinfile.py'
Dec 05 01:07:54 compute-0 sudo[167304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:54 compute-0 podman[167267]: 2025-12-05 01:07:54.941383955 +0000 UTC m=+0.065026424 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:07:55 compute-0 python3.9[167309]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:55 compute-0 sudo[167304]: pam_unix(sudo:session): session closed for user root
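[annotation] Given the block= payload and the marker template logged above, the section that blockinfile maintains in /etc/sysconfig/nftables.conf should read roughly as follows; validate=nft -c -f %s means the edited file is syntax-checked before being moved into place:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK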
Dec 05 01:07:55 compute-0 sudo[167468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vofnrlewknqukyfkglahdywyiwctkqmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896875.4275851-1181-21233241176204/AnsiballZ_command.py'
Dec 05 01:07:55 compute-0 sudo[167468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:55 compute-0 python3.9[167470]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:07:55 compute-0 sudo[167468]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:56 compute-0 sudo[167621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iggknwksxatzkxageabyvpjoouceqcwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896876.109027-1189-258989646878089/AnsiballZ_stat.py'
Dec 05 01:07:56 compute-0 sudo[167621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:56 compute-0 python3.9[167623]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:07:56 compute-0 sudo[167621]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:57 compute-0 sudo[167775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytquqcxyvwhxzpmcggcipvuzxtkqjuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896876.7994437-1197-11960952761634/AnsiballZ_command.py'
Dec 05 01:07:57 compute-0 sudo[167775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:57 compute-0 python3.9[167777]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:07:57 compute-0 sudo[167775]: pam_unix(sudo:session): session closed for user root
Dec 05 01:07:57 compute-0 sudo[167930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odnpwxxyasgwzfwrayuaxqiwmxaebiwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896877.4986494-1205-11632464455454/AnsiballZ_file.py'
Dec 05 01:07:57 compute-0 sudo[167930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:07:58 compute-0 python3.9[167932]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:07:58 compute-0 sudo[167930]: pam_unix(sudo:session): session closed for user root
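[annotation] The tasks between 01:07:53 and 01:07:58 implement a change-detection handshake around the live reload: a flag file is touched when the rules are rewritten, a stat task checks for it, the flush/rules/update-jumps fragments are applied to the running ruleset only if it exists, and the flag is then removed. Condensed into shell (a sketch of the logged sequence, not the role source):

    touch /etc/nftables/edpm-rules.nft.changed        # set after rules were rewritten
    if test -e /etc/nftables/edpm-rules.nft.changed; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -   # apply to the live ruleset
        rm -f /etc/nftables/edpm-rules.nft.changed           # clear the flag
    fi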
Dec 05 01:07:58 compute-0 sshd-session[144234]: Connection closed by 192.168.122.30 port 40404
Dec 05 01:07:58 compute-0 sshd-session[144231]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:07:58 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Dec 05 01:07:58 compute-0 systemd[1]: session-22.scope: Consumed 2min 4.612s CPU time.
Dec 05 01:07:58 compute-0 systemd-logind[792]: Session 22 logged out. Waiting for processes to exit.
Dec 05 01:07:58 compute-0 systemd-logind[792]: Removed session 22.
Dec 05 01:07:59 compute-0 podman[158197]: time="2025-12-05T01:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec 05 01:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2132 "" "Go-http-client/1.1"
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:08:01 compute-0 openstack_network_exporter[160350]: 
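[annotation] These exporter errors are consistent with the container's volume list logged earlier: it probes for OVS/OVN control sockets under /run/openvswitch and /run/ovn (bind-mounted from the host), and on a compute node it finds no ovn-northd or ovsdb-server control socket at the paths it checks; the dpif-netdev calls fail because no userspace datapath exists on this host. A quick host-side check (host paths from the volume list; the *.ctl naming is an assumption):

    # control sockets under the bind-mounted host paths; the ones the
    # exporter complains about are typically absent on a compute node
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null \
      || echo 'no control sockets found'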
Dec 05 01:08:04 compute-0 sshd-session[167963]: Accepted publickey for zuul from 192.168.122.30 port 34988 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:08:04 compute-0 systemd-logind[792]: New session 23 of user zuul.
Dec 05 01:08:04 compute-0 systemd[1]: Started Session 23 of User zuul.
Dec 05 01:08:04 compute-0 sshd-session[167963]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:08:04 compute-0 sudo[168117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzkzgcbmcdrbtqkdqqzzjmvpexfvrslm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896884.1577554-24-45043853991394/AnsiballZ_systemd_service.py'
Dec 05 01:08:04 compute-0 sudo[168117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:05 compute-0 python3.9[168119]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:08:05 compute-0 systemd[1]: Reloading.
Dec 05 01:08:05 compute-0 systemd-rc-local-generator[168147]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:08:05 compute-0 systemd-sysv-generator[168150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:08:05 compute-0 sudo[168117]: pam_unix(sudo:session): session closed for user root
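[annotation] daemon_reload=True maps onto a plain daemon-reload; the two generator messages above appear on every reload because /etc/rc.d/rc.local is not marked executable and the legacy 'network' SysV script has no native unit. By hand:

    # re-run unit generators and reload systemd's unit database, as
    # ansible.builtin.systemd_service with daemon_reload=True does
    systemctl daemon-reload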
Dec 05 01:08:06 compute-0 python3.9[168304]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:08:06 compute-0 network[168321]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:08:06 compute-0 network[168322]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:08:06 compute-0 network[168323]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:08:07 compute-0 podman[168347]: 2025-12-05 01:08:07.723871192 +0000 UTC m=+0.076289328 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:08:12 compute-0 podman[168460]: 2025-12-05 01:08:12.659121928 +0000 UTC m=+0.083228222 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec 05 01:08:12 compute-0 podman[168461]: 2025-12-05 01:08:12.671728909 +0000 UTC m=+0.090410681 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:08:13 compute-0 sudo[168660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejnsskcyadklyerxcfuohtctrnrdiwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896893.0604298-47-86543890311834/AnsiballZ_systemd_service.py'
Dec 05 01:08:13 compute-0 sudo[168660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:13 compute-0 python3.9[168662]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:08:13 compute-0 sudo[168660]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:14 compute-0 sudo[168813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlaejooittsxrquaxdpiltgbgvghvckh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896894.0319073-57-70611391773248/AnsiballZ_file.py'
Dec 05 01:08:14 compute-0 sudo[168813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:14 compute-0 python3.9[168815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:14 compute-0 sudo[168813]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:15 compute-0 sudo[168965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbopmqsnxfxjbdkfsttecberinuizsaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896894.9285398-65-110756297323849/AnsiballZ_file.py'
Dec 05 01:08:15 compute-0 sudo[168965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:15 compute-0 python3.9[168967]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:15 compute-0 sudo[168965]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:16 compute-0 sudo[169117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znffzkzyereyhacwcrcvxjsjitmmaixr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896895.7279458-74-82381887195589/AnsiballZ_command.py'
Dec 05 01:08:16 compute-0 sudo[169117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:16 compute-0 python3.9[169119]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:08:16 compute-0 sudo[169117]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:17 compute-0 python3.9[169271]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:08:18 compute-0 sudo[169421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwvzapplpzimevkjmqddvqeqxxkkekr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896897.6979241-92-64109932764957/AnsiballZ_systemd_service.py'
Dec 05 01:08:18 compute-0 sudo[169421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:18 compute-0 python3.9[169423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:08:18 compute-0 systemd[1]: Reloading.
Dec 05 01:08:18 compute-0 systemd-rc-local-generator[169450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:08:18 compute-0 systemd-sysv-generator[169454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:08:18 compute-0 sudo[169421]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:19 compute-0 sudo[169607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwluflfvpgbiyfgbsylcvlebmfdhnmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896898.8081646-100-83777510036569/AnsiballZ_command.py'
Dec 05 01:08:19 compute-0 sudo[169607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:19 compute-0 python3.9[169609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:08:19 compute-0 sudo[169607]: pam_unix(sudo:session): session closed for user root
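[annotation] Taken together, the tasks from 01:08:13 to 01:08:19 retire the legacy tripleo_ceilometer_agent_ipmi unit: stop and disable it, remove both possible unit-file locations, reload systemd, and clear any residual failed state. As a by-hand sequence (a sketch of what the logged tasks did, not the playbook source):

    systemctl disable --now tripleo_ceilometer_agent_ipmi.service
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service \
          /etc/systemd/system/tripleo_ceilometer_agent_ipmi.service
    systemctl daemon-reload
    systemctl reset-failed tripleo_ceilometer_agent_ipmi.service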
Dec 05 01:08:20 compute-0 sudo[169760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npqtzrnklkhqzaerewbiapynsupykuie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896899.9454525-109-126989174217880/AnsiballZ_file.py'
Dec 05 01:08:20 compute-0 sudo[169760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:20 compute-0 python3.9[169762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:20 compute-0 sudo[169760]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:21 compute-0 podman[169886]: 2025-12-05 01:08:21.354689141 +0000 UTC m=+0.074424436 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 05 01:08:21 compute-0 python3.9[169928]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:08:22 compute-0 python3.9[170084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:23 compute-0 python3.9[170205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896901.7899587-125-141671540543922/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:24 compute-0 sudo[170356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinmjaudmcfhnpyecdqmseknvjpjmlqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896903.5531814-143-49334048207345/AnsiballZ_getent.py'
Dec 05 01:08:24 compute-0 sudo[170356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:24 compute-0 python3.9[170358]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 05 01:08:24 compute-0 sudo[170356]: pam_unix(sudo:session): session closed for user root
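Annotation: the getent task above resolves the ceilometer account (its uid/gid are used to chown the TLS material later in this run). The equivalent NSS-backed lookup from Python, as a sketch:

```python
# Sketch of "getent passwd ceilometer": an NSS-backed passwd lookup.
# Like fail_key=True in the log, pwd.getpwnam raises (KeyError) if absent.
import pwd

entry = pwd.getpwnam("ceilometer")
print(entry.pw_uid, entry.pw_gid, entry.pw_dir, entry.pw_shell)
```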
Dec 05 01:08:25 compute-0 podman[170483]: 2025-12-05 01:08:25.321150315 +0000 UTC m=+0.066296089 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:08:25 compute-0 python3.9[170521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:26 compute-0 python3.9[170653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896904.9715354-171-54553531384899/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:26 compute-0 python3.9[170803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:27 compute-0 python3.9[170924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896906.322149-171-192554944794391/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:27 compute-0 python3.9[171074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:28 compute-0 python3.9[171195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896907.5024347-171-281401353561977/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
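Annotation: each config file above is deployed as a stat/copy pair: the stat computes the destination's SHA-1, and the copy runs only when it differs from the rendered source's checksum. A sketch of that idempotence check (the staging path shown is hypothetical; real paths are the ansible-tmp directories in the log):

```python
# Sketch of Ansible's stat-then-copy idempotence: copy only when the
# destination's SHA-1 differs from the rendered source's.
import hashlib
import shutil
from pathlib import Path

def sha1_of(path: Path) -> str:
    return hashlib.sha1(path.read_bytes()).hexdigest()

src = Path("/home/zuul/.ansible/tmp/.source.conf")  # hypothetical staging path
dest = Path("/var/lib/openstack/config/telemetry-power-monitoring/custom.conf")

if not dest.exists() or sha1_of(dest) != sha1_of(src):
    shutil.copy2(src, dest)  # reported as "changed"; otherwise "ok", no copy
```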
Dec 05 01:08:29 compute-0 python3.9[171345]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:08:29 compute-0 podman[158197]: time="2025-12-05T01:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec 05 01:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2136 "" "Go-http-client/1.1"
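Annotation: the podman[158197] lines are the podman system service answering libpod REST calls over its unix socket, consistent with the podman_exporter container configured with CONTAINER_HOST=unix:///run/podman/podman.sock further down. The same containers/json query can be issued with only the standard library (run as root for the root socket shown):

```python
# Sketch: GET /v4.9.3/libpod/containers/json over the podman service socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")   # host is ignored for AF_UNIX
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```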
Dec 05 01:08:30 compute-0 python3.9[171499]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:08:30 compute-0 python3.9[171651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:31 compute-0 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:08:31 compute-0 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:08:31 compute-0 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:08:31 compute-0 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:08:31 compute-0 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
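Annotation: these exporter errors are expected on a compute node: ovn-northd runs on control-plane nodes, so no control socket exists here, and the dpif-netdev/pmd-* appctl calls only succeed with a userspace (DPDK) datapath; with a kernel datapath there are no PMD threads to report. A sketch of the probe behind the appctl.go messages (ovs-appctl targets ovs-vswitchd's control socket by default):

```python
# Sketch: the PMD-stats probe that fails as logged on a kernel-datapath host.
import subprocess

result = subprocess.run(
    ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
    capture_output=True, text=True,
)
print(result.returncode, result.stderr.strip())
```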
Dec 05 01:08:31 compute-0 python3.9[171772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896910.4038968-230-200323059941808/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
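Annotation: mode=420 in this and the following copy/file invocations is not a typo. The playbook passed the mode as a bare integer, which Ansible logs in decimal; 420 decimal is exactly 0o644, the same permissions written elsewhere as mode=0644:

```python
# 420 in the log is 0o644 rendered in decimal: the task supplied the mode
# as an unquoted integer rather than the string "0644".
print(oct(420))  # -> 0o644
```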
Dec 05 01:08:32 compute-0 python3.9[171922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:32 compute-0 python3.9[171998]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:33 compute-0 python3.9[172148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:33 compute-0 python3.9[172269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896912.816547-230-18849588089998/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:34 compute-0 python3.9[172419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:35 compute-0 python3.9[172540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896914.1797688-230-14887794234914/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:35 compute-0 python3.9[172690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:36 compute-0 python3.9[172811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896915.3937232-230-235198200570475/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:37 compute-0 python3.9[172961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:37 compute-0 python3.9[173082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896916.7616167-230-168119229064234/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:37 compute-0 podman[173083]: 2025-12-05 01:08:37.863783096 +0000 UTC m=+0.071546343 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:08:38 compute-0 python3.9[173257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:39 compute-0 python3.9[173333]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:40 compute-0 sudo[173483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcuktnvchiqsjtouontysckyldutwylt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896919.3079453-325-241903336876282/AnsiballZ_file.py'
Dec 05 01:08:40 compute-0 sudo[173483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:40 compute-0 python3.9[173485]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:40 compute-0 sudo[173483]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:40 compute-0 sudo[173635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thjvdfmaoqdbhaxdtahaiyxrxqliyycc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896920.5146089-333-263969289917201/AnsiballZ_file.py'
Dec 05 01:08:40 compute-0 sudo[173635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:40 compute-0 python3.9[173637]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:41 compute-0 sudo[173635]: pam_unix(sudo:session): session closed for user root
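Annotation: the two file tasks above hand the telemetry TLS pair to the ceilometer account resolved by the earlier getent step, at mode 0644. The equivalent stdlib calls, as a sketch:

```python
# Sketch of the two file tasks: chown tls.crt/tls.key to ceilometer, 0644.
import os
import pwd
import grp

uid = pwd.getpwnam("ceilometer").pw_uid
gid = grp.getgrnam("ceilometer").gr_gid

for name in ("tls.crt", "tls.key"):
    p = f"/var/lib/openstack/certs/telemetry-power-monitoring/default/{name}"
    os.chown(p, uid, gid)
    os.chmod(p, 0o644)
```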
Dec 05 01:08:41 compute-0 sudo[173787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzvjmhlnatcusbngpvjcmoyazlavoyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896921.2014098-341-256433475631337/AnsiballZ_file.py'
Dec 05 01:08:41 compute-0 sudo[173787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:41 compute-0 python3.9[173789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:41 compute-0 sudo[173787]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:42 compute-0 sudo[173939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlefdzsvmshkdoyoyysqnudmahmsqccv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/AnsiballZ_stat.py'
Dec 05 01:08:42 compute-0 sudo[173939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:42 compute-0 python3.9[173941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:42 compute-0 sudo[173939]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
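Annotation: the manager's messages describe fanning each source's pollsters onto a fixed-size thread pool; with [1] worker, submissions simply queue, hence the warning above. A minimal sketch of that pattern (not ceilometer's actual code; names are illustrative):

```python
# Sketch of the logged pattern: more pollsters than worker threads,
# so work queues on a single-worker executor.
from concurrent.futures import ThreadPoolExecutor

def poll(name: str) -> str:
    # a real pollster would run discovery and collect samples here
    return f"polled {name}"

pollsters = ["cpu", "memory.usage", "disk.device.read.bytes"]
with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
    futures = [executor.submit(poll, p) for p in pollsters]
    for future in futures:
        print(future.result())
```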
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:08:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
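[editor's note] The burst above is one complete ceilometer-polling cycle: each pollster either finishes processing or is skipped when its discovery method returns no resources (no instances run on this host yet). A minimal sketch of that skip-vs-process decision, using simplified stand-in names rather than ceilometer's real classes (the actual logic lives in ceilometer/polling/manager.py):

    # Sketch only: "Pollster" and "discover" are hypothetical stand-ins.
    class Pollster:
        def __init__(self, name):
            self.name = name

    def run_cycle(pollsters, discover):
        for p in pollsters:
            resources = discover()  # e.g. the local_instances discovery seen above
            if not resources:
                print(f"Skip pollster {p.name}, no resources found this cycle")
                continue
            # ... collect one sample per resource here ...
            print(f"Finished processing pollster [{p.name}].")

    run_cycle([Pollster("network.incoming.packets.error"), Pollster("cpu")],
              discover=lambda: [])  # empty discovery -> every pollster is skipped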
Dec 05 01:08:42 compute-0 sudo[174063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvqrwdvycxfncdsxsuwklqxubawitveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/AnsiballZ_copy.py'
Dec 05 01:08:42 compute-0 sudo[174063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:42 compute-0 podman[174065]: 2025-12-05 01:08:42.781868334 +0000 UTC m=+0.057914010 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec 05 01:08:42 compute-0 podman[174066]: 2025-12-05 01:08:42.819808666 +0000 UTC m=+0.095080652 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 01:08:42 compute-0 python3.9[174067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:42 compute-0 sudo[174063]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:43 compute-0 sudo[174184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eapxcpbqpcmvhnlqspbzgaitegbyykjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/AnsiballZ_stat.py'
Dec 05 01:08:43 compute-0 sudo[174184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:43 compute-0 python3.9[174186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:43 compute-0 sudo[174184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:43 compute-0 sudo[174307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdolitfdanhdgmavgioqzzewcsgimagw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/AnsiballZ_copy.py'
Dec 05 01:08:43 compute-0 sudo[174307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:43 compute-0 python3.9[174309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:43 compute-0 sudo[174307]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:44 compute-0 sudo[174459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stprcjiuxlounoqtqgqusgavwdgqjfoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896924.1043966-349-159969000862248/AnsiballZ_stat.py'
Dec 05 01:08:44 compute-0 sudo[174459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:44 compute-0 python3.9[174461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:08:44 compute-0 sudo[174459]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:44 compute-0 sudo[174582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjslfkegowzjazwkrjcmznhkydovpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896924.1043966-349-159969000862248/AnsiballZ_copy.py'
Dec 05 01:08:44 compute-0 sudo[174582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:45 compute-0 python3.9[174584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896924.1043966-349-159969000862248/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:08:45 compute-0 sudo[174582]: pam_unix(sudo:session): session closed for user root
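[editor's note] The stat/copy pairs above are Ansible's idempotence pattern: each ansible.legacy.copy is preceded by an ansible.legacy.stat that computes a sha1 checksum of the destination, and the file is only shipped when the checksums differ. A rough sketch of that check (paths hypothetical; the real modules also handle ownership, mode, and SELinux context):

    import hashlib, os, shutil

    def copy_if_changed(src, dest):
        """Mimic Ansible's stat-then-copy pattern: compare sha1 before writing."""
        def sha1(path):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()
        if os.path.exists(dest) and sha1(dest) == sha1(src):
            return False              # unchanged -> task reports "ok"
        shutil.copy2(src, dest)       # changed -> task reports "changed"
        return True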
Dec 05 01:08:46 compute-0 sudo[174734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghbgxposysgmygycfwxqmavohjrqigd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896925.5325878-391-239863675559556/AnsiballZ_container_config_data.py'
Dec 05 01:08:46 compute-0 sudo[174734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:46 compute-0 python3.9[174736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 05 01:08:46 compute-0 sudo[174734]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:46 compute-0 sudo[174886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdldoqtehgeetifohavshhuiebqtkwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896926.4643505-400-253877738114140/AnsiballZ_container_config_hash.py'
Dec 05 01:08:46 compute-0 sudo[174886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:47 compute-0 python3.9[174888]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:08:47 compute-0 sudo[174886]: pam_unix(sudo:session): session closed for user root
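[editor's note] The container_config_hash step presumably walks the rendered config trees under the config_vol_prefix and hashes their contents, so a later run can detect drift and decide whether a container must be recreated. A sketch under that assumption (directory layout hypothetical):

    import hashlib, os

    def config_hash(root):
        """Hash every file under root in a stable order (sketch only)."""
        h = hashlib.sha256()
        for dirpath, _dirs, files in os.walk(root):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    # e.g. config_hash("/var/lib/config-data/telemetry")  # path assumed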
Dec 05 01:08:48 compute-0 sudo[175038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gupvbqawhfkntsxyhljhrkstatlpscii ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896927.4484894-410-128284359886997/AnsiballZ_edpm_container_manage.py'
Dec 05 01:08:48 compute-0 sudo[175038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:48 compute-0 python3[175040]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:08:51 compute-0 podman[175099]: 2025-12-05 01:08:51.92709733 +0000 UTC m=+0.327313355 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:08:53 compute-0 podman[175054]: 2025-12-05 01:08:53.994024611 +0000 UTC m=+5.583194417 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 05 01:08:54 compute-0 podman[175173]: 2025-12-05 01:08:54.146479228 +0000 UTC m=+0.044875862 container create 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec 05 01:08:54 compute-0 podman[175173]: 2025-12-05 01:08:54.119783897 +0000 UTC m=+0.018180531 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 05 01:08:54 compute-0 python3[175040]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec 05 01:08:54 compute-0 sudo[175038]: pam_unix(sudo:session): session closed for user root
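[editor's note] Note that the PODMAN-CONTAINER-DEBUG line above records the entire rendered container definition in the config_data label, so it can be read back from podman itself rather than from the playbook. A small sketch of doing that (container name taken from the log; as logged, the label is a Python literal with single quotes and True, not strict JSON):

    import ast, json, subprocess

    out = subprocess.run(["podman", "inspect", "ceilometer_agent_ipmi"],
                         capture_output=True, text=True, check=True).stdout
    labels = json.loads(out)[0]["Config"]["Labels"]
    config = ast.literal_eval(labels["config_data"])  # Python-literal label, not JSON
    print(config["healthcheck"]["test"])              # -> /openstack/healthcheck ipmi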
Dec 05 01:08:54 compute-0 sudo[175361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpnggkqyamibsvmawklcpobhmpmddse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896934.4709947-418-276384757470136/AnsiballZ_stat.py'
Dec 05 01:08:54 compute-0 sudo[175361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:54 compute-0 python3.9[175363]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:08:55 compute-0 sudo[175361]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:55 compute-0 sudo[175525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqahvwrttrxfzsxzaoxzfrqeofyulxau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896935.2946544-427-3120657536911/AnsiballZ_file.py'
Dec 05 01:08:55 compute-0 sudo[175525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:55 compute-0 podman[175489]: 2025-12-05 01:08:55.635048347 +0000 UTC m=+0.060586689 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:08:55 compute-0 python3.9[175530]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:55 compute-0 sudo[175525]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:56 compute-0 sudo[175692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcnygqhmmqmwmrjvfhntrcmlhgrlvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896935.892654-427-208129851032019/AnsiballZ_copy.py'
Dec 05 01:08:56 compute-0 sudo[175692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:56 compute-0 python3.9[175694]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896935.892654-427-208129851032019/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:08:56 compute-0 sudo[175692]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:57 compute-0 sudo[175768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkmrequozhlggxzuzgbtxglomhquxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896935.892654-427-208129851032019/AnsiballZ_systemd.py'
Dec 05 01:08:57 compute-0 sudo[175768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:57 compute-0 python3.9[175770]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:08:57 compute-0 systemd[1]: Reloading.
Dec 05 01:08:57 compute-0 systemd-sysv-generator[175800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:08:57 compute-0 systemd-rc-local-generator[175792]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:08:57 compute-0 sudo[175768]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:58 compute-0 sudo[175878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzforcpjwanjmlynzaceghdcztauvfkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896935.892654-427-208129851032019/AnsiballZ_systemd.py'
Dec 05 01:08:58 compute-0 sudo[175878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:08:58 compute-0 python3.9[175880]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:08:58 compute-0 systemd[1]: Reloading.
Dec 05 01:08:58 compute-0 systemd-rc-local-generator[175910]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:08:58 compute-0 systemd-sysv-generator[175914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:08:58 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 05 01:08:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 01:08:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec 05 01:08:58 compute-0 podman[175920]: 2025-12-05 01:08:58.995373695 +0000 UTC m=+0.168675488 container init 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + sudo -E kolla_set_configs
Dec 05 01:08:59 compute-0 podman[175920]: 2025-12-05 01:08:59.037971997 +0000 UTC m=+0.211273740 container start 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:08:59 compute-0 podman[175920]: ceilometer_agent_ipmi
Dec 05 01:08:59 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 05 01:08:59 compute-0 sudo[175941]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:08:59 compute-0 sudo[175941]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:08:59 compute-0 sudo[175941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:08:59 compute-0 sudo[175878]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Validating config file
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying service configuration files
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: INFO:__main__:Writing out command to execute
Dec 05 01:08:59 compute-0 sudo[175941]: pam_unix(sudo:session): session closed for user root
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: ++ cat /run_command
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + ARGS=
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + sudo kolla_copy_cacerts
Dec 05 01:08:59 compute-0 podman[175942]: 2025-12-05 01:08:59.139593808 +0000 UTC m=+0.077908138 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:08:59 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:08:59 compute-0 sudo[175962]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:08:59 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.service: Failed with result 'exit-code'.
Dec 05 01:08:59 compute-0 sudo[175962]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:08:59 compute-0 sudo[175962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:08:59 compute-0 sudo[175962]: pam_unix(sudo:session): session closed for user root
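[editor's note] The 88e4...-775ecfd96ef75a18.service unit failing with status=1 above is the transient service systemd spawns for a single podman healthcheck run; an exit of 1 this early simply means the first probe fired before ceilometer-polling was ready, and podman tracks it as health_failing_streak until a probe succeeds. The same probe can be run by hand; a sketch:

    import subprocess

    # One-shot health probe, as the systemd timer does (exit 0 = healthy).
    r = subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_ipmi"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")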
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + [[ ! -n '' ]]
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + . kolla_extend_start
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + umask 0022
Dec 05 01:08:59 compute-0 ceilometer_agent_ipmi[175935]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
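[editor's note] The trace above is the standard kolla_start contract: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies each listed source into place (COPY_ALWAYS strategy), fixes permissions, writes the service command to /run_command, and kolla_start then execs it. A condensed sketch of that flow (schema simplified; the real tool also handles perms, ownership, globs, and optional files):

    import json, shutil

    cfg = json.load(open("/var/lib/kolla/config_files/config.json"))
    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])   # e.g. ceilometer.conf, polling.yaml
    with open("/run_command", "w") as f:
        f.write(cfg["command"])
    # kolla_start then does: exec $(cat /run_command)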
Dec 05 01:08:59 compute-0 podman[158197]: time="2025-12-05T01:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 15575 "" "Go-http-client/1.1"
Dec 05 01:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2567 "" "Go-http-client/1.1"
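[editor's note] Those two GET lines are the libpod REST API (served by the podman[158197] API process) answering container list and stats queries over the podman socket. The same endpoint can be queried directly; a sketch, with the socket path assumed:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])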
Dec 05 01:08:59 compute-0 sudo[176116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdigisgvaagsgnuhttaaodexaidjiohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896939.4696877-453-174248016725061/AnsiballZ_container_config_data.py'
Dec 05 01:08:59 compute-0 sudo[176116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:00 compute-0 python3.9[176118]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 05 01:09:00 compute-0 sudo[176116]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
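The block ending above is oslo.config's startup dump: cotyledon's config glue calls log_opt_values(), which walks every registered option and prints it, substituting **** for any option registered with secret=True (hence the masked coordination.backend_url, publisher.telemetry_secret, rgw_admin_credentials keys, and vmware.host_password). A minimal sketch of that masking behaviour, using hypothetical option names rather than ceilometer's:

    import logging

    from oslo_config import cfg

    # Hypothetical options, only to demonstrate the masking: anything
    # registered with secret=True is printed as **** by log_opt_values(),
    # like the masked lines in the dump above.
    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('plain', default='visible'),
        cfg.StrOpt('token', secret=True, default='hunter2'),
    ])
    CONF([])  # parse an empty command line so option values are readable

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger('demo'), logging.DEBUG)
    # Output includes: plain = visible
    #                  token = ****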
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.201 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.203 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.204 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
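The three INFO lines above are the polling manager's dynamic-pollster discovery: each directory in pollsters_definitions_dirs is scanned for YAML definition files, and since /etc/ceilometer/pollsters.d is empty on this node, the agent proceeds with entry-point pollsters only. A rough illustration of that scan, under the assumption that definitions are plain *.yaml files (this is not ceilometer's actual code):

    import glob
    import os

    def find_pollster_files(dirs):
        # Collect YAML pollster definitions from each configured directory.
        files = []
        for d in dirs:
            files.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
        return files

    # Empty on this compute node, hence "No dynamic pollsters found in
    # folder [/etc/ceilometer/pollsters.d]".
    print(find_pollster_files(['/etc/ceilometer/pollsters.d']))  # []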
Dec 05 01:09:00 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.383 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpazubod4i/privsep.sock']
Dec 05 01:09:00 compute-0 sudo[176186]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpazubod4i/privsep.sock
Dec 05 01:09:00 compute-0 sudo[176186]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:09:00 compute-0 sudo[176186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:09:00 compute-0 sudo[176275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrvlmakhtpkbgshikulougxcuaeudlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896940.3470252-462-155245155844837/AnsiballZ_container_config_hash.py'
Dec 05 01:09:00 compute-0 sudo[176275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:00 compute-0 python3.9[176277]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:09:00 compute-0 sudo[176275]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:01 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 05 01:09:01 compute-0 sudo[176186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.205 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.206 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpazubod4i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.027 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.032 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.036 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.036 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
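The interleaved lines above are the usual oslo.privsep bootstrap: the unprivileged agent (uid 42405) runs sudo, sudo runs ceilometer-rootwrap, rootwrap execs privsep-helper, and the helper connects back over the privsep.sock UNIX socket before dropping from full root to the capability set of ceilometer.privsep.sys_admin_pctxt, which matches the eff/prm list the daemon logs. (The pam_systemd "Failed to connect to system bus" message is expected inside a container without D-Bus and is harmless here.) A privsep context of this kind is declared once at import time and then used as a decorator; a sketch with assumed prefix and section names, mirroring but not copying ceilometer's module:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Assumed names ('demo', 'demo_sys_admin'); the capability list matches
    # the eff/prm set reported in the log above.
    sys_admin_pctxt = priv_context.PrivContext(
        'demo',
        cfg_section='demo_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected(path):
        # Executes inside the privsep daemon (uid 0), not the caller.
        with open(path) as f:
            return f.read()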
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.316 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.317 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.317 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.319 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.319 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
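All twelve IPMI pollsters fail to load above, for two distinct reasons. The raw sensor pollsters (hardware.ipmi.current, .fan, .temperature, .voltage) are skipped because ipmitool cannot reach a BMC on this virtualized node. The hardware.ipmi.node.* (Intel Node Manager) pollsters instead die with a TypeError, which is what CPython raises when a class overrides __new__ and forwards constructor arguments on to object.__new__; with nothing left in the namespace, the agent logs the WARNING and will poll no meters. A minimal reproduction of that TypeError (illustrative only, not ceilometer's code):

    class NodeManager:
        # Toy singleton with the classic bug: forwarding args to object.__new__.
        _instance = None

        def __new__(cls, *args, **kwargs):
            if cls._instance is None:
                # BUG: object.__new__ accepts only the class to instantiate.
                cls._instance = super().__new__(cls, *args, **kwargs)
            return cls._instance

        def __init__(self, conf):
            self.conf = conf

    NodeManager({'polling_retry': 3})
    # TypeError: object.__new__() takes exactly one argument (the type to instantiate)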
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.321 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 05 01:09:01 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.345 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
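The "Config file" line above shows the exact polling definition the IPMI agent loaded. A minimal sketch in Python (assuming PyYAML is installed; the file name polling.yaml comes from the polling.cfg_file option logged earlier) of the equivalent on-disk YAML:

    # Reconstructs the polling.yaml that ceilometer.agent reports loading.
    # The parsed dict must equal the 'Config file:' log line verbatim.
    import yaml  # assumption: PyYAML available in the agent's environment

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120          # poll every 120 seconds
        meters:
          - 'hardware.*'       # all hardware.* (IPMI) meters
    """

    assert yaml.safe_load(POLLING_YAML) == {
        'sources': [{'name': 'pollsters', 'interval': 120,
                     'meters': ['hardware.*']}]
    }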
Dec 05 01:09:01 compute-0 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:09:01 compute-0 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:09:01 compute-0 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:09:01 compute-0 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:09:01 compute-0 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:09:01 compute-0 sudo[176433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lljesnetkvxjrcyefsugshsttgifdziu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764896941.240793-472-154795262075960/AnsiballZ_edpm_container_manage.py'
Dec 05 01:09:01 compute-0 sudo[176433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:01 compute-0 python3[176435]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:09:08 compute-0 podman[176579]: 2025-12-05 01:09:08.232270043 +0000 UTC m=+0.151458752 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:09:08 compute-0 podman[176449]: 2025-12-05 01:09:08.325806805 +0000 UTC m=+6.426361077 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 05 01:09:08 compute-0 podman[176675]: 2025-12-05 01:09:08.50599192 +0000 UTC m=+0.063385182 container create de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible)
Dec 05 01:09:08 compute-0 podman[176675]: 2025-12-05 01:09:08.469527996 +0000 UTC m=+0.026921338 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 05 01:09:08 compute-0 python3[176435]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec 05 01:09:08 compute-0 sudo[176433]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:09 compute-0 sudo[176863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfsaroldpmurcqmgknncicvbtamtjgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896948.9120717-480-269572054747260/AnsiballZ_stat.py'
Dec 05 01:09:09 compute-0 sudo[176863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:09 compute-0 python3.9[176865]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:09:09 compute-0 sudo[176863]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:10 compute-0 sudo[177017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcmqnbcuxjowhrdueneehhpkiitcxjwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896949.737764-489-4929903622013/AnsiballZ_file.py'
Dec 05 01:09:10 compute-0 sudo[177017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:10 compute-0 python3.9[177019]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:10 compute-0 sudo[177017]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:10 compute-0 sudo[177168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolhyqwcbnlkraliffstggtueayhrtzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896950.3974333-489-70350947086373/AnsiballZ_copy.py'
Dec 05 01:09:10 compute-0 sudo[177168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:11 compute-0 python3.9[177170]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896950.3974333-489-70350947086373/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:11 compute-0 sudo[177168]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:11 compute-0 sudo[177244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-japeswbandaiqjxpypvazujncuybzyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896950.3974333-489-70350947086373/AnsiballZ_systemd.py'
Dec 05 01:09:11 compute-0 sudo[177244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:11 compute-0 python3.9[177246]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:09:11 compute-0 systemd[1]: Reloading.
Dec 05 01:09:11 compute-0 systemd-rc-local-generator[177276]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:09:11 compute-0 systemd-sysv-generator[177280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:09:12 compute-0 sudo[177244]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:12 compute-0 sudo[177356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobjkczcudraitmwbgtvvjiupkadnzus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896950.3974333-489-70350947086373/AnsiballZ_systemd.py'
Dec 05 01:09:12 compute-0 sudo[177356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:12 compute-0 python3.9[177358]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:09:12 compute-0 systemd[1]: Reloading.
Dec 05 01:09:12 compute-0 systemd-rc-local-generator[177385]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:09:12 compute-0 systemd-sysv-generator[177390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:09:13 compute-0 systemd[1]: Starting kepler container...
Dec 05 01:09:13 compute-0 podman[177398]: 2025-12-05 01:09:13.274008852 +0000 UTC m=+0.231680869 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:09:13 compute-0 podman[177396]: 2025-12-05 01:09:13.297430758 +0000 UTC m=+0.257587629 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:09:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:09:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec 05 01:09:15 compute-0 podman[177399]: 2025-12-05 01:09:15.365410717 +0000 UTC m=+2.304707788 container init de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, version=9.4, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9)
Dec 05 01:09:15 compute-0 kepler[177459]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.392856       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.393081       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.393134       1 config.go:295] kernel version: 5.14
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.393854       1 power.go:78] Unable to obtain power, use estimate method
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.393904       1 redfish.go:169] failed to get redfish credential file path
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.394257       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.394263       1 power.go:79] using none to obtain power
Dec 05 01:09:15 compute-0 kepler[177459]: E1205 01:09:15.394278       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 05 01:09:15 compute-0 kepler[177459]: E1205 01:09:15.394535       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 05 01:09:15 compute-0 kepler[177459]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:09:15 compute-0 podman[177399]: 2025-12-05 01:09:15.395761753 +0000 UTC m=+2.335058734 container start de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, name=ubi9, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.396733       1 exporter.go:84] Number of CPUs: 8
Dec 05 01:09:15 compute-0 podman[177399]: kepler
Dec 05 01:09:15 compute-0 systemd[1]: Started kepler container.
Dec 05 01:09:15 compute-0 sudo[177356]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:15 compute-0 podman[177469]: 2025-12-05 01:09:15.488293549 +0000 UTC m=+0.076626145 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 01:09:15 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:09:15 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.service: Failed with result 'exit-code'.
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.965392       1 watcher.go:83] Using in cluster k8s config
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.965608       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 05 01:09:15 compute-0 kepler[177459]: E1205 01:09:15.965795       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
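The watcher failure above is the standard in-cluster precondition: without KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT in the environment, in-cluster configuration cannot load, which is expected for a podman-managed Kepler running outside Kubernetes. A sketch of that same check (hypothetical helper, not Kepler's own code):

    import os

    def in_cluster_config_available() -> bool:
        # Mirrors the env-var requirement named in the log line above.
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                    and os.environ.get("KUBERNETES_SERVICE_PORT"))

    print(in_cluster_config_available())  # False on this EDPM compute node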
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.971500       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.971651       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.980004       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.980153       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.988818       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.989021       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:09:15 compute-0 kepler[177459]: I1205 01:09:15.989144       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.004836       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005007       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005018       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005027       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005039       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005066       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005219       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005343       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005419       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005452       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.005612       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 05 01:09:16 compute-0 kepler[177459]: I1205 01:09:16.006493       1 exporter.go:208] Started Kepler in 613.897533ms
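Once the exporter logs "starting to listen on 0.0.0.0:8888", its Prometheus metrics should be reachable from the host. A hedged usage sketch (the /metrics path follows the Prometheus convention and is an assumption; it does not appear in this log):

    # Scrape Kepler's metrics endpoint locally and print node-level samples.
    # Standard library only.
    import urllib.request

    with urllib.request.urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
        body = resp.read().decode()

    for line in body.splitlines():
        if line.startswith("kepler_node_"):  # assumption: kepler_* metric prefix
            print(line)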
Dec 05 01:09:16 compute-0 sudo[177653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhjfrkrwppqwtwryscjcxpmmgsgbkcal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896955.6627982-513-249621216186855/AnsiballZ_systemd.py'
Dec 05 01:09:16 compute-0 sudo[177653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:16 compute-0 python3.9[177655]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:09:16 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 05 01:09:16 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.568 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 05 01:09:16 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.670 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 05 01:09:16 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.670 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 05 01:09:16 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.671 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 05 01:09:16 compute-0 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.687 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
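
[editor's note] The SIGTERM lines above come from cotyledon, the process-manager library that ceilometer-polling runs under: the master catches SIGTERM, forwards it to its service children, and waits for them to exit before logging "Shutdown finish". A minimal sketch of that pattern with cotyledon follows; the service body is a placeholder, not ceilometer's AgentManager.

    # Minimal cotyledon service: the ServiceManager master handles SIGTERM
    # and tears down its workers gracefully, matching the log sequence above.
    import time
    import cotyledon

    class Poller(cotyledon.Service):
        def run(self):
            # worker loop; placeholder body
            while True:
                time.sleep(1)

        def terminate(self):
            # called once on graceful shutdown; close resources here
            pass

    manager = cotyledon.ServiceManager()
    manager.add(Poller, workers=1)
    manager.run()  # blocks until SIGTERM/SIGINT
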
Dec 05 01:09:16 compute-0 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:09:16 compute-0 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Consumed 2.677s CPU time.
Dec 05 01:09:16 compute-0 podman[177659]: 2025-12-05 01:09:16.910289934 +0000 UTC m=+0.422569221 container died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:09:16 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.timer: Deactivated successfully.
Dec 05 01:09:16 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec 05 01:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-userdata-shm.mount: Deactivated successfully.
Dec 05 01:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d-merged.mount: Deactivated successfully.
Dec 05 01:09:17 compute-0 podman[177659]: 2025-12-05 01:09:17.476566715 +0000 UTC m=+0.988846022 container cleanup 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 01:09:17 compute-0 podman[177659]: ceilometer_agent_ipmi
Dec 05 01:09:17 compute-0 podman[177686]: ceilometer_agent_ipmi
Dec 05 01:09:17 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 05 01:09:17 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 05 01:09:17 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 05 01:09:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 01:09:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec 05 01:09:17 compute-0 podman[177698]: 2025-12-05 01:09:17.835681313 +0000 UTC m=+0.217724758 container init 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + sudo -E kolla_set_configs
Dec 05 01:09:17 compute-0 podman[177698]: 2025-12-05 01:09:17.868065451 +0000 UTC m=+0.250108886 container start 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 01:09:17 compute-0 podman[177698]: ceilometer_agent_ipmi
Dec 05 01:09:17 compute-0 sudo[177718]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:09:17 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 05 01:09:17 compute-0 sudo[177718]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:09:17 compute-0 sudo[177718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:09:17 compute-0 sudo[177653]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Validating config file
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying service configuration files
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: INFO:__main__:Writing out command to execute
Dec 05 01:09:17 compute-0 sudo[177718]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: ++ cat /run_command
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + ARGS=
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + sudo kolla_copy_cacerts
Dec 05 01:09:17 compute-0 podman[177719]: 2025-12-05 01:09:17.975578725 +0000 UTC m=+0.089020536 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:09:17 compute-0 sudo[177739]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:09:17 compute-0 sudo[177739]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:09:17 compute-0 sudo[177739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:09:17 compute-0 sudo[177739]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:17 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:09:17 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Failed with result 'exit-code'.
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + [[ ! -n '' ]]
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + . kolla_extend_start
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + umask 0022
Dec 05 01:09:17 compute-0 ceilometer_agent_ipmi[177712]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
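
[editor's note] The trace from the INFO:__main__ lines through this final exec is kolla_start at work: kolla_set_configs copies the files declared in /var/lib/kolla/config_files/config.json into place, the command is written to /run_command, and the wrapper execs it. A stripped-down sketch of that copy-then-exec flow; field names follow kolla's config.json convention of "command" plus a "config_files" list of source/dest entries, while ownership, permissions, and error handling are omitted.

    # Stripped-down sketch of the kolla_start flow seen above: copy config
    # files declared in config.json, record the command, then exec it.
    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for entry in cfg.get("config_files", []):
        # each entry carries at least "source" and "dest" in kolla's schema
        shutil.copy(entry["source"], entry["dest"])

    with open("/run_command", "w") as f:
        f.write(cfg["command"])

    cmd = cfg["command"].split()
    os.execvp(cmd[0], cmd)  # replace this process, like the final exec above
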
Dec 05 01:09:18 compute-0 sudo[177892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdxysmrqeojszfcdojjmxiolljratwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896958.235933-521-149627381643603/AnsiballZ_systemd.py'
Dec 05 01:09:18 compute-0 sudo[177892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.902 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.903 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.904 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 05 01:09:18 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.921 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmps8_kvf0_/privsep.sock']
Dec 05 01:09:18 compute-0 sudo[177899]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmps8_kvf0_/privsep.sock
Dec 05 01:09:18 compute-0 sudo[177899]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:09:18 compute-0 sudo[177899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:09:18 compute-0 python3.9[177894]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:09:19 compute-0 systemd[1]: Stopping kepler container...
Dec 05 01:09:19 compute-0 kepler[177459]: I1205 01:09:19.110720       1 exporter.go:218] Received shutdown signal
Dec 05 01:09:19 compute-0 kepler[177459]: I1205 01:09:19.111878       1 exporter.go:226] Exiting...
Dec 05 01:09:19 compute-0 systemd[1]: libpod-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:09:19 compute-0 podman[177905]: 2025-12-05 01:09:19.313254306 +0000 UTC m=+0.255581878 container died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec 05 01:09:19 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.timer: Deactivated successfully.
Dec 05 01:09:19 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec 05 01:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-userdata-shm.mount: Deactivated successfully.
Dec 05 01:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a374ec8aa50f4d970047ac6324333a688dcc2712f075ca8bf268b9db1c5579b0-merged.mount: Deactivated successfully.
Dec 05 01:09:19 compute-0 podman[177905]: 2025-12-05 01:09:19.913402684 +0000 UTC m=+0.855730246 container cleanup de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, container_name=kepler, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, architecture=x86_64)
Dec 05 01:09:19 compute-0 podman[177905]: kepler
Dec 05 01:09:19 compute-0 sudo[177899]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.949 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.950 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps8_kvf0_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.488 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.496 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.501 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 01:09:19 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.501 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 05 01:09:19 compute-0 podman[177935]: kepler
Dec 05 01:09:19 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 05 01:09:19 compute-0 systemd[1]: Stopped kepler container.
Dec 05 01:09:20 compute-0 systemd[1]: Starting kepler container...
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.055 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.055 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
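The wall of DEBUG lines above is oslo.config's startup dump: cotyledon's oslo_config_glue hooks service start and calls ConfigOpts.log_opt_values(), which logs every registered option group by group, masks options registered with secret=True (hence transport_url = ****), and closes with the row of asterisks at cfg.py:2613. A minimal sketch of the same mechanism, with hypothetical option names:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical options shaped like the zaqar/oslo_messaging ones above.
    CONF.register_opts([
        cfg.StrOpt('interface', default='internal'),
        cfg.StrOpt('transport_url', secret=True),  # secret=True is logged as ****
    ], group='demo')

    CONF(args=[], project='demo')
    # One "demo.<opt> = <value>" DEBUG line per option, book-ended by
    # asterisk rows, exactly the shape of the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)
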
Dec 05 01:09:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:09:20 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.082 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 05 01:09:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec 05 01:09:20 compute-0 podman[177950]: 2025-12-05 01:09:20.118615037 +0000 UTC m=+0.104875586 container init de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=)
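The config_data dict inside the container-init event is edpm_ansible's declarative form of a podman invocation. A rough reconstruction of the equivalent CLI call, built with subprocess; the dict-to-flag mapping (privileged, net, ports, environment, volumes, restart, healthcheck) uses standard podman options and is an assumption about the translation, not code lifted from edpm_ansible:

    import subprocess

    cmd = [
        "podman", "run", "--detach", "--name", "kepler",
        "--privileged", "--network", "host", "--restart", "always",
        "--publish", "8888:8888",
        "--health-cmd", "/openstack/healthcheck kepler",
    ]
    for k, v in {
        "ENABLE_GPU": "true",
        "EXPOSE_CONTAINER_METRICS": "true",
        "ENABLE_PROCESS_METRICS": "true",
        "EXPOSE_VM_METRICS": "true",
        "EXPOSE_ESTIMATED_IDLE_POWER_METRICS": "false",
        "LIBVIRT_METADATA_URI": "http://openstack.org/xmlns/libvirt/nova/1.1",
    }.items():
        cmd += ["--env", f"{k}={v}"]
    for vol in [
        "/lib/modules:/lib/modules:ro", "/run/libvirt:/run/libvirt:shared,ro",
        "/sys:/sys", "/proc:/proc",
        "/var/lib/openstack/healthchecks/kepler:/openstack:ro,z",
    ]:
        cmd += ["--volume", vol]
    cmd += ["quay.io/sustainable_computing_io/kepler:release-0.7.12", "-v=2"]
    subprocess.run(cmd, check=True)
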
Dec 05 01:09:20 compute-0 kepler[177967]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:09:20 compute-0 podman[177950]: 2025-12-05 01:09:20.146000476 +0000 UTC m=+0.132261005 container start de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc.)
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.148795       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.148978       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.149015       1 config.go:295] kernel version: 5.14
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.149609       1 power.go:78] Unable to obtain power, use estimate method
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.149634       1 redfish.go:169] failed to get redfish credential file path
Dec 05 01:09:20 compute-0 podman[177950]: kepler
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.150218       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.150238       1 power.go:79] using none to obtain power
Dec 05 01:09:20 compute-0 kepler[177967]: E1205 01:09:20.150260       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 05 01:09:20 compute-0 kepler[177967]: E1205 01:09:20.150293       1 exporter.go:154] failed to init GPU accelerators: no devices found
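The lines at 01:09:20.149-150 show kepler walking its power-source chain: no hardware power reading, no Redfish credential file, no ACPI power meter, so it falls back to "none"/estimation, and the GPU probe finds no devices despite ENABLE_GPU=true. All of that is expected inside a Nova/KVM guest. A rough probe of the conventional sysfs locations (paths are the usual Linux ones, not taken from kepler's source):

    import glob

    def probe_power_sources() -> dict:
        # RAPL domains exposed through the powercap framework (host-only).
        rapl = glob.glob("/sys/class/powercap/intel-rapl*")
        # The ACPI power meter driver registers an hwmon device whose
        # name contains "power"; absent on virtually all VMs.
        acpi = [p for p in glob.glob("/sys/class/hwmon/hwmon*/name")
                if "power" in open(p).read()]
        return {"rapl": rapl, "acpi_power_meter": acpi}

    print(probe_power_sources())  # both lists empty on this KVM guest
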
Dec 05 01:09:20 compute-0 kepler[177967]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.152488       1 exporter.go:84] Number of CPUs: 8
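The repeated WARNING about /sys/devices/system/cpu/cpu0/online is benign: on x86 the boot CPU normally cannot be hot-unplugged, so the kernel creates the online attribute for cpu1..cpuN but not for cpu0. A tolerant reader treats the missing file as "online", which matches the count of 8 CPUs reported right after:

    import os

    def cpu_is_online(n: int) -> bool:
        try:
            with open(f"/sys/devices/system/cpu/cpu{n}/online") as f:
                return f.read().strip() == "1"
        except FileNotFoundError:
            # cpu0 usually lacks the attribute because it is not
            # hot-pluggable; a missing file means the CPU is online.
            return True

    print(sum(cpu_is_online(n) for n in range(os.cpu_count())))  # 8 here
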
Dec 05 01:09:20 compute-0 systemd[1]: Started kepler container.
Dec 05 01:09:20 compute-0 sudo[177892]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:20 compute-0 podman[177978]: 2025-12-05 01:09:20.238327086 +0000 UTC m=+0.082400984 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.buildah.version=1.29.0)
Dec 05 01:09:20 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:09:20 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Failed with result 'exit-code'.
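The failed unit de56270...-4ebf4b8c79608771.service is the transient unit systemd runs for each scheduled podman healthcheck; exiting 1 here only records that the first probe fired while kepler was still starting (the health_status=starting, health_failing_streak=1 event at 01:09:20.238 is that same probe). The current verdict can be read back from the container state, for instance:

    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "kepler"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    # Same fields the journal events carry: Status, FailingStreak, Log.
    print(health["Status"], health["FailingStreak"])
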
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.576599       1 watcher.go:83] Using in cluster k8s config
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.576642       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 05 01:09:20 compute-0 kepler[177967]: E1205 01:09:20.576722       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
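Kepler first tries in-cluster Kubernetes configuration and, failing that, runs without its k8s APIserver watcher; on a bare EDPM compute node the two service env vars never exist, so this is the expected path. The same check via the kubernetes Python client (an illustrative stand-in for kepler's Go client-go code):

    from kubernetes import config
    from kubernetes.config import ConfigException

    try:
        # Needs KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT plus a
        # mounted service-account token; none of these exist outside a pod.
        config.load_incluster_config()
    except ConfigException as exc:
        print(f"watcher disabled: {exc}")
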
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.582683       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.582736       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.587496       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.587537       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
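The Ratio Power Model announced here splits node-level power across processes in proportion to a usage feature, bpf_cpu_time_ms in this build (with gpu_compute_util joining the component split). The attribution itself is a weighted share; a sketch with invented numbers:

    def ratio_attribution(node_power_w: float, usage_ms: dict) -> dict:
        """Split node power across consumers proportionally to CPU time."""
        total = sum(usage_ms.values()) or 1.0
        return {name: node_power_w * t / total for name, t in usage_ms.items()}

    # Hypothetical 10 W of node power over one sampling window.
    print(ratio_attribution(10.0, {"qemu-kvm": 2400.0, "ovn-controller": 600.0}))
    # -> {'qemu-kvm': 8.0, 'ovn-controller': 2.0}
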
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.594689       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.594730       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.594751       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601332       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601371       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601376       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601382       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601390       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601406       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
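For node platform and component power, kepler loads a Regressor/AbsPower model matched to the reported machine spec (authenticamd/amd_epyc_rome above) whose weights were trained offline by an SGDRegressorTrainer; at inference time the "linear" predictor is a dot product plus an intercept. A sketch with invented weights:

    def linear_abs_power(features: dict, weights: dict, intercept: float) -> float:
        # AbsPower-style inference: power = w . x + b (weights hypothetical).
        return intercept + sum(weights[k] * v for k, v in features.items())

    w = {"bpf_cpu_time_ms": 0.004, "cpu_frequency_mhz": 0.001}
    print(linear_abs_power(
        {"bpf_cpu_time_ms": 3000.0, "cpu_frequency_mhz": 2800.0}, w, intercept=35.0))
    # -> 49.8 (watts, on these made-up weights)
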
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601507       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601547       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601596       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601624       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.601716       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 05 01:09:20 compute-0 kepler[177967]: I1205 01:09:20.602173       1 exporter.go:208] Started Kepler in 453.628694ms
Dec 05 01:09:20 compute-0 sudo[178161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edbrydtupbuqerlwprtksjgqgpxokdgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896960.40205-529-98803004593963/AnsiballZ_find.py'
Dec 05 01:09:20 compute-0 sudo[178161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:20 compute-0 python3.9[178163]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:09:21 compute-0 sudo[178161]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:22 compute-0 sudo[178313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmwvoguuvztulhgfzdojltdthyitbuhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896961.433565-539-235300523172126/AnsiballZ_podman_container_info.py'
Dec 05 01:09:22 compute-0 sudo[178313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:22 compute-0 python3.9[178315]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 05 01:09:22 compute-0 sudo[178313]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:23 compute-0 sudo[178477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxxiarcupdbjhnecfrjngbfoxvdtefe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896962.7042594-547-114575936324435/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:23 compute-0 sudo[178477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:23 compute-0 python3.9[178479]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:23 compute-0 podman[178480]: 2025-12-05 01:09:23.753953474 +0000 UTC m=+0.159876490 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:09:23 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:09:23 compute-0 podman[178486]: 2025-12-05 01:09:23.813455795 +0000 UTC m=+0.181693945 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:09:23 compute-0 podman[178486]: 2025-12-05 01:09:23.846374047 +0000 UTC m=+0.214612197 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:09:23 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:09:23 compute-0 sudo[178477]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:24 compute-0 sudo[178678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoladgpxnsinpxkqnbtbqksvedgaudms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896964.1490946-555-162617774522129/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:24 compute-0 sudo[178678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:24 compute-0 python3.9[178680]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:25 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:09:25 compute-0 podman[178681]: 2025-12-05 01:09:25.074333219 +0000 UTC m=+0.157368085 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:09:25 compute-0 podman[178681]: 2025-12-05 01:09:25.109086399 +0000 UTC m=+0.192121265 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 01:09:25 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:09:25 compute-0 sudo[178678]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:26 compute-0 sudo[178879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhfxzjpysfkncitadvoecvmlcyhunwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896965.4906998-563-38191476267607/AnsiballZ_file.py'
Dec 05 01:09:26 compute-0 sudo[178879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:26 compute-0 podman[178834]: 2025-12-05 01:09:26.016844419 +0000 UTC m=+0.117819310 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
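node_exporter's --collector.systemd.unit-include flag above keeps the systemd collector scoped to the deployment's own units. The exporter anchors the pattern so it must match the full unit name; a quick check with re.fullmatch (the anchoring behavior and the sample unit names are assumptions for illustration):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["openvswitch.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service is excluded; the others are collected.
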
Dec 05 01:09:26 compute-0 python3.9[178887]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:26 compute-0 sudo[178879]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:26 compute-0 sudo[179037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkcfkakqdepsyavglttnlocyxmipnli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896966.5182939-572-164280240551847/AnsiballZ_podman_container_info.py'
Dec 05 01:09:26 compute-0 sudo[179037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:27 compute-0 python3.9[179039]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 05 01:09:27 compute-0 sudo[179037]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:27 compute-0 sudo[179202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isigyvsbccqfphvrpnjdgbgfvymqwcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896967.5442982-580-216997400239343/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:27 compute-0 sudo[179202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:28 compute-0 python3.9[179204]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:28 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:09:28 compute-0 podman[179205]: 2025-12-05 01:09:28.344650396 +0000 UTC m=+0.121330302 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 05 01:09:28 compute-0 podman[179205]: 2025-12-05 01:09:28.379306383 +0000 UTC m=+0.155986299 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:09:28 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:09:28 compute-0 sudo[179202]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:29 compute-0 sudo[179382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbuaylawvznsieitoddvdbepdwtrpyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896968.6956947-588-214529091291566/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:29 compute-0 sudo[179382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:29 compute-0 python3.9[179384]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:29 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:09:29 compute-0 podman[179385]: 2025-12-05 01:09:29.595998952 +0000 UTC m=+0.173024360 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:09:29 compute-0 podman[179385]: 2025-12-05 01:09:29.60441258 +0000 UTC m=+0.181437958 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:09:29 compute-0 sudo[179382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:29 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:09:29 compute-0 podman[158197]: time="2025-12-05T01:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Dec 05 01:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
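These two @ - - GET lines are the podman system service answering libpod REST calls over its Unix socket (a client listing all containers, then sampling stats). A bare-bones client for the same endpoint; the socket path is podman's default rootful location and an assumption here:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Plain HTTP, but dialed over a Unix socket instead of TCP.
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])
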
Dec 05 01:09:30 compute-0 sudo[179566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehfehvhyqeplybxmpllwjygijpygpxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896969.9441373-596-121188298922201/AnsiballZ_file.py'
Dec 05 01:09:30 compute-0 sudo[179566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:30 compute-0 python3.9[179568]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:30 compute-0 sudo[179566]: pam_unix(sudo:session): session closed for user root
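The Ansible tasks around here repeat one pattern per service: podman_container_info to confirm the container exists, podman exec ... id -u / id -g to learn its runtime UID/GID, then ansible.builtin.file to chown the matching healthcheck directory (0/0 for the root-run ovn_controller, 42405 for the ceilometer user above). The same flow condensed into one helper (the helper itself is illustrative; paths and names are from the log):

    import os, subprocess

    def fix_healthcheck_owner(container: str) -> None:
        # Chown a container's healthcheck dir to the UID/GID inside it.
        def exec_id(flag: str) -> int:
            return int(subprocess.run(
                ["podman", "exec", container, "id", flag],
                check=True, capture_output=True, text=True).stdout)

        path = f"/var/lib/openstack/healthchecks/{container}"
        os.chmod(path, 0o700)
        os.chown(path, exec_id("-u"), exec_id("-g"))

    # e.g. fix_healthcheck_owner("ceilometer_agent_compute")  # -> 42405:42405
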
Dec 05 01:09:31 compute-0 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:09:31 compute-0 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:09:31 compute-0 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:09:31 compute-0 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:09:31 compute-0 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
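openstack_network_exporter issues ovs-appctl-style calls, which need each daemon's control socket (<daemon>.<pid>.ctl) under the run directories its config_data mounts (/run/openvswitch and /run/ovn, see the 01:09:23 event). The errors above mean those lookups came up empty, which is unsurprising on a compute node: ovn-northd only runs alongside the OVN databases, and the dpif-netdev calls fail because no userspace (netdev) datapath exists here. A check along the same lines:

    import glob

    for daemon, pattern in {
        "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd":   "/run/ovn/ovn-northd.*.ctl",
    }.items():
        hits = glob.glob(pattern)
        print(daemon, hits or "no control socket files found")
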
Dec 05 01:09:31 compute-0 sudo[179718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqyehipheenkphlwydatqvlzdmkijrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896970.9792242-605-101983963972380/AnsiballZ_podman_container_info.py'
Dec 05 01:09:31 compute-0 sudo[179718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:31 compute-0 python3.9[179720]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 05 01:09:31 compute-0 sudo[179718]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:32 compute-0 sudo[179883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lidaxrffmxfcxuvvbtachxxrojdmbylu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896972.101786-613-37835004055766/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:32 compute-0 sudo[179883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:32 compute-0 python3.9[179887]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:32 compute-0 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec 05 01:09:32 compute-0 podman[179888]: 2025-12-05 01:09:32.956527046 +0000 UTC m=+0.108525751 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:09:33 compute-0 podman[179909]: 2025-12-05 01:09:33.124719849 +0000 UTC m=+0.149658534 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:09:33 compute-0 podman[179888]: 2025-12-05 01:09:33.151759429 +0000 UTC m=+0.303758084 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:09:33 compute-0 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:09:33 compute-0 sudo[179883]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:34 compute-0 sudo[180071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjegytxtyuewrdfxnbvshzlaryvznnfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896973.5371692-621-169256894976770/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:34 compute-0 sudo[180071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:34 compute-0 python3.9[180073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:34 compute-0 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec 05 01:09:34 compute-0 podman[180074]: 2025-12-05 01:09:34.460359949 +0000 UTC m=+0.146753960 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:09:34 compute-0 podman[180074]: 2025-12-05 01:09:34.496365321 +0000 UTC m=+0.182759272 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:09:34 compute-0 sudo[180071]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:34 compute-0 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:09:35 compute-0 sudo[180253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otadcjjvqryzlfenklydyodhbcpeyalz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896974.8736773-629-152838854285468/AnsiballZ_file.py'
Dec 05 01:09:35 compute-0 sudo[180253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:35 compute-0 python3.9[180255]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:35 compute-0 sudo[180253]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:36 compute-0 sudo[180405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xicqtlsnbfrmxdffzwttpznqweugjihm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896975.92225-638-91440142479940/AnsiballZ_podman_container_info.py'
Dec 05 01:09:36 compute-0 sudo[180405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:36 compute-0 python3.9[180407]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 05 01:09:36 compute-0 sudo[180405]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:37 compute-0 sudo[180570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-barejhaktipohyldnminsuyxzlvmitfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896977.065942-646-120920009324864/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:37 compute-0 sudo[180570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:37 compute-0 python3.9[180572]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:37 compute-0 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec 05 01:09:37 compute-0 podman[180573]: 2025-12-05 01:09:37.844355372 +0000 UTC m=+0.126959039 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:09:37 compute-0 podman[180573]: 2025-12-05 01:09:37.883947085 +0000 UTC m=+0.166550742 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:09:37 compute-0 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:09:37 compute-0 sudo[180570]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:38 compute-0 podman[180652]: 2025-12-05 01:09:38.479443698 +0000 UTC m=+0.134334803 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:09:38 compute-0 sudo[180777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpoxjsvegmgdbtzkddrvlfyilgfopvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896978.2740579-654-259357196057952/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:38 compute-0 sudo[180777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:38 compute-0 python3.9[180779]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:39 compute-0 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec 05 01:09:39 compute-0 podman[180780]: 2025-12-05 01:09:39.207136808 +0000 UTC m=+0.211321132 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:09:39 compute-0 podman[180780]: 2025-12-05 01:09:39.242032878 +0000 UTC m=+0.246217112 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:09:39 compute-0 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:09:39 compute-0 sudo[180777]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:40 compute-0 sudo[180960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgscpqiindhfglsoveuwwyqrfktkxrkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896980.4426055-662-10425421859642/AnsiballZ_file.py'
Dec 05 01:09:40 compute-0 sudo[180960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:41 compute-0 python3.9[180962]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:41 compute-0 sudo[180960]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:41 compute-0 sudo[181112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjgzbfdgyddfjwpiqyhgwaszocsobgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896981.3805394-671-180005487531417/AnsiballZ_podman_container_info.py'
Dec 05 01:09:41 compute-0 sudo[181112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:42 compute-0 python3.9[181114]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 05 01:09:42 compute-0 sudo[181112]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:42 compute-0 sudo[181276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcwddnimjhuokqvevcztbfiawkxknlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896982.512486-679-101930494811199/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:42 compute-0 sudo[181276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:43 compute-0 python3.9[181278]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:43 compute-0 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec 05 01:09:43 compute-0 podman[181279]: 2025-12-05 01:09:43.340792844 +0000 UTC m=+0.145241534 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec 05 01:09:43 compute-0 podman[181279]: 2025-12-05 01:09:43.379103388 +0000 UTC m=+0.183552088 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec 05 01:09:43 compute-0 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:09:43 compute-0 sudo[181276]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:43 compute-0 podman[181297]: 2025-12-05 01:09:43.491341989 +0000 UTC m=+0.140722527 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:09:43 compute-0 podman[181294]: 2025-12-05 01:09:43.523079603 +0000 UTC m=+0.170618325 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:09:44 compute-0 sudo[181497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrfsubfbvynpllmxmrwoztoaojipxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896983.717267-687-144455522356985/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:44 compute-0 sudo[181497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:44 compute-0 python3.9[181499]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:44 compute-0 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec 05 01:09:44 compute-0 podman[181500]: 2025-12-05 01:09:44.729996563 +0000 UTC m=+0.178219936 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:09:44 compute-0 podman[181500]: 2025-12-05 01:09:44.764578044 +0000 UTC m=+0.212801347 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Dec 05 01:09:44 compute-0 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:09:44 compute-0 sudo[181497]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:45 compute-0 sudo[181677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfhvsdnnuzhywnymvoootyqbqbfxemmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896985.0883787-695-239116942368810/AnsiballZ_file.py'
Dec 05 01:09:45 compute-0 sudo[181677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:45 compute-0 python3.9[181679]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:45 compute-0 sudo[181677]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:46 compute-0 sudo[181829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqcqfvcanjivyqhnbmejpxgercgzxhps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896986.2080457-704-239061017352081/AnsiballZ_podman_container_info.py'
Dec 05 01:09:46 compute-0 sudo[181829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:46 compute-0 python3.9[181831]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 05 01:09:47 compute-0 sudo[181829]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:48 compute-0 sudo[181994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujhhmrszanvvssmbbchwffitmkcgrdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896987.4960437-712-268934936880132/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:48 compute-0 sudo[181994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:48 compute-0 podman[181996]: 2025-12-05 01:09:48.266738413 +0000 UTC m=+0.149107241 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:09:48 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:09:48 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Failed with result 'exit-code'.
Dec 05 01:09:48 compute-0 python3.9[181997]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:48 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:09:48 compute-0 podman[182016]: 2025-12-05 01:09:48.546868685 +0000 UTC m=+0.148520654 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:09:48 compute-0 podman[182016]: 2025-12-05 01:09:48.582129836 +0000 UTC m=+0.183781745 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:09:48 compute-0 sudo[181994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:48 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:09:49 compute-0 sudo[182195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udnyathprxkgtcfeidayqewpxnblihvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896988.962648-720-267259724417062/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:49 compute-0 sudo[182195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:49 compute-0 python3.9[182198]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:49 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:09:49 compute-0 podman[182199]: 2025-12-05 01:09:49.868700978 +0000 UTC m=+0.142734678 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:09:49 compute-0 podman[182199]: 2025-12-05 01:09:49.903053622 +0000 UTC m=+0.177087232 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:09:50 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:09:50 compute-0 sudo[182195]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:50 compute-0 podman[182312]: 2025-12-05 01:09:50.728574884 +0000 UTC m=+0.138993984 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, release-0.7.12=, com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 05 01:09:50 compute-0 sudo[182399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpfccshjpaynhoyijlngeudpuhyiova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896990.3602304-728-277851792742328/AnsiballZ_file.py'
Dec 05 01:09:50 compute-0 sudo[182399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:51 compute-0 python3.9[182401]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:51 compute-0 sudo[182399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:52 compute-0 sudo[182551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acazatmfgaampuddupkyzttcllbpffuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896991.6565177-737-18800535316225/AnsiballZ_podman_container_info.py'
Dec 05 01:09:52 compute-0 sudo[182551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:52 compute-0 python3.9[182553]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 05 01:09:52 compute-0 sudo[182551]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:53 compute-0 sudo[182715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atngcvhywrdqavzsyqmzwofpuigrjfrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896993.0086951-745-136204948066943/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:53 compute-0 sudo[182715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:53 compute-0 python3.9[182717]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:53 compute-0 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec 05 01:09:53 compute-0 podman[182718]: 2025-12-05 01:09:53.950318933 +0000 UTC m=+0.148286596 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, distribution-scope=public)
Dec 05 01:09:53 compute-0 podman[182718]: 2025-12-05 01:09:53.984981096 +0000 UTC m=+0.182948739 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec 05 01:09:54 compute-0 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:09:54 compute-0 sudo[182715]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:54 compute-0 podman[182734]: 2025-12-05 01:09:54.073598309 +0000 UTC m=+0.122283227 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:09:54 compute-0 sudo[182920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eipgoeflmbxnamdlowmcolqzzaslomgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896994.3618917-753-86314376829931/AnsiballZ_podman_container_exec.py'
Dec 05 01:09:54 compute-0 sudo[182920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:55 compute-0 python3.9[182922]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:09:55 compute-0 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec 05 01:09:55 compute-0 podman[182923]: 2025-12-05 01:09:55.329343393 +0000 UTC m=+0.167787059 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30)
Dec 05 01:09:55 compute-0 podman[182923]: 2025-12-05 01:09:55.364587604 +0000 UTC m=+0.203031240 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Dec 05 01:09:55 compute-0 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:09:55 compute-0 sudo[182920]: pam_unix(sudo:session): session closed for user root
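The exec burst above (01:09:54-01:09:55) is ansible-containers.podman.podman_container_exec probing the kepler container for its default group. A minimal CLI equivalent, assuming the module simply wraps the binary named in executable=podman:

    podman exec kepler id -g    # print the GID of the container's default user
    # detach=False and tty=False in the logged invocation match podman's
    # defaults, so no extra flags are needed.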
Dec 05 01:09:56 compute-0 sudo[183120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgjdtxeawtqgccrbxjdbfdkuyoocgyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896995.7363286-761-265613575764243/AnsiballZ_file.py'
Dec 05 01:09:56 compute-0 podman[183079]: 2025-12-05 01:09:56.288359322 +0000 UTC m=+0.122939027 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:09:56 compute-0 sudo[183120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:56 compute-0 python3.9[183129]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:56 compute-0 sudo[183120]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:57 compute-0 sudo[183280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhoqlqeytawmsmzojptvjzwzgxucfbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896996.9305294-770-26408476074086/AnsiballZ_file.py'
Dec 05 01:09:57 compute-0 sudo[183280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:57 compute-0 python3.9[183282]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:57 compute-0 sudo[183280]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:58 compute-0 sudo[183432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpjwtlyjxvgjkysujcflfysozjtbkdkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896997.9472897-778-271174939500151/AnsiballZ_stat.py'
Dec 05 01:09:58 compute-0 sudo[183432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:58 compute-0 python3.9[183434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:09:58 compute-0 sudo[183432]: pam_unix(sudo:session): session closed for user root
Dec 05 01:09:59 compute-0 sudo[183555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egkmubyhbzwuoksjmvczybzqnutqjtvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764896997.9472897-778-271174939500151/AnsiballZ_copy.py'
Dec 05 01:09:59 compute-0 sudo[183555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:09:59 compute-0 python3.9[183557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896997.9472897-778-271174939500151/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:09:59 compute-0 sudo[183555]: pam_unix(sudo:session): session closed for user root
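The stat/copy pair above is Ansible's idempotent file deploy: the remote file is checksummed first and rewritten only on mismatch. A rough shell sketch of the same check, reusing the paths, mode, and sha1 from the logged tasks (an illustration, not what AnsiballZ literally executes):

    want=40b8960d32c81de936cddbeb137a8240ecc54e7b
    have=$(sha1sum /var/lib/edpm-config/firewall/kepler.yaml 2>/dev/null | cut -d' ' -f1)
    if [ "$have" != "$want" ]; then
        # the staged source sits in the per-task tmp dir shown in the log
        install -m 0640 \
            /home/zuul/.ansible/tmp/ansible-tmp-1764896997.9472897-778-271174939500151/.source.yaml \
            /var/lib/edpm-config/firewall/kepler.yaml
    fi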
Dec 05 01:09:59 compute-0 podman[158197]: time="2025-12-05T01:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Dec 05 01:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2989 "" "Go-http-client/1.1"
Dec 05 01:10:00 compute-0 sudo[183707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxdfnqvppeggnroebckjfndtxrlutnrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897000.031764-794-261956352905305/AnsiballZ_file.py'
Dec 05 01:10:00 compute-0 sudo[183707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:00 compute-0 python3.9[183709]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:00 compute-0 sudo[183707]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:01 compute-0 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:10:01 compute-0 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:10:01 compute-0 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:10:01 compute-0 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:10:01 compute-0 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:10:01 compute-0 sudo[183859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqizolladpwyvchqobmfjhgylrqzjjop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897001.0632796-802-227072268621502/AnsiballZ_stat.py'
Dec 05 01:10:01 compute-0 sudo[183859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:01 compute-0 python3.9[183861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:01 compute-0 sudo[183859]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:02 compute-0 sudo[183937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqoipbzmmxmvujynhuebwxyftijwwuhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897001.0632796-802-227072268621502/AnsiballZ_file.py'
Dec 05 01:10:02 compute-0 sudo[183937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:02 compute-0 python3.9[183939]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:02 compute-0 sudo[183937]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:03 compute-0 sudo[184089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcesbekfrsrmrgbjsttiveluuuvpcilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897002.83899-814-82003341288018/AnsiballZ_stat.py'
Dec 05 01:10:03 compute-0 sudo[184089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:03 compute-0 python3.9[184091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:03 compute-0 sudo[184089]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:04 compute-0 sudo[184167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxqqznrwlsfuomzuitrsnmhjlcsttkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897002.83899-814-82003341288018/AnsiballZ_file.py'
Dec 05 01:10:04 compute-0 sudo[184167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:04 compute-0 python3.9[184169]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x9n4pzcd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:04 compute-0 sudo[184167]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:05 compute-0 sudo[184319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-invuqgdmdigfkyuzlftxemabzdvahjqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897004.5411787-826-259056763769461/AnsiballZ_stat.py'
Dec 05 01:10:05 compute-0 sudo[184319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:05 compute-0 python3.9[184321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:05 compute-0 sudo[184319]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:05 compute-0 sudo[184397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtsnurybsqmlahizechduhwmzwszbab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897004.5411787-826-259056763769461/AnsiballZ_file.py'
Dec 05 01:10:05 compute-0 sudo[184397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:05 compute-0 python3.9[184399]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:05 compute-0 sudo[184397]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:06 compute-0 sudo[184549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usmuhytkyalnqzjpagyhwyqjepzxfrpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897006.2738156-839-152390170784197/AnsiballZ_command.py'
Dec 05 01:10:06 compute-0 sudo[184549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:07 compute-0 python3.9[184551]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:10:07 compute-0 sudo[184549]: pam_unix(sudo:session): session closed for user root
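The `nft -j list ruleset` task above dumps the live ruleset as libnftables JSON (a single top-level "nftables" array), which the play can register and compare against the files under /var/lib/edpm-config/firewall. For inspection by hand the dump can be filtered, e.g. (the jq usage is an assumption; only the nft command itself appears in this run):

    nft -j list ruleset | jq '.nftables | length'                        # total objects
    nft -j list ruleset | jq '[.nftables[] | select(.chain)] | length'   # chains only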
Dec 05 01:10:08 compute-0 sudo[184702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkqajzeuudsvufsjencpthjmobzmmiie ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897007.4690182-847-161281134720046/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:10:08 compute-0 sudo[184702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:08 compute-0 python3[184704]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:10:08 compute-0 sudo[184702]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:08 compute-0 podman[184729]: 2025-12-05 01:10:08.706163254 +0000 UTC m=+0.106394754 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:10:09 compute-0 sudo[184876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxlksyfcnpmphxzaudrxftqyiibnumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897008.7347348-855-167240029131193/AnsiballZ_stat.py'
Dec 05 01:10:09 compute-0 sudo[184876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:09 compute-0 python3.9[184878]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:09 compute-0 sudo[184876]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:10 compute-0 sudo[184954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcbebqopclfeqlfskrtnqwhtehnfefsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897008.7347348-855-167240029131193/AnsiballZ_file.py'
Dec 05 01:10:10 compute-0 sudo[184954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:10 compute-0 python3.9[184956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:10 compute-0 sudo[184954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:11 compute-0 sudo[185106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqosekkzkrnaroutwkhnbzicscnmpmuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897010.6206987-867-159339083347049/AnsiballZ_stat.py'
Dec 05 01:10:11 compute-0 sudo[185106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:11 compute-0 python3.9[185108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:11 compute-0 sudo[185106]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:11 compute-0 sudo[185184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmqftlveggvttlbtzafsyxdhnxtmrjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897010.6206987-867-159339083347049/AnsiballZ_file.py'
Dec 05 01:10:11 compute-0 sudo[185184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:12 compute-0 python3.9[185186]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:12 compute-0 sudo[185184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:12 compute-0 sudo[185336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxxtraajzkmympykfgnllgorqjqehqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897012.328203-879-234639654968068/AnsiballZ_stat.py'
Dec 05 01:10:12 compute-0 sudo[185336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:12 compute-0 python3.9[185338]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:13 compute-0 sudo[185336]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:13 compute-0 sudo[185414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewqlwtkwhshwfkxrjqsouyyyrnicncy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897012.328203-879-234639654968068/AnsiballZ_file.py'
Dec 05 01:10:13 compute-0 sudo[185414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:13 compute-0 python3.9[185416]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:13 compute-0 sudo[185414]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:13 compute-0 podman[185417]: 2025-12-05 01:10:13.693676183 +0000 UTC m=+0.110213460 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 01:10:13 compute-0 podman[185418]: 2025-12-05 01:10:13.73338959 +0000 UTC m=+0.132827077 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:10:14 compute-0 sudo[185611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feazkzmjeiwiqksjaasqxmprflrtraut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897013.8985553-891-125438905664744/AnsiballZ_stat.py'
Dec 05 01:10:14 compute-0 sudo[185611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:14 compute-0 python3.9[185613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:14 compute-0 sudo[185611]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:15 compute-0 sudo[185689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlsiverimkeezudwxxyolqmtnizqdymx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897013.8985553-891-125438905664744/AnsiballZ_file.py'
Dec 05 01:10:15 compute-0 sudo[185689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:15 compute-0 python3.9[185691]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:15 compute-0 sudo[185689]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:16 compute-0 sudo[185841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkyzefrkdmcjihuboahucewqmtrmtswi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897015.8137627-903-20766571221428/AnsiballZ_stat.py'
Dec 05 01:10:16 compute-0 sudo[185841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:16 compute-0 python3.9[185843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:16 compute-0 sudo[185841]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:17 compute-0 sudo[185966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiaxyfhuilqkhiboykthcedwyfcwyttq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897015.8137627-903-20766571221428/AnsiballZ_copy.py'
Dec 05 01:10:17 compute-0 sudo[185966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:17 compute-0 python3.9[185968]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897015.8137627-903-20766571221428/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:17 compute-0 sudo[185966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:18 compute-0 sudo[186128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omgjkaiqwwbizrljwtqpuksvbxhnjrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897017.83911-918-135429585971391/AnsiballZ_file.py'
Dec 05 01:10:18 compute-0 sudo[186128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:18 compute-0 podman[186092]: 2025-12-05 01:10:18.588132846 +0000 UTC m=+0.132814397 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:10:18 compute-0 python3.9[186136]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:18 compute-0 sudo[186128]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:19 compute-0 sudo[186289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgegkchtztfvvymvmrwqrhllwwubcnmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897019.0814376-926-218760964319017/AnsiballZ_command.py'
Dec 05 01:10:19 compute-0 sudo[186289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:19 compute-0 python3.9[186291]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:10:19 compute-0 sudo[186289]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:20 compute-0 podman[186418]: 2025-12-05 01:10:20.948824672 +0000 UTC m=+0.113724326 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Dec 05 01:10:20 compute-0 sudo[186461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdwhcmobqmgbuikbzdljfiykemdggip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897020.19943-934-96896398022240/AnsiballZ_blockinfile.py'
Dec 05 01:10:20 compute-0 sudo[186461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:21 compute-0 python3.9[186463]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:21 compute-0 sudo[186461]: pam_unix(sudo:session): session closed for user root
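Given the blockinfile parameters logged above (marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN, marker_end=END, validate=nft -c -f %s), the managed block written to /etc/sysconfig/nftables.conf should read as follows; this is a reconstruction from those parameters, syntax-checked by nft before being committed:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK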
Dec 05 01:10:22 compute-0 sudo[186613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jztqoemsghljyuzhrcjdpakwwzmioxin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897021.523038-943-98974872420553/AnsiballZ_command.py'
Dec 05 01:10:22 compute-0 sudo[186613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:22 compute-0 python3.9[186615]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:10:22 compute-0 sudo[186613]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:23 compute-0 sudo[186766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsxyyfcanonbgogozuuevoplhvxzcoge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897022.5428674-951-105418427460707/AnsiballZ_stat.py'
Dec 05 01:10:23 compute-0 sudo[186766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:23 compute-0 python3.9[186768]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:10:23 compute-0 sudo[186766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:24 compute-0 sudo[186920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plkjipddbwnsgpccktysdxwgmmbvdwid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897023.6564634-959-281427007688412/AnsiballZ_command.py'
Dec 05 01:10:24 compute-0 sudo[186920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:24 compute-0 podman[186922]: 2025-12-05 01:10:24.383529352 +0000 UTC m=+0.131919939 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 05 01:10:24 compute-0 python3.9[186923]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:10:24 compute-0 sudo[186920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:25 compute-0 sudo[187096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcuufulngobewiledvphkmdiotsntxpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897024.7228694-967-178385068481093/AnsiballZ_file.py'
Dec 05 01:10:25 compute-0 sudo[187096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:25 compute-0 python3.9[187098]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:25 compute-0 sudo[187096]: pam_unix(sudo:session): session closed for user root
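Taken together, the nftables tasks from 01:10:19 to 01:10:25 implement a check-then-commit reload. The commands below are condensed verbatim from those tasks; the step-by-step commentary is an inference about the role's intent:

    # 1. validate the full candidate ruleset without committing anything (-c)
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. ensure the edpm chains exist before anything references them
    nft -f /etc/nftables/edpm-chains.nft
    # 3. flush the old rules and load the new ones in a single nft transaction
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # 4. clear the marker file that gated step 3
    rm -f /etc/nftables/edpm-rules.nft.changed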
Dec 05 01:10:25 compute-0 sshd-session[167966]: Connection closed by 192.168.122.30 port 34988
Dec 05 01:10:25 compute-0 sshd-session[167963]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:10:25 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 05 01:10:25 compute-0 systemd[1]: session-23.scope: Consumed 1min 59.058s CPU time.
Dec 05 01:10:25 compute-0 systemd-logind[792]: Session 23 logged out. Waiting for processes to exit.
Dec 05 01:10:25 compute-0 systemd-logind[792]: Removed session 23.
Dec 05 01:10:26 compute-0 podman[187123]: 2025-12-05 01:10:26.704810832 +0000 UTC m=+0.110878110 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:10:29 compute-0 podman[158197]: time="2025-12-05T01:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 05 01:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2994 "" "Go-http-client/1.1"
Dec 05 01:10:31 compute-0 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:10:31 compute-0 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:10:31 compute-0 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:10:31 compute-0 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:10:31 compute-0 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:10:31 compute-0 sshd-session[187147]: Accepted publickey for zuul from 192.168.122.30 port 46454 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:10:31 compute-0 systemd-logind[792]: New session 24 of user zuul.
Dec 05 01:10:31 compute-0 systemd[1]: Started Session 24 of User zuul.
Dec 05 01:10:31 compute-0 sshd-session[187147]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:10:33 compute-0 python3.9[187300]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:10:34 compute-0 sudo[187454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaeehamjfpieboxmlzwadbcqqjqftlrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897033.6677625-34-50337552464237/AnsiballZ_systemd.py'
Dec 05 01:10:34 compute-0 sudo[187454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:34 compute-0 python3.9[187456]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 05 01:10:35 compute-0 sudo[187454]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:35 compute-0 sudo[187607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peitkiimgkgtxdpzwoufxizzyyxhylmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897035.2838647-42-265604056841259/AnsiballZ_setup.py'
Dec 05 01:10:35 compute-0 sudo[187607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:36 compute-0 python3.9[187609]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:10:36 compute-0 sudo[187607]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:37 compute-0 sudo[187691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbiocepyvgxeaekqyqguvsyzzvuvbonj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897035.2838647-42-265604056841259/AnsiballZ_dnf.py'
Dec 05 01:10:37 compute-0 sudo[187691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:37 compute-0 python3.9[187693]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
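The dnf task above ensures rsyslog-openssl is present before rsyslog is reconfigured. With the logged module parameters (state=present, default repos, GPG checking enabled), the CLI equivalent is simply:

    dnf install -y rsyslog-openssl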
Dec 05 01:10:39 compute-0 podman[187695]: 2025-12-05 01:10:39.706452865 +0000 UTC m=+0.111244718 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.541 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.542 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:10:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
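Taken together, the DEBUG lines from 01:10:42 trace one complete polling cycle: each pollster is registered with the shared executor, the [local_instances] discovery runs and its empty result is cached and reused across pollsters, every pollster is skipped for lack of resources (no instances run on compute-0 yet), and each is finally reported finished. A condensed sketch of that control flow, with illustrative names rather than the real AgentManager internals:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # compute-0 hosts no instances yet, as in the log above

    DISCOVERIES = {"local_instances": discover_local_instances}

    def run_pollster(name, discovery, discovery_cache, history):
        # Discovery output is cached per method and shared by all pollsters,
        # which is why discovery appears in the log far less often than
        # registration does.
        if discovery not in discovery_cache:
            discovery_cache[discovery] = DISCOVERIES[discovery]()
        resources = discovery_cache[discovery]
        history[name] = []  # mirrors the growing "pollster history" dict above
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        # a real pollster would now collect samples from each resource

    executor = ThreadPoolExecutor(max_workers=1)
    cache, history = {}, {}
    for name in ("cpu", "memory.usage", "disk.device.read.bytes"):
        executor.submit(run_pollster, name, "local_instances", cache, history).result()
    print("Finished processing pollsters:", list(history))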
Dec 05 01:10:43 compute-0 sudo[187691]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:44 compute-0 podman[187806]: 2025-12-05 01:10:44.737330347 +0000 UTC m=+0.136689577 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:10:44 compute-0 podman[187813]: 2025-12-05 01:10:44.765819323 +0000 UTC m=+0.158080480 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 05 01:10:44 compute-0 sudo[187916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvazoxwkwmqjrvtqevtctawfiqigxwww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897044.2251117-54-91166646645123/AnsiballZ_stat.py'
Dec 05 01:10:44 compute-0 sudo[187916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:45 compute-0 python3.9[187918]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:45 compute-0 sudo[187916]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:45 compute-0 sudo[188039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtjssnujjcygowleupvqhoteebhrihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897044.2251117-54-91166646645123/AnsiballZ_copy.py'
Dec 05 01:10:45 compute-0 sudo[188039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:46 compute-0 python3.9[188041]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764897044.2251117-54-91166646645123/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:46 compute-0 sudo[188039]: pam_unix(sudo:session): session closed for user root
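The stat/copy pair above (ansible.legacy.stat with checksum_algorithm=sha1 at 01:10:45, then ansible.legacy.copy carrying checksum=1d88bab2...) is Ansible's idempotent file deployment: hash the destination first, copy only when the content differs. The same pattern repeats just below for /etc/rsyslog.d/10-telemetry.conf. A standalone sketch of the pattern (an illustration, not Ansible's internal implementation):

    import hashlib
    import os
    import shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest, mode=0o644):
        """Copy src to dest only when the destination checksum differs."""
        if os.path.exists(dest) and sha1_of(dest) == sha1_of(src):
            return False                 # unchanged -> task reports "ok"
        shutil.copy(src, dest)           # changed   -> task reports "changed"
        os.chmod(dest, mode)
        return True

    # e.g. copy_if_changed(".source.crt", "/etc/pki/rsyslog/ca-openshift.crt")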
Dec 05 01:10:47 compute-0 sudo[188191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjarsvirzhjjpaundrvjpiuuzvdkivij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897046.3719978-69-242596274784764/AnsiballZ_file.py'
Dec 05 01:10:47 compute-0 sudo[188191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:47 compute-0 python3.9[188193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:47 compute-0 sudo[188191]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:48 compute-0 sudo[188343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-japuxtzhudwqbeqzhqndozbtetjqanbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897047.520598-77-78670246744713/AnsiballZ_stat.py'
Dec 05 01:10:48 compute-0 sudo[188343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:48 compute-0 python3.9[188345]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:10:48 compute-0 sudo[188343]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:48 compute-0 podman[188440]: 2025-12-05 01:10:48.930854627 +0000 UTC m=+0.086106728 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:10:48 compute-0 sudo[188482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmsghkczgdtnydkmehyrmddempdsslq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897047.520598-77-78670246744713/AnsiballZ_copy.py'
Dec 05 01:10:48 compute-0 sudo[188482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:49 compute-0 python3.9[188487]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764897047.520598-77-78670246744713/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:10:49 compute-0 sudo[188482]: pam_unix(sudo:session): session closed for user root
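The ansible.legacy.stat followed by ansible.legacy.copy pair is Ansible's checksum gate: the destination is rewritten only when its sha1 differs from the checksum logged above. A sketch of the same logic in shell, using the staged source path from the copy entry:

    SRC=/home/zuul/.ansible/tmp/ansible-tmp-1764897047.520598-77-78670246744713/.source.conf
    WANT=76865d9dd4bf9cd322a47065c046bcac194645ab
    HAVE=$(sha1sum /etc/rsyslog.d/10-telemetry.conf 2>/dev/null | awk '{print $1}')
    # copy only when the destination is missing or its checksum differs
    [ "$HAVE" = "$WANT" ] || install -m 0644 "$SRC" /etc/rsyslog.d/10-telemetry.conf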
Dec 05 01:10:49 compute-0 sudo[188638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxemxctidcrjpioiicrplznulvpdbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897049.3980768-92-187961434746617/AnsiballZ_systemd.py'
Dec 05 01:10:49 compute-0 sudo[188638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:10:50 compute-0 python3.9[188640]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:10:50 compute-0 systemd[1]: Stopping System Logging Service...
Dec 05 01:10:50 compute-0 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec 05 01:10:50 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec 05 01:10:50 compute-0 systemd[1]: Stopped System Logging Service.
Dec 05 01:10:50 compute-0 systemd[1]: rsyslog.service: Consumed 1.818s CPU time, 5.2M memory peak, read 0B from disk, written 3.7M to disk.
Dec 05 01:10:50 compute-0 systemd[1]: Starting System Logging Service...
Dec 05 01:10:50 compute-0 rsyslogd[188644]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188644" x-info="https://www.rsyslog.com"] start
Dec 05 01:10:50 compute-0 systemd[1]: Started System Logging Service.
Dec 05 01:10:50 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:10:50 compute-0 rsyslogd[188644]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec 05 01:10:50 compute-0 rsyslogd[188644]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec 05 01:10:50 compute-0 rsyslogd[188644]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec 05 01:10:50 compute-0 sudo[188638]: pam_unix(sudo:session): session closed for user root
Dec 05 01:10:50 compute-0 rsyslogd[188644]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
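The restarted rsyslogd immediately opens a TLS connection to 172.17.0.80; the certificate/key warnings and the 'no shared curve' notice are consistent with anonymous client-side TLS. The deployed 10-telemetry.conf itself is not logged (content=NOT_LOGGING_PARAMETER above), but a hypothetical forwarding action consistent with these messages would be:

    # hypothetical sketch of /etc/rsyslog.d/10-telemetry.conf; the target
    # address is from the nsd_ossl messages, port and auth mode are guesses
    action(type="omfwd"
           target="172.17.0.80" port="6514" protocol="tcp"
           StreamDriver="ossl" StreamDriverMode="1"
           StreamDriverAuthMode="anon")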
Dec 05 01:10:51 compute-0 sshd-session[187150]: Connection closed by 192.168.122.30 port 46454
Dec 05 01:10:51 compute-0 sshd-session[187147]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:10:51 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 05 01:10:51 compute-0 systemd[1]: session-24.scope: Consumed 15.771s CPU time.
Dec 05 01:10:51 compute-0 systemd-logind[792]: Session 24 logged out. Waiting for processes to exit.
Dec 05 01:10:51 compute-0 systemd-logind[792]: Removed session 24.
Dec 05 01:10:51 compute-0 podman[188674]: 2025-12-05 01:10:51.242122718 +0000 UTC m=+0.118139406 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec 05 01:10:54 compute-0 podman[188695]: 2025-12-05 01:10:54.724515533 +0000 UTC m=+0.130061337 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 05 01:10:57 compute-0 podman[188716]: 2025-12-05 01:10:57.67616102 +0000 UTC m=+0.083431191 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:10:59 compute-0 podman[158197]: time="2025-12-05T01:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 05 01:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec 05 01:11:00 compute-0 sshd-session[188740]: Accepted publickey for zuul from 38.102.83.179 port 45342 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 01:11:00 compute-0 systemd-logind[792]: New session 25 of user zuul.
Dec 05 01:11:00 compute-0 systemd[1]: Started Session 25 of User zuul.
Dec 05 01:11:00 compute-0 sshd-session[188740]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:11:00 compute-0 sudo[188816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rponwaiurkveuwuvnlezsmgzcdojsbav ; /usr/bin/python3'
Dec 05 01:11:00 compute-0 sudo[188816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:00 compute-0 useradd[188820]: new group: name=ceph-admin, GID=42478
Dec 05 01:11:00 compute-0 useradd[188820]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 05 01:11:01 compute-0 sudo[188816]: pam_unix(sudo:session): session closed for user root
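The useradd records show the group being created first so the user can be attached to it. A shell equivalent using the logged IDs (creating the home directory with -m is an assumption; the log only records home=):

    groupadd -g 42478 ceph-admin
    useradd -u 42477 -g 42478 -d /home/ceph-admin -s /bin/bash -m ceph-admin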
Dec 05 01:11:01 compute-0 sudo[188902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pooqrimkigjezwdwziooruzkljmcgczp ; /usr/bin/python3'
Dec 05 01:11:01 compute-0 sudo[188902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
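These recurring exporter errors are expected on a compute node: ovn-northd only runs on the control plane, and with the kernel OVS datapath there is no dpif-netdev to report PMD statistics. Listing the socket directories the exporter mounts (see its volume list above) shows what is actually present:

    # list whatever control sockets exist in the directories the exporter
    # probes; absent sockets explain the corresponding errors above
    ls /var/run/openvswitch/*.ctl /var/lib/openvswitch/ovn/*.ctl 2>/dev/null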
Dec 05 01:11:01 compute-0 sudo[188902]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:01 compute-0 sudo[188975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpytjzifiidzwrgukotxzfpdqesoxxmz ; /usr/bin/python3'
Dec 05 01:11:01 compute-0 sudo[188975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:01 compute-0 sudo[188975]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:02 compute-0 sudo[189025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfuexrujbfvwyvymgtcivaxmtgvzhqxa ; /usr/bin/python3'
Dec 05 01:11:02 compute-0 sudo[189025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:02 compute-0 sudo[189025]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:02 compute-0 sudo[189051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfbqjuzrrtymcgoicgvretflkadpbrm ; /usr/bin/python3'
Dec 05 01:11:02 compute-0 sudo[189051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:02 compute-0 sudo[189051]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:02 compute-0 sudo[189077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnvqrewegflsrsqnjafopecjanktxmod ; /usr/bin/python3'
Dec 05 01:11:03 compute-0 sudo[189077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:03 compute-0 sudo[189077]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:03 compute-0 sudo[189103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgunorgipwgjsgzgjvxtxwpyyyjumcar ; /usr/bin/python3'
Dec 05 01:11:03 compute-0 sudo[189103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:03 compute-0 sudo[189103]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:04 compute-0 sudo[189181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpwqrabuawxsputrdvwwremramuuluqd ; /usr/bin/python3'
Dec 05 01:11:04 compute-0 sudo[189181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:04 compute-0 sudo[189181]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:04 compute-0 sudo[189254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljdfgpdzemjvcagescqmcvxqwanncvnh ; /usr/bin/python3'
Dec 05 01:11:04 compute-0 sudo[189254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:04 compute-0 sudo[189254]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:05 compute-0 sudo[189356]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexwthpfayxoglqjraqsjdzgxgaghugz ; /usr/bin/python3'
Dec 05 01:11:05 compute-0 sudo[189356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:05 compute-0 sudo[189356]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:05 compute-0 sudo[189429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvhbvgdvppspbisutezwzjbbeypcnrws ; /usr/bin/python3'
Dec 05 01:11:05 compute-0 sudo[189429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:05 compute-0 sudo[189429]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:06 compute-0 sudo[189479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocpbngrrgudrqabituulafcusieplwct ; /usr/bin/python3'
Dec 05 01:11:06 compute-0 sudo[189479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:06 compute-0 python3[189481]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:11:07 compute-0 sudo[189479]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:08 compute-0 sudo[189583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmszbemghxhsmcepdeysxdpstgcwfkbo ; /usr/bin/python3'
Dec 05 01:11:08 compute-0 sudo[189583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:08 compute-0 python3[189585]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 01:11:09 compute-0 sudo[189583]: pam_unix(sudo:session): session closed for user root
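This dnf task recurs once per OSD loop iteration (again at 01:11:15 and 01:11:21 below) and is idempotent: state=present installs only what is missing. By hand:

    # no-op when the packages are already installed
    dnf -y install util-linux lvm2 jq podman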
Dec 05 01:11:09 compute-0 sudo[189610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfjmrtefkfnwkzuqvjtajcivvfgunml ; /usr/bin/python3'
Dec 05 01:11:09 compute-0 sudo[189610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:09 compute-0 podman[189612]: 2025-12-05 01:11:09.955528788 +0000 UTC m=+0.098783701 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:11:10 compute-0 python3[189613]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:10 compute-0 sudo[189610]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:10 compute-0 sudo[189660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mriornafvkzcqfrlxxscarbaadbbdsse ; /usr/bin/python3'
Dec 05 01:11:10 compute-0 sudo[189660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:10 compute-0 python3[189662]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                           losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:10 compute-0 kernel: loop: module loaded
Dec 05 01:11:10 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 05 01:11:10 compute-0 sudo[189660]: pam_unix(sudo:session): session closed for user root
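The command block above builds the first OSD's backing store: dd with bs=1 count=0 seek=20G writes no data and only sets the file length, so /var/lib/ceph-osd-0.img is a 20 GiB sparse file, which losetup then attaches as /dev/loop3 (hence the kernel's 'detected capacity change' line). The sequence as logged:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk    # verify the loop device and its size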
Dec 05 01:11:10 compute-0 sudo[189695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpcseqgaswhuqqxsbfdmhhefokdsifs ; /usr/bin/python3'
Dec 05 01:11:10 compute-0 sudo[189695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:10 compute-0 python3[189697]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                           vgcreate ceph_vg0 /dev/loop3
                                           lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:11 compute-0 lvm[189700]: PV /dev/loop3 not used.
Dec 05 01:11:11 compute-0 lvm[189702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 01:11:11 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 05 01:11:11 compute-0 lvm[189710]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 05 01:11:11 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 05 01:11:11 compute-0 lvm[189712]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 01:11:11 compute-0 lvm[189712]: VG ceph_vg0 finished
Dec 05 01:11:11 compute-0 sudo[189695]: pam_unix(sudo:session): session closed for user root
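On top of the loop device the play builds a one-PV, one-VG, one-LV stack; the LV takes all free extents, so each OSD gets a single logical volume spanning its group. As invoked above:

    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs    # confirm ceph_lv0 occupies all of ceph_vg0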
Dec 05 01:11:11 compute-0 sudo[189788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtatzilpyayuecoqkuuvqqyemfhnneki ; /usr/bin/python3'
Dec 05 01:11:11 compute-0 sudo[189788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:12 compute-0 python3[189790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:11:12 compute-0 sudo[189788]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:12 compute-0 sudo[189861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbeummivxnopsvndlkogxjaxuvuqpga ; /usr/bin/python3'
Dec 05 01:11:12 compute-0 sudo[189861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:12 compute-0 python3[189863]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897071.5515091-36706-13281376886548/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:12 compute-0 sudo[189861]: pam_unix(sudo:session): session closed for user root
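The unit content is not logged (content=NOT_LOGGING_PARAMETER), but the bash/losetup output when it starts below suggests a oneshot unit that re-attaches the loop device after a reboot. A hypothetical sketch of ceph-osd-losetup-0.service:

    # hypothetical unit sketch; the real ceph-osd-losetup.service.j2 template
    # is not logged. 'losetup /dev/loop3' alone prints the attachment info
    # seen in the journal when the device is already set up.
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target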
Dec 05 01:11:13 compute-0 sudo[189911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvbvzkmvqccxgtnavjzksfqiajcdogub ; /usr/bin/python3'
Dec 05 01:11:13 compute-0 sudo[189911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:13 compute-0 python3[189913]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:11:14 compute-0 systemd[1]: Reloading.
Dec 05 01:11:14 compute-0 systemd-sysv-generator[189939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec 05 01:11:14 compute-0 systemd-rc-local-generator[189936]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:11:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 05 01:11:14 compute-0 bash[189955]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Dec 05 01:11:15 compute-0 podman[189952]: 2025-12-05 01:11:15.007280828 +0000 UTC m=+0.086507529 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:11:15 compute-0 podman[189954]: 2025-12-05 01:11:15.09880486 +0000 UTC m=+0.168951931 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 01:11:15 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 05 01:11:15 compute-0 sudo[189911]: pam_unix(sudo:session): session closed for user root
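The ansible.builtin.systemd task with state=started and enabled=True amounts to the following; the 'Reloading.' entry above shows the manager re-reading unit files so the just-written unit can be enabled and started:

    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service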
Dec 05 01:11:15 compute-0 lvm[189997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 01:11:15 compute-0 lvm[189997]: VG ceph_vg0 finished
Dec 05 01:11:15 compute-0 sudo[190021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwfnqdddotqycskrufunfovdrttvitbk ; /usr/bin/python3'
Dec 05 01:11:15 compute-0 sudo[190021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:15 compute-0 python3[190023]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 01:11:16 compute-0 sudo[190021]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:17 compute-0 sudo[190048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxecoafblikodtqwfxxgmxrvamrcgbq ; /usr/bin/python3'
Dec 05 01:11:17 compute-0 sudo[190048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:17 compute-0 python3[190050]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:17 compute-0 sudo[190048]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:17 compute-0 sudo[190074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhlnoozwoawhnovwmudnfeakfdhsxgwy ; /usr/bin/python3'
Dec 05 01:11:17 compute-0 sudo[190074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:17 compute-0 python3[190076]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                           losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:17 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 05 01:11:17 compute-0 sudo[190074]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:17 compute-0 sudo[190105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livluxpcazujkyosgjpxadhjvcpgdgde ; /usr/bin/python3'
Dec 05 01:11:17 compute-0 sudo[190105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:18 compute-0 python3[190107]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                           vgcreate ceph_vg1 /dev/loop4
                                           lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:18 compute-0 lvm[190110]: PV /dev/loop4 not used.
Dec 05 01:11:18 compute-0 lvm[190121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 01:11:18 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 05 01:11:18 compute-0 sudo[190105]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:18 compute-0 lvm[190123]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 05 01:11:18 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 05 01:11:18 compute-0 sudo[190199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbzdoyjuigkeqrjzeehtvrhpluluoha ; /usr/bin/python3'
Dec 05 01:11:18 compute-0 sudo[190199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:18 compute-0 python3[190201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:11:19 compute-0 sudo[190199]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:19 compute-0 sudo[190272]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sztwimhcovbhgiyugauywwrhhtwsrtvw ; /usr/bin/python3'
Dec 05 01:11:19 compute-0 sudo[190272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:19 compute-0 podman[190274]: 2025-12-05 01:11:19.523389731 +0000 UTC m=+0.113244266 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec 05 01:11:19 compute-0 python3[190275]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897078.5805826-36733-66576700948134/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:19 compute-0 sudo[190272]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:19 compute-0 sudo[190343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogsedrrpfwgffppfdfahwhcsczlgozbv ; /usr/bin/python3'
Dec 05 01:11:19 compute-0 sudo[190343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:20 compute-0 python3[190345]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:11:20 compute-0 systemd[1]: Reloading.
Dec 05 01:11:20 compute-0 systemd-sysv-generator[190376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec 05 01:11:20 compute-0 systemd-rc-local-generator[190372]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:11:20 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 05 01:11:20 compute-0 bash[190386]: /dev/loop4: [64513]:4330406 (/var/lib/ceph-osd-1.img)
Dec 05 01:11:20 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 05 01:11:20 compute-0 lvm[190387]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 01:11:20 compute-0 lvm[190387]: VG ceph_vg1 finished
Dec 05 01:11:20 compute-0 sudo[190343]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:20 compute-0 sudo[190411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaachkvapnqpitrvxkuuzdnoafonvtik ; /usr/bin/python3'
Dec 05 01:11:20 compute-0 sudo[190411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:21 compute-0 python3[190413]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 01:11:21 compute-0 podman[190415]: 2025-12-05 01:11:21.718794652 +0000 UTC m=+0.133754743 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec 05 01:11:22 compute-0 sudo[190411]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:22 compute-0 sudo[190456]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryberomthpwyrnvceymhyredajmcrqzg ; /usr/bin/python3'
Dec 05 01:11:22 compute-0 sudo[190456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:22 compute-0 python3[190458]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:22 compute-0 sudo[190456]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:23 compute-0 sudo[190482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlpifoungiqxnxcvnoxumughlwhlnihy ; /usr/bin/python3'
Dec 05 01:11:23 compute-0 sudo[190482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:23 compute-0 python3[190484]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                           losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                           lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:23 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 05 01:11:23 compute-0 sudo[190482]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:23 compute-0 sudo[190514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckvhcarfrfccekyjearkipzckhyhzfe ; /usr/bin/python3'
Dec 05 01:11:23 compute-0 sudo[190514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:23 compute-0 python3[190516]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                           vgcreate ceph_vg2 /dev/loop5
                                           lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                           lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:11:23 compute-0 lvm[190519]: PV /dev/loop5 not used.
Dec 05 01:11:23 compute-0 lvm[190521]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 01:11:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 05 01:11:23 compute-0 lvm[190528]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 05 01:11:23 compute-0 lvm[190532]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 01:11:23 compute-0 lvm[190532]: VG ceph_vg2 finished
Dec 05 01:11:23 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 05 01:11:23 compute-0 sudo[190514]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:24 compute-0 sudo[190608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tunpkhoqydrhhkwvgnucrjvttpotfxny ; /usr/bin/python3'
Dec 05 01:11:24 compute-0 sudo[190608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:24 compute-0 python3[190610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:11:24 compute-0 sudo[190608]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:24 compute-0 sudo[190681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnitjcnxykjuxlvmkdvtwebyarrvmdtw ; /usr/bin/python3'
Dec 05 01:11:24 compute-0 sudo[190681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:24 compute-0 podman[190683]: 2025-12-05 01:11:24.926549737 +0000 UTC m=+0.118684841 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec 05 01:11:24 compute-0 python3[190684]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897084.077308-36760-67253213508547/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:24 compute-0 sudo[190681]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:25 compute-0 sudo[190751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twflkbizqfsevjwbbwjwzduxbficbywn ; /usr/bin/python3'
Dec 05 01:11:25 compute-0 sudo[190751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:25 compute-0 python3[190753]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:11:25 compute-0 systemd[1]: Reloading.
Dec 05 01:11:25 compute-0 systemd-sysv-generator[190780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec 05 01:11:25 compute-0 systemd-rc-local-generator[190777]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:11:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 05 01:11:26 compute-0 bash[190793]: /dev/loop5: [64513]:4391047 (/var/lib/ceph-osd-2.img)
Dec 05 01:11:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 05 01:11:26 compute-0 lvm[190794]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 01:11:26 compute-0 lvm[190794]: VG ceph_vg2 finished
Dec 05 01:11:26 compute-0 sudo[190751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:28 compute-0 python3[190818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:11:28 compute-0 podman[190870]: 2025-12-05 01:11:28.673720268 +0000 UTC m=+0.095020644 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:11:29 compute-0 podman[158197]: time="2025-12-05T01:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 05 01:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Dec 05 01:11:30 compute-0 sudo[190940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknspogndkeicgrzdpdkeaeccgietxpg ; /usr/bin/python3'
Dec 05 01:11:30 compute-0 sudo[190940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:30 compute-0 python3[190942]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 05 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:11:33 compute-0 groupadd[190948]: group added to /etc/group: name=cephadm, GID=990
Dec 05 01:11:33 compute-0 groupadd[190948]: group added to /etc/gshadow: name=cephadm
Dec 05 01:11:33 compute-0 groupadd[190948]: new group: name=cephadm, GID=990
Dec 05 01:11:33 compute-0 useradd[190955]: new user: name=cephadm, UID=990, GID=990, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 05 01:11:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 01:11:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 01:11:35 compute-0 sudo[190940]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:35 compute-0 sudo[191067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilwddqrodauybdynbezqjqofyslayvg ; /usr/bin/python3'
Dec 05 01:11:35 compute-0 sudo[191067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:35 compute-0 python3[191069]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:35 compute-0 sudo[191067]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:35 compute-0 sudo[191095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzvrasjrvxvrfkkmpeylrxtmdusxssme ; /usr/bin/python3'
Dec 05 01:11:35 compute-0 sudo[191095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 01:11:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 01:11:35 compute-0 systemd[1]: run-r79e069bdae344900b779ae4a1d576cad.service: Deactivated successfully.
Dec 05 01:11:35 compute-0 python3[191097]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
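cephadm ls --no-detail prints a JSON inventory of Ceph daemons already deployed on the host; an empty list tells the playbook it is safe to proceed with bootstrap. A short sketch of consuming that output (assuming cephadm is installed, as the dnf task above ensured):

    import json, subprocess

    out = subprocess.run(["/usr/sbin/cephadm", "ls", "--no-detail"],
                         capture_output=True, text=True, check=True).stdout
    daemons = json.loads(out or "[]")
    print(f"{len(daemons)} ceph daemon(s) already deployed on this host")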
Dec 05 01:11:36 compute-0 sudo[191095]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:36 compute-0 sudo[191160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exhgsffichzzoxddwgepdciisgjiassg ; /usr/bin/python3'
Dec 05 01:11:36 compute-0 sudo[191160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:37 compute-0 python3[191162]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:37 compute-0 sudo[191160]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:37 compute-0 sudo[191186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkfllmwglahatcvgpyaxgbrvyugyxsz ; /usr/bin/python3'
Dec 05 01:11:37 compute-0 sudo[191186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:37 compute-0 python3[191188]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:37 compute-0 sudo[191186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:38 compute-0 sudo[191264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxnrqcyakhqmrwdnonoxvlvxkgohyokv ; /usr/bin/python3'
Dec 05 01:11:38 compute-0 sudo[191264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:38 compute-0 python3[191266]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:11:38 compute-0 sudo[191264]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:38 compute-0 sudo[191337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpgondezxqegryklsfmjggmdmhngsxwo ; /usr/bin/python3'
Dec 05 01:11:38 compute-0 sudo[191337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:38 compute-0 python3[191339]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897097.8567848-36917-80351689164863/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
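Each stat/copy pair above is Ansible's idempotent file transfer: the destination's SHA-1 is compared against the source checksum and the copy is skipped on a match. A minimal sketch of that comparison (the path and checksum are taken from the copy task above; the helper is illustrative):

    import hashlib, pathlib

    def sha1_of(path: str) -> str:
        return hashlib.sha1(pathlib.Path(path).read_bytes()).hexdigest()

    expected = "bb83c53af4ffd926a3f1eafe26a8be437df6401f"  # checksum from the copy task
    dest = "/home/ceph-admin/specs/ceph_spec.yaml"
    p = pathlib.Path(dest)
    needs_copy = (not p.exists()) or sha1_of(dest) != expected
    print("copy needed:", needs_copy)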
Dec 05 01:11:38 compute-0 sudo[191337]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:39 compute-0 sudo[191439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqmaycitrnrnmmrpetmqkgwljjprrgdo ; /usr/bin/python3'
Dec 05 01:11:39 compute-0 sudo[191439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:40 compute-0 python3[191441]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:11:40 compute-0 sudo[191439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:40 compute-0 sudo[191512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aznhihhvkaeculgiemzxpvdesthkwbeo ; /usr/bin/python3'
Dec 05 01:11:40 compute-0 sudo[191512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:40 compute-0 podman[191514]: 2025-12-05 01:11:40.678488009 +0000 UTC m=+0.104493365 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:11:40 compute-0 python3[191515]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897099.337382-36935-237579661061677/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:11:40 compute-0 sudo[191512]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:41 compute-0 sudo[191587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gykggrjcxhlhyraifrpnmasawlyjnxja ; /usr/bin/python3'
Dec 05 01:11:41 compute-0 sudo[191587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:41 compute-0 python3[191589]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:41 compute-0 sudo[191587]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:41 compute-0 sudo[191615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srcyvlqvtvrrwsoywmfhwziktyjrjvma ; /usr/bin/python3'
Dec 05 01:11:41 compute-0 sudo[191615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:41 compute-0 python3[191617]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:41 compute-0 sudo[191615]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:41 compute-0 sudo[191643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxrfafurgwztymtlaichgyzqpuztoim ; /usr/bin/python3'
Dec 05 01:11:41 compute-0 sudo[191643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:42 compute-0 python3[191645]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:11:42 compute-0 sudo[191643]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:42 compute-0 sudo[191671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcwjeigiuckpensydcrvmeqelldgvzjc ; /usr/bin/python3'
Dec 05 01:11:42 compute-0 sudo[191671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:11:42 compute-0 python3[191673]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
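This command is the heart of the section: cephadm bootstrap creates the cluster's first monitor and manager. --mon-ip seeds the initial monitor address, --fsid pins the cluster id, the --ssh-* flags hand cephadm a key pair and user for managing hosts, --single-host-defaults relaxes replication defaults for a one-node cluster, and the --skip-* flags keep the bootstrap minimal for CI. The same invocation expressed as an argument list (every flag is copied from the log; passing a list avoids the shell and the escaped dashes seen in the raw params):

    import subprocess

    cmd = [
        "/usr/sbin/cephadm", "bootstrap",
        "--skip-firewalld", "--skip-prepare-host",
        "--ssh-private-key", "/home/ceph-admin/.ssh/id_rsa",
        "--ssh-public-key", "/home/ceph-admin/.ssh/id_rsa.pub",
        "--ssh-user", "ceph-admin",
        "--allow-fqdn-hostname",
        "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
        "--output-config", "/etc/ceph/ceph.conf",
        "--fsid", "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "--config", "/home/ceph-admin/assimilate_ceph.conf",
        "--single-host-defaults",
        "--skip-monitoring-stack", "--skip-dashboard",
        "--mon-ip", "192.168.122.100",
    ]
    subprocess.run(cmd, check=True)  # same arguments as the task above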
Dec 05 01:11:42 compute-0 sshd-session[191689]: Accepted publickey for ceph-admin from 192.168.122.100 port 47702 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:11:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 05 01:11:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 05 01:11:42 compute-0 systemd-logind[792]: New session 26 of user ceph-admin.
Dec 05 01:11:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 05 01:11:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 05 01:11:42 compute-0 systemd[191693]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:11:43 compute-0 systemd[191693]: Queued start job for default target Main User Target.
Dec 05 01:11:43 compute-0 systemd[191693]: Created slice User Application Slice.
Dec 05 01:11:43 compute-0 systemd[191693]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 01:11:43 compute-0 systemd[191693]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 01:11:43 compute-0 systemd[191693]: Reached target Paths.
Dec 05 01:11:43 compute-0 systemd[191693]: Reached target Timers.
Dec 05 01:11:43 compute-0 systemd[191693]: Starting D-Bus User Message Bus Socket...
Dec 05 01:11:43 compute-0 systemd[191693]: Starting Create User's Volatile Files and Directories...
Dec 05 01:11:43 compute-0 systemd[191693]: Finished Create User's Volatile Files and Directories.
Dec 05 01:11:43 compute-0 systemd[191693]: Listening on D-Bus User Message Bus Socket.
Dec 05 01:11:43 compute-0 systemd[191693]: Reached target Sockets.
Dec 05 01:11:43 compute-0 systemd[191693]: Reached target Basic System.
Dec 05 01:11:43 compute-0 systemd[191693]: Reached target Main User Target.
Dec 05 01:11:43 compute-0 systemd[191693]: Startup finished in 175ms.
Dec 05 01:11:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 05 01:11:43 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 05 01:11:43 compute-0 sshd-session[191689]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:11:43 compute-0 sudo[191709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 05 01:11:43 compute-0 sudo[191709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:11:43 compute-0 sudo[191709]: pam_unix(sudo:session): session closed for user root
Dec 05 01:11:43 compute-0 sshd-session[191708]: Received disconnect from 192.168.122.100 port 47702:11: disconnected by user
Dec 05 01:11:43 compute-0 sshd-session[191708]: Disconnected from user ceph-admin 192.168.122.100 port 47702
Dec 05 01:11:43 compute-0 sshd-session[191689]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 05 01:11:43 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 05 01:11:43 compute-0 systemd-logind[792]: Session 26 logged out. Waiting for processes to exit.
Dec 05 01:11:43 compute-0 systemd-logind[792]: Removed session 26.
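The brief SSH session above, a login as ceph-admin from the monitor IP followed by a single sudo /bin/echo and a disconnect, is cephadm verifying that its SSH user can reach the host and escalate without a password before it deploys daemons. A sketch of the same probe (host, user, and key path come from the log; invoking the ssh CLI rather than cephadm's internal client is an assumption):

    import subprocess

    probe = subprocess.run(
        ["ssh", "-i", "/home/ceph-admin/.ssh/id_rsa",
         "-o", "BatchMode=yes", "ceph-admin@192.168.122.100",
         "sudo", "/bin/echo"],
        capture_output=True,
    )
    print("passwordless sudo over ssh:", "ok" if probe.returncode == 0 else "failed")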
Dec 05 01:11:45 compute-0 podman[191770]: 2025-12-05 01:11:45.670114128 +0000 UTC m=+0.084277216 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 05 01:11:45 compute-0 podman[191771]: 2025-12-05 01:11:45.708963061 +0000 UTC m=+0.119049739 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec 05 01:11:49 compute-0 podman[191833]: 2025-12-05 01:11:49.671236495 +0000 UTC m=+0.083566407 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec 05 01:11:52 compute-0 podman[191853]: 2025-12-05 01:11:52.726714692 +0000 UTC m=+0.130390035 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, release=1214.1726694543, version=9.4, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=edpm)
Dec 05 01:11:53 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 05 01:11:53 compute-0 systemd[191693]: Activating special unit Exit the Session...
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped target Main User Target.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped target Basic System.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped target Paths.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped target Sockets.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped target Timers.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 05 01:11:53 compute-0 systemd[191693]: Closed D-Bus User Message Bus Socket.
Dec 05 01:11:53 compute-0 systemd[191693]: Stopped Create User's Volatile Files and Directories.
Dec 05 01:11:53 compute-0 systemd[191693]: Removed slice User Application Slice.
Dec 05 01:11:53 compute-0 systemd[191693]: Reached target Shutdown.
Dec 05 01:11:53 compute-0 systemd[191693]: Finished Exit the Session.
Dec 05 01:11:53 compute-0 systemd[191693]: Reached target Exit the Session.
Dec 05 01:11:53 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 05 01:11:53 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 05 01:11:53 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 05 01:11:53 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 05 01:11:53 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 05 01:11:53 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 05 01:11:53 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 05 01:11:55 compute-0 podman[191875]: 2025-12-05 01:11:55.705936002 +0000 UTC m=+0.114392742 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:11:59 compute-0 podman[191895]: 2025-12-05 01:11:59.709202718 +0000 UTC m=+0.111147895 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
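Each health_status=healthy entry in this log is Podman's healthcheck timer running the command configured under healthcheck.test inside the container. The same check can be triggered on demand; exit code 0 means healthy (the container name is taken from the log line above):

    import subprocess

    # Exit code 0 means the configured test command succeeded.
    r = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if r.returncode == 0 else "unhealthy")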
Dec 05 01:11:59 compute-0 podman[158197]: time="2025-12-05T01:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec 05 01:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Dec 05 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:12:16 compute-0 podman[191933]: 2025-12-05 01:12:16.560543489 +0000 UTC m=+4.964706126 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:12:16 compute-0 podman[191746]: 2025-12-05 01:12:16.600741669 +0000 UTC m=+33.259050898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:16 compute-0 podman[191956]: 2025-12-05 01:12:16.697304247 +0000 UTC m=+0.114749142 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 05 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.724875355 +0000 UTC m=+0.069827495 container create ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:16 compute-0 podman[191957]: 2025-12-05 01:12:16.762301749 +0000 UTC m=+0.171722887 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.696821514 +0000 UTC m=+0.041773704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:16 compute-0 systemd[1]: Started libpod-conmon-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope.
Dec 05 01:12:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.86040501 +0000 UTC m=+0.205357250 container init ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.872678652 +0000 UTC m=+0.217630792 container start ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.878027127 +0000 UTC m=+0.222979267 container attach ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:12:17 compute-0 zen_goodall[192015]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec 05 01:12:17 compute-0 systemd[1]: libpod-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[191976]: 2025-12-05 01:12:17.196552864 +0000 UTC m=+0.541505034 container died ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ae68a9956e87d1e61773f661f138d561ab2f577464dfd36aa0e2f4fbcb6645d-merged.mount: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[191976]: 2025-12-05 01:12:17.272316678 +0000 UTC m=+0.617268818 container remove ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:17 compute-0 systemd[1]: libpod-conmon-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.402228761 +0000 UTC m=+0.089231351 container create 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.356710947 +0000 UTC m=+0.043713577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:17 compute-0 systemd[1]: Started libpod-conmon-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope.
Dec 05 01:12:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.542014781 +0000 UTC m=+0.229017431 container init 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.558045196 +0000 UTC m=+0.245047806 container start 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.566150906 +0000 UTC m=+0.253153566 container attach 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:17 compute-0 brave_black[192046]: 167 167
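The throw-away ceph:v18 containers with generated names (zen_goodall, brave_black, strange_cartwright, focused_jemison) are cephadm's one-shot probes: the first printed the Ceph version, this one reports the UID/GID the image runs Ceph as (167 167), and the next two emit freshly generated cephx keys, which is why base64 secrets appear in the log below. A sketch of the uid/gid probe (the exact command cephadm runs is an assumption; stat on /var/lib/ceph yields the same answer on these images):

    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("ceph image runs as uid/gid:", out)  # expect "167 167" on reef images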
Dec 05 01:12:17 compute-0 systemd[1]: libpod-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.572329223 +0000 UTC m=+0.259331813 container died 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5885bfb2b3b9ffa2b48dddbf2cabd3a3cea005bfc229cf066fffaaa966445ae3-merged.mount: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.627218711 +0000 UTC m=+0.314221311 container remove 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:17 compute-0 systemd[1]: libpod-conmon-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.762993002 +0000 UTC m=+0.089052405 container create da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.726328448 +0000 UTC m=+0.052387901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:17 compute-0 systemd[1]: Started libpod-conmon-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope.
Dec 05 01:12:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.901615521 +0000 UTC m=+0.227674894 container init da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.916979497 +0000 UTC m=+0.243038870 container start da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.922393374 +0000 UTC m=+0.248452747 container attach da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 05 01:12:17 compute-0 strange_cartwright[192081]: AQBxMTJp11UxORAAqP9btdAKgROyxK7Fdlo7XQ==
Dec 05 01:12:17 compute-0 systemd[1]: libpod-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope: Deactivated successfully.
Dec 05 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.967712303 +0000 UTC m=+0.293771696 container died da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-86464f0965cf29acc65b13ae1bc9a91d4243e966b25559e3a1a46bd0e24d66cf-merged.mount: Deactivated successfully.
Dec 05 01:12:18 compute-0 podman[192064]: 2025-12-05 01:12:18.043120377 +0000 UTC m=+0.369179790 container remove da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:12:18 compute-0 systemd[1]: libpod-conmon-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope: Deactivated successfully.
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.162081363 +0000 UTC m=+0.085111079 container create f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:18 compute-0 systemd[1]: Started libpod-conmon-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope.
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.130838756 +0000 UTC m=+0.053868522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.265106306 +0000 UTC m=+0.188136012 container init f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.275812397 +0000 UTC m=+0.198842113 container start f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.286783284 +0000 UTC m=+0.209813070 container attach f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:12:18 compute-0 focused_jemison[192116]: AQByMTJp2/MWEhAAdppAJmw8nfxov6zCgPjqyQ==
Dec 05 01:12:18 compute-0 systemd[1]: libpod-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope: Deactivated successfully.
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.31062107 +0000 UTC m=+0.233650746 container died f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5cc90c81018f9f5c081b5dd83e905babed498b81df64c3b1ddbde2fcdefb572-merged.mount: Deactivated successfully.
Dec 05 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.362742394 +0000 UTC m=+0.285772080 container remove f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:12:18 compute-0 systemd[1]: libpod-conmon-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope: Deactivated successfully.
Dec 05 01:12:18 compute-0 podman[192133]: 2025-12-05 01:12:18.48245974 +0000 UTC m=+0.081258704 container create 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 01:12:18 compute-0 podman[192133]: 2025-12-05 01:12:18.446037182 +0000 UTC m=+0.044836196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:18 compute-0 systemd[1]: Started libpod-conmon-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope.
Dec 05 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.202821372 +0000 UTC m=+0.801620296 container init 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.21231402 +0000 UTC m=+0.811112944 container start 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.217827929 +0000 UTC m=+0.816626873 container attach 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:12:19 compute-0 ecstatic_swanson[192148]: AQBzMTJphPXxDRAAHMI6rK1a3oVQzlpOqU7aqg==
Dec 05 01:12:19 compute-0 systemd[1]: libpod-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.239007463 +0000 UTC m=+0.837806407 container died 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-024e248e67405574eb33c918e2051ceed320aa0d8909a795c5de36ac91e88a8a-merged.mount: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.29678221 +0000 UTC m=+0.895581134 container remove 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:12:19 compute-0 systemd[1]: libpod-conmon-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.396833033 +0000 UTC m=+0.071810868 container create e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:19 compute-0 systemd[1]: Started libpod-conmon-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope.
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.365413661 +0000 UTC m=+0.040391366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd807a7d2cf47bca87331cf0f3fe77227cae1f18116eb9310b6c495f926a0e30/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.529590532 +0000 UTC m=+0.204568147 container init e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.53616583 +0000 UTC m=+0.211143445 container start e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.541093064 +0000 UTC m=+0.216070699 container attach e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 05 01:12:19 compute-0 gifted_raman[192183]: setting min_mon_release = pacific
Dec 05 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: set fsid to cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 05 01:12:19 compute-0 systemd[1]: libpod-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.56932305 +0000 UTC m=+0.244300675 container died e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd807a7d2cf47bca87331cf0f3fe77227cae1f18116eb9310b6c495f926a0e30-merged.mount: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.637402495 +0000 UTC m=+0.312380110 container remove e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:19 compute-0 systemd[1]: libpod-conmon-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope: Deactivated successfully.
Dec 05 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.734314303 +0000 UTC m=+0.057045998 container create 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:19 compute-0 systemd[1]: Started libpod-conmon-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope.
Dec 05 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.7098726 +0000 UTC m=+0.032604275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.866856027 +0000 UTC m=+0.189587702 container init 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.882528282 +0000 UTC m=+0.205259937 container start 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.888265057 +0000 UTC m=+0.210996712 container attach 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 01:12:19 compute-0 podman[192216]: 2025-12-05 01:12:19.912537166 +0000 UTC m=+0.127784316 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec 05 01:12:20 compute-0 systemd[1]: libpod-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope: Deactivated successfully.
Dec 05 01:12:20 compute-0 conmon[192224]: conmon 22f08eb77ff4cdf992fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope/container/memory.events
Dec 05 01:12:20 compute-0 podman[192202]: 2025-12-05 01:12:20.004573661 +0000 UTC m=+0.327305336 container died 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36-merged.mount: Deactivated successfully.
Dec 05 01:12:20 compute-0 podman[192202]: 2025-12-05 01:12:20.05838954 +0000 UTC m=+0.381121195 container remove 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:20 compute-0 systemd[1]: libpod-conmon-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope: Deactivated successfully.
Dec 05 01:12:20 compute-0 systemd[1]: Reloading.
Dec 05 01:12:20 compute-0 systemd-rc-local-generator[192305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:20 compute-0 systemd-sysv-generator[192310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:20 compute-0 systemd[1]: Reloading.
Dec 05 01:12:20 compute-0 systemd-rc-local-generator[192342]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:20 compute-0 systemd-sysv-generator[192345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:20 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 05 01:12:20 compute-0 systemd[1]: Reloading.
Dec 05 01:12:21 compute-0 systemd-sysv-generator[192383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:21 compute-0 systemd-rc-local-generator[192380]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:21 compute-0 systemd[1]: Reached target Ceph cluster cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:21 compute-0 systemd[1]: Reloading.
Dec 05 01:12:21 compute-0 systemd-rc-local-generator[192417]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:21 compute-0 systemd-sysv-generator[192420]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:21 compute-0 systemd[1]: Reloading.
Dec 05 01:12:21 compute-0 systemd-rc-local-generator[192459]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:21 compute-0 systemd-sysv-generator[192464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:22 compute-0 systemd[1]: Created slice Slice /system/ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:22 compute-0 systemd[1]: Reached target System Time Set.
Dec 05 01:12:22 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 05 01:12:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.533316946 +0000 UTC m=+0.082119888 container create 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.507604899 +0000 UTC m=+0.056407821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.696279425 +0000 UTC m=+0.245082427 container init 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.713606994 +0000 UTC m=+0.262409926 container start 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:22 compute-0 bash[192514]: 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175
Dec 05 01:12:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:22 compute-0 ceph-mon[192533]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: pidfile_write: ignore empty --pid-file
Dec 05 01:12:22 compute-0 ceph-mon[192533]: load: jerasure load: lrc 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Git sha 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB SUMMARY
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB Session ID:  MXAEOJG0GZXDO313546X
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                     Options.env: 0x559d75418c40
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                Options.info_log: 0x559d75f78e80
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                 Options.wal_dir: 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.write_buffer_manager: 0x559d75f88b40
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.row_cache: None
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.wal_filter: None
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.wal_compression: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_background_jobs: 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Compression algorithms supported:
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kZSTD supported: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.merge_operator: 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.compaction_filter: None
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559d75f78a80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x559d75f711f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.compression: NoCompression
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.num_levels: 7
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2a3e37e-222f-447f-af23-2a52f135922f
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142769007, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142771406, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "MXAEOJG0GZXDO313546X", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142771527, "job": 1, "event": "recovery_finished"}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559d75f9ae00
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB pointer 0x559d76024000
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x559d75f711f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 05 01:12:22 compute-0 ceph-mon[192533]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@-1(???) e0 preinit fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 05 01:12:22 compute-0 ceph-mon[192533]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 05 01:12:22 compute-0 ceph-mon[192533]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-05T01:12:19.941718Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).mds e1 new map
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mkfs cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 05 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.868395861 +0000 UTC m=+0.086367182 container create 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 05 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 05 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:22 compute-0 systemd[1]: Started libpod-conmon-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope.
Dec 05 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.83624292 +0000 UTC m=+0.054214341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.999459705 +0000 UTC m=+0.217431056 container init 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.016436325 +0000 UTC m=+0.234407646 container start 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.024164015 +0000 UTC m=+0.242135386 container attach 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:23 compute-0 podman[192586]: 2025-12-05 01:12:23.050365425 +0000 UTC m=+0.109033907 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:12:23 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 05 01:12:23 compute-0 ceph-mon[192533]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598696244' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:   cluster:
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     id:     cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     health: HEALTH_OK
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:  
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:   services:
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     mon: 1 daemons, quorum compute-0 (age 0.629197s)
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     mgr: no daemons active
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     osd: 0 osds: 0 up, 0 in
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:  
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:   data:
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     pools:   0 pools, 0 pgs
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     objects: 0 objects, 0 B
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     usage:   0 B used, 0 B / 0 B avail
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:     pgs:     
Dec 05 01:12:23 compute-0 eloquent_swanson[192589]:  
Dec 05 01:12:23 compute-0 systemd[1]: libpod-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope: Deactivated successfully.
Dec 05 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.496951654 +0000 UTC m=+0.714922995 container died 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d-merged.mount: Deactivated successfully.
Dec 05 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.576249415 +0000 UTC m=+0.794220746 container remove 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:23 compute-0 systemd[1]: libpod-conmon-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope: Deactivated successfully.
Dec 05 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.714074922 +0000 UTC m=+0.097052223 container create 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.663984784 +0000 UTC m=+0.046962135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:23 compute-0 systemd[1]: Started libpod-conmon-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope.
Dec 05 01:12:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.860019299 +0000 UTC m=+0.242996580 container init 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.877217575 +0000 UTC m=+0.260194836 container start 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.881602384 +0000 UTC m=+0.264579635 container attach 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:12:23 compute-0 ceph-mon[192533]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:23 compute-0 ceph-mon[192533]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 05 01:12:23 compute-0 ceph-mon[192533]: fsmap 
Dec 05 01:12:23 compute-0 ceph-mon[192533]: osdmap e1: 0 total, 0 up, 0 in
Dec 05 01:12:23 compute-0 ceph-mon[192533]: mgrmap e1: no daemons active
Dec 05 01:12:23 compute-0 ceph-mon[192533]: from='client.? 192.168.122.100:0/1598696244' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 01:12:24 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 05 01:12:24 compute-0 ceph-mon[192533]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 01:12:24 compute-0 ceph-mon[192533]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 01:12:24 compute-0 strange_shaw[192660]: 
Dec 05 01:12:24 compute-0 strange_shaw[192660]: [global]
Dec 05 01:12:24 compute-0 strange_shaw[192660]:         fsid = cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:24 compute-0 strange_shaw[192660]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 05 01:12:24 compute-0 strange_shaw[192660]:         osd_crush_chooseleaf_type = 0
Dec 05 01:12:24 compute-0 systemd[1]: libpod-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope: Deactivated successfully.
Dec 05 01:12:24 compute-0 podman[192686]: 2025-12-05 01:12:24.48886948 +0000 UTC m=+0.057423618 container died 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f-merged.mount: Deactivated successfully.
Dec 05 01:12:24 compute-0 podman[192686]: 2025-12-05 01:12:24.5674416 +0000 UTC m=+0.135995698 container remove 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:12:24 compute-0 systemd[1]: libpod-conmon-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope: Deactivated successfully.
Dec 05 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.676197909 +0000 UTC m=+0.064155100 container create c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:12:24 compute-0 systemd[1]: Started libpod-conmon-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope.
Dec 05 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.656535376 +0000 UTC m=+0.044492587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.835591701 +0000 UTC m=+0.223548902 container init c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.851694578 +0000 UTC m=+0.239651769 container start c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.855947843 +0000 UTC m=+0.243905034 container attach c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:24 compute-0 ceph-mon[192533]: from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 01:12:24 compute-0 ceph-mon[192533]: from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:12:25 compute-0 ceph-mon[192533]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559321731' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:12:25 compute-0 systemd[1]: libpod-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope: Deactivated successfully.
Dec 05 01:12:25 compute-0 podman[192701]: 2025-12-05 01:12:25.32609566 +0000 UTC m=+0.714052921 container died c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8-merged.mount: Deactivated successfully.
Dec 05 01:12:25 compute-0 podman[192701]: 2025-12-05 01:12:25.419815741 +0000 UTC m=+0.807772942 container remove c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:25 compute-0 systemd[1]: libpod-conmon-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope: Deactivated successfully.
Dec 05 01:12:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:12:25 compute-0 ceph-mon[192533]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 05 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 05 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 shutdown
Dec 05 01:12:25 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192529]: 2025-12-05T01:12:25.798+0000 7fd0ac7f5640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 05 01:12:25 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192529]: 2025-12-05T01:12:25.798+0000 7fd0ac7f5640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 05 01:12:25 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 01:12:25 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 01:12:25 compute-0 podman[192781]: 2025-12-05 01:12:25.974321626 +0000 UTC m=+0.246281378 container died 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4-merged.mount: Deactivated successfully.
Dec 05 01:12:26 compute-0 podman[192781]: 2025-12-05 01:12:26.029851742 +0000 UTC m=+0.301811454 container remove 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:26 compute-0 bash[192781]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0
Dec 05 01:12:26 compute-0 podman[192803]: 2025-12-05 01:12:26.165551611 +0000 UTC m=+0.120239381 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:12:26 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0.service: Deactivated successfully.
Dec 05 01:12:26 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:26 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0.service: Consumed 2.088s CPU time.
Dec 05 01:12:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.741071506 +0000 UTC m=+0.082011434 container create aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.707143496 +0000 UTC m=+0.048083454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.862385896 +0000 UTC m=+0.203325844 container init aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.87655795 +0000 UTC m=+0.217497878 container start aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:26 compute-0 bash[192895]: aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9
Dec 05 01:12:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:26 compute-0 ceph-mon[192914]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:12:26 compute-0 ceph-mon[192914]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: pidfile_write: ignore empty --pid-file
Dec 05 01:12:26 compute-0 ceph-mon[192914]: load: jerasure load: lrc 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Git sha 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB SUMMARY
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB Session ID:  4QDKSXZ9659NG2VXPQ9P
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                     Options.env: 0x5646352cfc40
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                Options.info_log: 0x5646377a5040
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                 Options.wal_dir: 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.write_buffer_manager: 0x5646377b4b40
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.row_cache: None
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.wal_filter: None
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.wal_compression: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_background_jobs: 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_total_wal_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.compaction_readahead_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Compression algorithms supported:
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kZSTD supported: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DMutex implementation: pthread_mutex_t
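The long run of "Options.<name>: <value>" lines above is RocksDB's DB-level option dump at open time. When auditing a mon store it is convenient to pull those pairs into a dict straight from journal text; a sketch, with a regex matched only against the format seen in this log:

# Sketch: extract RocksDB "Options.<name>: <value>" pairs from journalctl
# output such as the dump above. The pattern only claims to match this log's
# layout, not every RocksDB version.
import re
import sys

OPT = re.compile(r"rocksdb:\s+Options\.([A-Za-z0-9_.\[\]]+)\s*:\s*(.*?)\s*$")

def parse_options(lines):
    opts = {}
    for line in lines:
        m = OPT.search(line)
        if m:
            opts[m.group(1)] = m.group(2)
    return opts

if __name__ == "__main__":
    opts = parse_options(sys.stdin)
    print(opts.get("max_background_jobs"), opts.get("wal_recovery_mode"))

Feeding it, for example, `journalctl -t ceph-mon` output on stdin would print "2 2" for the dump above.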
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.merge_operator: 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.compaction_filter: None
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5646377a4c40)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56463779d1f0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.write_buffer_size: 33554432
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.max_write_buffer_number: 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.compression: NoCompression
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.num_levels: 7
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
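A few of the sizes in the column-family dump above are easier to read after a little arithmetic: the BinnedLRUCache capacity of 536870912 bytes is 512 MiB split into 2^4 = 16 shards of 32 MiB each, write_buffer_size 33554432 is a 32 MiB memtable, and max_bytes_for_level_base 268435456 is a 256 MiB L1 target. The arithmetic, with every constant copied verbatim from the dump:

# Sketch: sanity-check sizes from the column-family options dump above.
capacity = 536_870_912          # block_cache_options: capacity
num_shard_bits = 4              # block_cache_options: num_shard_bits
write_buffer = 33_554_432       # Options.write_buffer_size
level_base = 268_435_456        # Options.max_bytes_for_level_base

shards = 1 << num_shard_bits
print(f"block cache: {capacity / 2**20:.0f} MiB in {shards} shards "
      f"of {capacity / shards / 2**20:.0f} MiB")
print(f"memtable: {write_buffer / 2**20:.0f} MiB; "
      f"L1 target: {level_base / 2**20:.0f} MiB")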
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded, manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 5
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2a3e37e-222f-447f-af23-2a52f135922f
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146941467, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146946777, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897146, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146947081, "job": 1, "event": "recovery_finished"}
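The EVENT_LOG_v1 lines (recovery_started, table_file_creation, recovery_finished) are structured JSON after a fixed prefix, so recovery and flush events can be decoded mechanically; a sketch against the lines shown here (other event payloads are assumed to be plain JSON as well):

# Sketch: decode RocksDB "EVENT_LOG_v1 {...}" journal lines into dicts.
import json
import sys

MARKER = "EVENT_LOG_v1 "

def events(lines):
    for line in lines:
        i = line.find(MARKER)
        if i != -1:
            yield json.loads(line[i + len(MARKER):])

if __name__ == "__main__":
    for ev in events(sys.stdin):
        print(ev["event"], ev.get("file_number", ""))

On the three events above this prints recovery_started, table_file_creation 13, recovery_finished, matching WAL #9 being replayed into SST 000013 before the old log is deleted below.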
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5646377c6e00
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB pointer 0x5646378ce000
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.0 total, 0.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 2.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 2.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
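The stats dump ends with the block-cache summary; "capacity: 512.00 MB usage: 1.73 KB" shows the cache essentially empty right after open (the occupancy value of 18446744073709551615 is 2^64 - 1, which looks like an unsigned underflow of a zero counter rather than real occupancy). A sketch that pulls capacity and usage out of that line; unit suffixes beyond KB/MB/GB are assumed not to occur in this dump:

# Sketch: parse the "Block cache ... capacity ... usage ..." summary line.
import re

UNITS = {"KB": 2**10, "MB": 2**20, "GB": 2**30}
PAT = re.compile(r"capacity: ([\d.]+) (KB|MB|GB) usage: ([\d.]+) (KB|MB|GB)")

def cache_bytes(line: str):
    m = PAT.search(line)
    if not m:
        return None
    cap = float(m.group(1)) * UNITS[m.group(2)]
    use = float(m.group(3)) * UNITS[m.group(4)]
    return cap, use

line = ("Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 512.00 MB "
        "usage: 1.73 KB table_size: 0")
print(cache_bytes(line))  # (536870912.0, 1771.52)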
Dec 05 01:12:26 compute-0 ceph-mon[192914]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???) e1 preinit fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).mds e1 new map
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).mds e1 print_map
                                            e1
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: -1
                                             
                                            No filesystems configured
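print_map reports "No filesystems configured", consistent with a cluster that has a mon and (shortly) a mgr but no MDS or CephFS yet. The same check can be scripted; `ceph fs ls` is a real command, while the wrapper below is illustrative and assumes a usable ceph.conf and admin keyring, as mounted into the containers later in this log:

# Sketch: confirm "No filesystems configured" via the ceph CLI.
import json
import subprocess

def filesystems():
    out = subprocess.run(
        ["ceph", "fs", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    fss = filesystems()
    print(f"{len(fss)} filesystem(s):", [f["name"] for f in fss])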
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 05 01:12:26 compute-0 ceph-mon[192914]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 05 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 05 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap 
Dec 05 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 05 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
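With only one monitor in monmap e1, mon.compute-0 wins a standalone election and forms a quorum of one at rank 0, as the election messages above show. `ceph quorum_status` (a real command) reports the same membership; a sketch, error handling omitted:

# Sketch: read quorum membership the way the election messages report it.
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "quorum_status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout)
print("leader:", status["quorum_leader_name"])
print("quorum ranks:", status["quorum"])
print("mons:", [m["name"] for m in status["monmap"]["mons"]])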
Dec 05 01:12:26 compute-0 podman[192915]: 2025-12-05 01:12:26.98902683 +0000 UTC m=+0.069539107 container create 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:27 compute-0 ceph-mon[192914]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 05 01:12:27 compute-0 ceph-mon[192914]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 05 01:12:27 compute-0 ceph-mon[192914]: fsmap 
Dec 05 01:12:27 compute-0 ceph-mon[192914]: osdmap e1: 0 total, 0 up, 0 in
Dec 05 01:12:27 compute-0 ceph-mon[192914]: mgrmap e1: no daemons active
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:26.965175823 +0000 UTC m=+0.045688130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:27 compute-0 systemd[1]: Started libpod-conmon-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope.
Dec 05 01:12:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.163316525 +0000 UTC m=+0.243828842 container init 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.180520522 +0000 UTC m=+0.261032829 container start 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.189710651 +0000 UTC m=+0.270222968 container attach 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec 05 01:12:27 compute-0 systemd[1]: libpod-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope: Deactivated successfully.
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.676000486 +0000 UTC m=+0.756512813 container died 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f-merged.mount: Deactivated successfully.
Dec 05 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.752392948 +0000 UTC m=+0.832905215 container remove 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:12:27 compute-0 systemd[1]: libpod-conmon-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope: Deactivated successfully.
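Container 75f3b56a... goes through create, init, start, attach, died, remove in under a second: cephadm runs one-shot `ceph` commands (here `config set public_network`) in ephemeral containers, and the same cycle repeats immediately below for cluster_network. A sketch that groups those lifecycle events by container ID from journal text; the regex is matched against this log's podman line format only:

# Sketch: group podman lifecycle events by container ID from journal lines.
import re
import sys
from collections import defaultdict

EVENT = re.compile(
    r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

def lifecycles(lines):
    seen = defaultdict(list)
    for line in lines:
        m = EVENT.search(line)
        if m:
            seen[m.group(2)].append(m.group(1))
    return seen

if __name__ == "__main__":
    for cid, evs in lifecycles(sys.stdin).items():
        print(cid[:12], "->", " ".join(evs))

For the lines above this prints "75f3b56afab3 -> create init start attach died remove", the signature of a short-lived command container rather than a crashed daemon.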
Dec 05 01:12:27 compute-0 podman[193004]: 2025-12-05 01:12:27.860431327 +0000 UTC m=+0.062550107 container create 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:12:27 compute-0 systemd[1]: Started libpod-conmon-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope.
Dec 05 01:12:27 compute-0 podman[193004]: 2025-12-05 01:12:27.837509046 +0000 UTC m=+0.039627866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.041457456 +0000 UTC m=+0.243576306 container init 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.059064893 +0000 UTC m=+0.261183703 container start 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.06632813 +0000 UTC m=+0.268446980 container attach 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:12:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec 05 01:12:28 compute-0 systemd[1]: libpod-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope: Deactivated successfully.
Dec 05 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.617164215 +0000 UTC m=+0.819282985 container died 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988-merged.mount: Deactivated successfully.
Dec 05 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.691425928 +0000 UTC m=+0.893544738 container remove 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:12:28 compute-0 systemd[1]: libpod-conmon-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope: Deactivated successfully.
Dec 05 01:12:28 compute-0 systemd[1]: Reloading.
Dec 05 01:12:28 compute-0 systemd-rc-local-generator[193080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:28 compute-0 systemd-sysv-generator[193084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:29 compute-0 systemd[1]: Reloading.
Dec 05 01:12:29 compute-0 systemd-sysv-generator[193127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:12:29 compute-0 systemd-rc-local-generator[193123]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:12:29 compute-0 systemd[1]: Starting Ceph mgr.compute-0.afshmv for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:12:29 compute-0 podman[158197]: time="2025-12-05T01:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20380 "" "Go-http-client/1.1"
Dec 05 01:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3463 "" "Go-http-client/1.1"
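The two GET lines are the podman service answering its libpod REST API; the /v4.9.3/libpod/... paths come straight from the access log. The same endpoint can be queried over the podman socket with only the standard library; /run/podman/podman.sock is the usual rootful default and is an assumption here, while the endpoint path is copied from the log:

# Sketch: query the libpod REST API seen in the access-log lines above.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        # Connect over the podman unix socket instead of TCP.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])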
Dec 05 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.069130244 +0000 UTC m=+0.078415497 container create 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.044246569 +0000 UTC m=+0.053531912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/lib/ceph/mgr/ceph-compute-0.afshmv supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.192431677 +0000 UTC m=+0.201717010 container init 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.212331537 +0000 UTC m=+0.221616820 container start 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:30 compute-0 bash[193178]: 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24
Dec 05 01:12:30 compute-0 systemd[1]: Started Ceph mgr.compute-0.afshmv for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:12:30 compute-0 podman[193190]: 2025-12-05 01:12:30.275769717 +0000 UTC m=+0.137798977 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
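The health_status line above shows the edpm-managed node_exporter container healthy, published on host port 9100 with a TLS web config (--web.config.file). Scraping it is one HTTP GET; plain HTTP is an assumption in the sketch below, since the logged web config may enforce HTTPS on this host:

# Sketch: scrape the node_exporter published on :9100 per the config above.
import urllib.request

with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
    for line in r.read().decode().splitlines():
        if line.startswith("node_exporter_build_info"):
            print(line)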
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: pidfile_write: ignore empty --pid-file
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'alerts'
Dec 05 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.425862697 +0000 UTC m=+0.140736987 container create 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.33673569 +0000 UTC m=+0.051609990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:30 compute-0 systemd[1]: Started libpod-conmon-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope.
Dec 05 01:12:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.624484672 +0000 UTC m=+0.339358972 container init 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.645852432 +0000 UTC m=+0.360726722 container start 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.652976845 +0000 UTC m=+0.367851105 container attach 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'balancer'
Dec 05 01:12:30 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:30.711+0000 7fe0056c6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'cephadm'
Dec 05 01:12:30 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:30.966+0000 7fe0056c6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
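The repeated "missing NOTIFY_TYPES member" lines are benign startup noise: since Quincy, ceph-mgr modules may declare which cluster notifications they want via a NOTIFY_TYPES class attribute, and the mgr logs this warning for any module that never added it. A minimal sketch of the declaration, assuming the reef mgr_module API (only importable inside a running ceph-mgr):

    # Runnable only inside ceph-mgr, which provides the mgr_module package.
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Without this attribute the mgr logs
        # "Module example has missing NOTIFY_TYPES member" at load time.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.debug("got %s notification", notify_type)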
Dec 05 01:12:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200958537' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:31 compute-0 confident_elion[193259]: 
Dec 05 01:12:31 compute-0 confident_elion[193259]: {
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "health": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "status": "HEALTH_OK",
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "checks": {},
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "mutes": []
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "election_epoch": 5,
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "quorum": [
Dec 05 01:12:31 compute-0 confident_elion[193259]:         0
Dec 05 01:12:31 compute-0 confident_elion[193259]:     ],
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "quorum_names": [
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "compute-0"
Dec 05 01:12:31 compute-0 confident_elion[193259]:     ],
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "quorum_age": 4,
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "monmap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "epoch": 1,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "min_mon_release_name": "reef",
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_mons": 1
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "osdmap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "epoch": 1,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_osds": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_up_osds": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "osd_up_since": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_in_osds": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "osd_in_since": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_remapped_pgs": 0
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "pgmap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "pgs_by_state": [],
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_pgs": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_pools": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_objects": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "data_bytes": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "bytes_used": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "bytes_avail": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "bytes_total": 0
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "fsmap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "epoch": 1,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "by_rank": [],
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "up:standby": 0
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "mgrmap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "available": false,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "num_standbys": 0,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "modules": [
Dec 05 01:12:31 compute-0 confident_elion[193259]:             "iostat",
Dec 05 01:12:31 compute-0 confident_elion[193259]:             "nfs",
Dec 05 01:12:31 compute-0 confident_elion[193259]:             "restful"
Dec 05 01:12:31 compute-0 confident_elion[193259]:         ],
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "services": {}
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "servicemap": {
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "epoch": 1,
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:31 compute-0 confident_elion[193259]:         "services": {}
Dec 05 01:12:31 compute-0 confident_elion[193259]:     },
Dec 05 01:12:31 compute-0 confident_elion[193259]:     "progress_events": {}
Dec 05 01:12:31 compute-0 confident_elion[193259]: }
Dec 05 01:12:31 compute-0 systemd[1]: libpod-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope: Deactivated successfully.
Dec 05 01:12:31 compute-0 podman[193214]: 2025-12-05 01:12:31.16210983 +0000 UTC m=+0.876984100 container died 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/200958537' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05-merged.mount: Deactivated successfully.
Dec 05 01:12:31 compute-0 podman[193214]: 2025-12-05 01:12:31.25985242 +0000 UTC m=+0.974726680 container remove 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:12:31 compute-0 systemd[1]: libpod-conmon-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope: Deactivated successfully.
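The confident_elion block above is the full lifecycle of a throwaway cephadm container: quay.io/ceph/ceph:v18 is created, started, runs `ceph status --format json-pretty` against the new cluster, prints the JSON, and is removed, all within a second. The same JSON is convenient to gate deployment steps on; a minimal sketch, assuming a working ceph CLI and the admin keyring the log shows being mounted into these containers:

    import json
    import subprocess

    status = json.loads(
        subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
    )
    print(status["health"]["status"])    # "HEALTH_OK" in the output above
    print(status["monmap"]["num_mons"])  # 1: single-mon bootstrap cluster
    print(status["mgrmap"]["available"]) # False until a mgr goes active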
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:12:31 compute-0 openstack_network_exporter[160350]: 
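These exporter errors are expected on a compute node: openstack_network_exporter probes each daemon it knows through ovs-appctl-style unixctl control sockets, and neither ovn-northd nor a local ovsdb-server runs here, so no socket files exist (likewise there is no userspace datapath for the dpif-netdev calls). The lookup amounts to a glob over the runtime directories; a sketch, assuming the conventional /var/run/ovn and /var/run/openvswitch paths:

    import glob

    # unixctl sockets are named <daemon>.<pid>.ctl; if the glob is empty,
    # the exporter reports "no control socket files found" as above.
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")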
Dec 05 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'crash'
Dec 05 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'dashboard'
Dec 05 01:12:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:33.380+0000 7fe0056c6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.442132581 +0000 UTC m=+0.131024364 container create 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 05 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.400442171 +0000 UTC m=+0.089333994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:33 compute-0 systemd[1]: Started libpod-conmon-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope.
Dec 05 01:12:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.608557704 +0000 UTC m=+0.297449467 container init 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.61876091 +0000 UTC m=+0.307652663 container start 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.624563418 +0000 UTC m=+0.313455201 container attach 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:12:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845166885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:34 compute-0 tender_gagarin[193328]: 
Dec 05 01:12:34 compute-0 tender_gagarin[193328]: {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "health": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "status": "HEALTH_OK",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "checks": {},
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "mutes": []
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "election_epoch": 5,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "quorum": [
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         0
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     ],
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "quorum_names": [
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "compute-0"
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     ],
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "quorum_age": 7,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "monmap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "epoch": 1,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "min_mon_release_name": "reef",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_mons": 1
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "osdmap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "epoch": 1,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_osds": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_up_osds": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "osd_up_since": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_in_osds": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "osd_in_since": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_remapped_pgs": 0
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "pgmap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "pgs_by_state": [],
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_pgs": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_pools": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_objects": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "data_bytes": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "bytes_used": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "bytes_avail": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "bytes_total": 0
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "fsmap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "epoch": 1,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "by_rank": [],
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "up:standby": 0
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "mgrmap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "available": false,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "num_standbys": 0,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "modules": [
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:             "iostat",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:             "nfs",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:             "restful"
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         ],
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "services": {}
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "servicemap": {
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "epoch": 1,
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:         "services": {}
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     },
Dec 05 01:12:34 compute-0 tender_gagarin[193328]:     "progress_events": {}
Dec 05 01:12:34 compute-0 tender_gagarin[193328]: }
Dec 05 01:12:34 compute-0 systemd[1]: libpod-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope: Deactivated successfully.
Dec 05 01:12:34 compute-0 podman[193311]: 2025-12-05 01:12:34.056035457 +0000 UTC m=+0.744927280 container died 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:12:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3845166885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf-merged.mount: Deactivated successfully.
Dec 05 01:12:34 compute-0 podman[193311]: 2025-12-05 01:12:34.144792693 +0000 UTC m=+0.833684446 container remove 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:34 compute-0 systemd[1]: libpod-conmon-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope: Deactivated successfully.
Dec 05 01:12:34 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'devicehealth'
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.187+0000 7fe0056c6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: Improvements in the case of bugs are welcome, but full sub-interpreter support is not on the NumPy roadmap and may require significant effort to achieve.
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]:   from numpy import show_config as show_numpy_config
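The scipy warning above fires because ceph-mgr hosts each Python module in its own sub-interpreter, a configuration NumPy only nominally supports; diskprediction_local pulls in scipy, scipy imports NumPy, and the import emits the UserWarning. It is harmless unless the module actually misbehaves. To see which of the loaded modules are enabled, the cluster can be asked directly; assuming the same admin credentials used by the containers in this log:

    import json
    import subprocess

    mods = json.loads(
        subprocess.run(
            ["ceph", "mgr", "module", "ls", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
    )
    # Reef reports always_on_modules, enabled_modules and disabled_modules.
    print(mods["enabled_modules"])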
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'influx'
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.744+0000 7fe0056c6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'insights'
Dec 05 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.994+0000 7fe0056c6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'iostat'
Dec 05 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.310818753 +0000 UTC m=+0.104753431 container create c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.274768976 +0000 UTC m=+0.068703664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:36 compute-0 systemd[1]: Started libpod-conmon-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope.
Dec 05 01:12:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.520330914 +0000 UTC m=+0.314265612 container init c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'k8sevents'
Dec 05 01:12:36 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:36.524+0000 7fe0056c6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.537135979 +0000 UTC m=+0.331070657 container start c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.551000535 +0000 UTC m=+0.344935223 container attach c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:12:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940052462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:37 compute-0 distracted_villani[193382]: 
Dec 05 01:12:37 compute-0 distracted_villani[193382]: {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "health": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "status": "HEALTH_OK",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "checks": {},
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "mutes": []
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "election_epoch": 5,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "quorum": [
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         0
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     ],
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "quorum_names": [
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "compute-0"
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     ],
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "quorum_age": 10,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "monmap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "epoch": 1,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "min_mon_release_name": "reef",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_mons": 1
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "osdmap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "epoch": 1,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_osds": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_up_osds": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "osd_up_since": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_in_osds": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "osd_in_since": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_remapped_pgs": 0
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "pgmap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "pgs_by_state": [],
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_pgs": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_pools": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_objects": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "data_bytes": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "bytes_used": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "bytes_avail": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "bytes_total": 0
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "fsmap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "epoch": 1,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "by_rank": [],
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "up:standby": 0
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "mgrmap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "available": false,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "num_standbys": 0,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "modules": [
Dec 05 01:12:37 compute-0 distracted_villani[193382]:             "iostat",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:             "nfs",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:             "restful"
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         ],
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "services": {}
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "servicemap": {
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "epoch": 1,
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:37 compute-0 distracted_villani[193382]:         "services": {}
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     },
Dec 05 01:12:37 compute-0 distracted_villani[193382]:     "progress_events": {}
Dec 05 01:12:37 compute-0 distracted_villani[193382]: }
Dec 05 01:12:37 compute-0 systemd[1]: libpod-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope: Deactivated successfully.
Dec 05 01:12:37 compute-0 conmon[193382]: conmon c955be14b80ce1c8bff9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope/container/memory.events
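The conmon warning above is a harmless shutdown race: the container had already exited and systemd had torn down its scope, so by the time conmon went to read the cgroup's memory.events file (which it checks for OOM kills) the path was gone. While the scope exists the file is a flat key/value list; a minimal reader, assuming cgroup v2 mounted at /sys/fs/cgroup and the machine.slice layout from the log:

    from pathlib import Path

    def memory_events(scope: str) -> dict[str, int]:
        """Parse memory.events for a libpod scope; returns {} if the
        cgroup is already gone, the exact race conmon warns about."""
        path = (Path("/sys/fs/cgroup/machine.slice") / scope
                / "container" / "memory.events")
        try:
            text = path.read_text()
        except FileNotFoundError:
            return {}
        return {k: int(v) for k, v in (ln.split() for ln in text.splitlines())}

    # "libpod-example.scope" is a placeholder name, not from this log.
    print(memory_events("libpod-example.scope").get("oom_kill", 0))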
Dec 05 01:12:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/940052462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:37 compute-0 podman[193408]: 2025-12-05 01:12:37.117965988 +0000 UTC m=+0.052436412 container died c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac-merged.mount: Deactivated successfully.
Dec 05 01:12:37 compute-0 podman[193408]: 2025-12-05 01:12:37.201701769 +0000 UTC m=+0.136172113 container remove c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:37 compute-0 systemd[1]: libpod-conmon-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope: Deactivated successfully.
Dec 05 01:12:38 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'localpool'
Dec 05 01:12:38 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 01:12:39 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mirroring'
Dec 05 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.297215217 +0000 UTC m=+0.044087307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:39 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'nfs'
Dec 05 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.78638462 +0000 UTC m=+0.533256660 container create 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec 05 01:12:39 compute-0 systemd[1]: Started libpod-conmon-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope.
Dec 05 01:12:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.898762687 +0000 UTC m=+0.645634757 container init 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.906696593 +0000 UTC m=+0.653568653 container start 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.920649531 +0000 UTC m=+0.667521581 container attach 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185180056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:40 compute-0 heuristic_benz[193436]: 
Dec 05 01:12:40 compute-0 heuristic_benz[193436]: {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "health": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "status": "HEALTH_OK",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "checks": {},
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "mutes": []
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "election_epoch": 5,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "quorum": [
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         0
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     ],
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "quorum_names": [
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "compute-0"
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     ],
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "quorum_age": 13,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "monmap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "epoch": 1,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "min_mon_release_name": "reef",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_mons": 1
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "osdmap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "epoch": 1,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_osds": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_up_osds": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "osd_up_since": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_in_osds": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "osd_in_since": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_remapped_pgs": 0
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "pgmap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "pgs_by_state": [],
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_pgs": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_pools": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_objects": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "data_bytes": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "bytes_used": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "bytes_avail": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "bytes_total": 0
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "fsmap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "epoch": 1,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "by_rank": [],
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "up:standby": 0
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "mgrmap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "available": false,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "num_standbys": 0,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "modules": [
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:             "iostat",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:             "nfs",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:             "restful"
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         ],
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "services": {}
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "servicemap": {
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "epoch": 1,
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:         "services": {}
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     },
Dec 05 01:12:40 compute-0 heuristic_benz[193436]:     "progress_events": {}
Dec 05 01:12:40 compute-0 heuristic_benz[193436]: }
Dec 05 01:12:40 compute-0 systemd[1]: libpod-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope: Deactivated successfully.
Dec 05 01:12:40 compute-0 podman[193421]: 2025-12-05 01:12:40.328591162 +0000 UTC m=+1.075463242 container died 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1185180056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd-merged.mount: Deactivated successfully.
Dec 05 01:12:40 compute-0 podman[193421]: 2025-12-05 01:12:40.405405855 +0000 UTC m=+1.152277905 container remove 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:40.416+0000 7fe0056c6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 01:12:40 compute-0 ceph-mgr[193209]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 01:12:40 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'orchestrator'
Dec 05 01:12:40 compute-0 systemd[1]: libpod-conmon-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope: Deactivated successfully.
Dec 05 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.093+0000 7fe0056c6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.369+0000 7fe0056c6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_support'
Dec 05 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.605+0000 7fe0056c6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.891+0000 7fe0056c6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'progress'
Dec 05 01:12:42 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:42.133+0000 7fe0056c6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 01:12:42 compute-0 ceph-mgr[193209]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 01:12:42 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'prometheus'
Dec 05 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.538445991 +0000 UTC m=+0.075252301 container create 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.541 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.542 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
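The register_pollster_execution lines above all trace the same step: each stevedore-loaded extension is handed to one shared ThreadPoolExecutor together with per-cycle cache, history, and discovery-cache dictionaries. A minimal sketch of that pattern, with illustrative names only (this is not the actual ceilometer.polling.manager code):

    # Sketch of the registration pattern logged at manager.py:276: every
    # pollster from the [pollsters] source is submitted to one shared
    # ThreadPoolExecutor with per-cycle caches. Illustrative only.
    from concurrent.futures import ThreadPoolExecutor

    def run_polling_task(pollsters, threads=1):
        cache, history, discovery_cache = {}, {}, {}   # fresh each cycle
        futures = {}
        executor = ThreadPoolExecutor(max_workers=threads)
        for pollster in pollsters:
            # "Registering pollster [...] to be executed via executor [...]"
            future = executor.submit(
                pollster.run, cache, history, discovery_cache)  # .run is hypothetical
            futures[future] = pollster
        return executor, futures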
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
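Each "Executing discovery process"/"Skip pollster" pair above is one iteration of the same two-step loop: the local_instances discovery runs first, and an empty resource list makes the manager skip that pollster for the cycle, which is consistent with no VM instances running on compute-0 yet. A sketch of that control flow under the same assumptions (names are hypothetical; get_samples mirrors the pollster interface these classes expose):

    # Sketch of the discover-then-poll step logged at manager.py:294/321.
    def internal_pollster_run(manager, pollster, discovery_method):
        # "Executing discovery process for pollsters [...]"
        resources = manager.discover([discovery_method])
        if not resources:
            # "Skip pollster <name>, no resources found this cycle"
            print(f"Skip pollster {pollster.name}, "
                  "no resources found this cycle")
            return []
        return list(pollster.get_samples(manager, {}, resources))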
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
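The burst of "Finished processing pollster" lines is the completion phase of the same cycle: the manager drains the futures submitted earlier and logs each pollster as its future resolves. A sketch of that drain, assuming the futures dictionary from the registration sketch above:

    # Sketch of the completion phase logged at manager.py:272: wait on the
    # submitted futures and report each pollster as it finishes.
    from concurrent.futures import as_completed

    def wait_for_pollsters(futures):
        for future in as_completed(futures):        # iterates future keys
            name = futures[future].name             # .name is hypothetical
            future.result()                         # re-raise any failure
            print(f"Finished processing pollster [{name}].")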
Dec 05 01:12:42 compute-0 systemd[1]: Started libpod-conmon-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope.
Dec 05 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.516182098 +0000 UTC m=+0.052988408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
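The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mean these XFS filesystems were created without the bigtime feature, so inode timestamps are capped at the largest signed 32-bit epoch second. The cutoff the kernel prints is easy to verify:

    # 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit
    # the kernel reports for non-bigtime XFS filesystems.
    from datetime import datetime, timezone

    limit = 0x7fffffff                  # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00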
Dec 05 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.677694197 +0000 UTC m=+0.214500527 container init 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.6848225 +0000 UTC m=+0.221628810 container start 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.699994281 +0000 UTC m=+0.236800621 container attach 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291617161' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]: 
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]: {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "health": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "status": "HEALTH_OK",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "checks": {},
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "mutes": []
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "election_epoch": 5,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "quorum": [
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         0
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     ],
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "quorum_names": [
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "compute-0"
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     ],
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "quorum_age": 16,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "monmap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "epoch": 1,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "min_mon_release_name": "reef",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_mons": 1
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "osdmap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "epoch": 1,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_osds": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_up_osds": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "osd_up_since": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_in_osds": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "osd_in_since": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_remapped_pgs": 0
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "pgmap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "pgs_by_state": [],
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_pgs": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_pools": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_objects": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "data_bytes": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "bytes_used": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "bytes_avail": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "bytes_total": 0
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "fsmap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "epoch": 1,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "by_rank": [],
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "up:standby": 0
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "mgrmap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "available": false,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "num_standbys": 0,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "modules": [
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:             "iostat",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:             "nfs",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:             "restful"
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         ],
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "services": {}
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "servicemap": {
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "epoch": 1,
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:         "services": {}
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     },
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]:     "progress_events": {}
Dec 05 01:12:43 compute-0 stupefied_poitras[193489]: }
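The JSON above is the output of the one-shot "status" command dispatched to the mon: a throwaway quay.io/ceph/ceph:v18 container is started, runs a single ceph CLI call, prints, and is removed a second later (the "container died"/"container remove" events just below). A sketch of issuing the same query and checking the fields that matter here; the exact podman invocation is an assumption modeled on the pull/start/remove events in this log, not a copied command:

    # Sketch: run "ceph status" the way the short-lived container does and
    # inspect health, quorum, and OSD counts from the JSON above.
    import json, subprocess

    out = subprocess.run(
        ["podman", "run", "--rm",
         "-v", "/etc/ceph:/etc/ceph:z",   # ceph.conf + admin keyring (assumed mount)
         "quay.io/ceph/ceph:v18",
         "ceph", "status", "--format", "json-pretty"],
        check=True, capture_output=True, text=True).stdout

    status = json.loads(out)
    assert status["health"]["status"] == "HEALTH_OK"
    print("mons in quorum:", status["quorum_names"])      # ["compute-0"]
    print("osds up/in:", status["osdmap"]["num_up_osds"],
          status["osdmap"]["num_in_osds"])                # 0 0: no OSDs yet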
Dec 05 01:12:43 compute-0 systemd[1]: libpod-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope: Deactivated successfully.
Dec 05 01:12:43 compute-0 podman[193473]: 2025-12-05 01:12:43.118163779 +0000 UTC m=+0.654970159 container died 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:12:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3291617161' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032-merged.mount: Deactivated successfully.
Dec 05 01:12:43 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:43.190+0000 7fe0056c6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
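The recurring "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" messages (repeated below for rbd_support, rgw, rook, selftest, snap_schedule) are ceph-mgr noting that a module does not declare which cluster notifications its notify() hook consumes; the modules still load, so the warnings are noisy rather than fatal. A hedged skeleton of the declaration whose absence triggers the message; the mgr_module API names are summarized from memory and should be treated as assumptions:

    # Hypothetical ceph-mgr module skeleton. NOTIFY_TYPES advertises the
    # notifications this module handles; omitting it produces the
    # "has missing NOTIFY_TYPES member" warning seen above.
    from mgr_module import MgrModule, NotifyType  # only importable inside ceph-mgr

    class Module(MgrModule):
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification (%s)", notify_type, notify_id)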
Dec 05 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rbd_support'
Dec 05 01:12:43 compute-0 podman[193473]: 2025-12-05 01:12:43.191871167 +0000 UTC m=+0.728677477 container remove 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:12:43 compute-0 systemd[1]: libpod-conmon-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope: Deactivated successfully.
Dec 05 01:12:43 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:43.493+0000 7fe0056c6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'restful'
Dec 05 01:12:44 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rgw'
Dec 05 01:12:45 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:45.025+0000 7fe0056c6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 01:12:45 compute-0 ceph-mgr[193209]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 01:12:45 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rook'
Dec 05 01:12:45 compute-0 podman[193528]: 2025-12-05 01:12:45.266080899 +0000 UTC m=+0.036379138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.307+0000 7fe0056c6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'selftest'
Dec 05 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.501329045 +0000 UTC m=+2.271627294 container create 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:47 compute-0 systemd[1]: Started libpod-conmon-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope.
Dec 05 01:12:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.610+0000 7fe0056c6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'snap_schedule'
Dec 05 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.626834178 +0000 UTC m=+2.397132407 container init 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.644290852 +0000 UTC m=+2.414589061 container start 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.650356266 +0000 UTC m=+2.420654495 container attach 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:12:47 compute-0 podman[193542]: 2025-12-05 01:12:47.707944588 +0000 UTC m=+0.137854559 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 05 01:12:47 compute-0 podman[193545]: 2025-12-05 01:12:47.716241413 +0000 UTC m=+0.143786070 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:12:47 compute-0 podman[193546]: 2025-12-05 01:12:47.751050167 +0000 UTC m=+0.166175207 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.904+0000 7fe0056c6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'stats'
Dec 05 01:12:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146128602' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]: 
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]: {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "health": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "status": "HEALTH_OK",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "checks": {},
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "mutes": []
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "election_epoch": 5,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "quorum": [
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         0
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     ],
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "quorum_names": [
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "compute-0"
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     ],
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "quorum_age": 21,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "monmap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "epoch": 1,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "min_mon_release_name": "reef",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_mons": 1
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "osdmap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "epoch": 1,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_osds": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_up_osds": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "osd_up_since": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_in_osds": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "osd_in_since": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_remapped_pgs": 0
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "pgmap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "pgs_by_state": [],
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_pgs": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_pools": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_objects": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "data_bytes": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "bytes_used": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "bytes_avail": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "bytes_total": 0
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "fsmap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "epoch": 1,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "by_rank": [],
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "up:standby": 0
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "mgrmap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "available": false,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "num_standbys": 0,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "modules": [
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:             "iostat",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:             "nfs",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:             "restful"
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         ],
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "services": {}
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "servicemap": {
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "epoch": 1,
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:         "services": {}
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     },
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]:     "progress_events": {}
Dec 05 01:12:48 compute-0 heuristic_kilby[193547]: }
Dec 05 01:12:48 compute-0 systemd[1]: libpod-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope: Deactivated successfully.
Dec 05 01:12:48 compute-0 conmon[193547]: conmon 8544cce5a2fad0e7200f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope/container/memory.events
Dec 05 01:12:48 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3146128602' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:48 compute-0 podman[193635]: 2025-12-05 01:12:48.163807568 +0000 UTC m=+0.054799957 container died 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'status'
Dec 05 01:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f-merged.mount: Deactivated successfully.
Dec 05 01:12:48 compute-0 podman[193635]: 2025-12-05 01:12:48.237949179 +0000 UTC m=+0.128941538 container remove 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:48 compute-0 systemd[1]: libpod-conmon-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope: Deactivated successfully.
Dec 05 01:12:48 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:48.447+0000 7fe0056c6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telegraf'
Dec 05 01:12:48 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:48.699+0000 7fe0056c6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telemetry'
Dec 05 01:12:49 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:49.320+0000 7fe0056c6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 01:12:49 compute-0 ceph-mgr[193209]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 01:12:49 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.004+0000 7fe0056c6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'volumes'
Dec 05 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.381690995 +0000 UTC m=+0.090469523 container create 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.35054682 +0000 UTC m=+0.059325338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:50 compute-0 systemd[1]: Started libpod-conmon-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope.
Dec 05 01:12:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.568118726 +0000 UTC m=+0.276897254 container init 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:12:50 compute-0 podman[193665]: 2025-12-05 01:12:50.569560977 +0000 UTC m=+0.124547042 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.586295593 +0000 UTC m=+0.295074081 container start 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.592362686 +0000 UTC m=+0.301141394 container attach 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.695+0000 7fe0056c6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'zabbix'
Dec 05 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.926+0000 7fe0056c6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: ms_deliver_dispatch: unhandled message 0x559dd83f31e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.afshmv
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr handle_mgr_map Activating!
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr handle_mgr_map I am now activating
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.afshmv(active, starting, since 0.0182515s)
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"} v 0) v1
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: balancer
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: crash
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Starting
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:12:50
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: devicehealth
Dec 05 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Manager daemon compute-0.afshmv is now available
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: iostat
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Starting
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: nfs
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: orchestrator
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: pg_autoscaler
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: progress
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loading...
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] No stored events to load
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded [] historic events
Dec 05 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 01:12:50 compute-0 ceph-mon[192914]: Activating manager daemon compute-0.afshmv
Dec 05 01:12:50 compute-0 ceph-mon[192914]: mgrmap e2: compute-0.afshmv(active, starting, since 0.0182515s)
Dec 05 01:12:50 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec 05 01:12:50 compute-0 ceph-mon[192914]: Manager daemon compute-0.afshmv is now available
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] recovery thread starting
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] starting setup
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: rbd_support
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314212956' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: restful
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]: 
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: status
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]: {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "health": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "status": "HEALTH_OK",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "checks": {},
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "mutes": []
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "election_epoch": 5,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "quorum": [
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         0
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     ],
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "quorum_names": [
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "compute-0"
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     ],
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "quorum_age": 24,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "monmap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "epoch": 1,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "min_mon_release_name": "reef",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_mons": 1
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "osdmap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "epoch": 1,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_osds": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_up_osds": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "osd_up_since": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_in_osds": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "osd_in_since": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_remapped_pgs": 0
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "pgmap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "pgs_by_state": [],
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_pgs": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_pools": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_objects": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "data_bytes": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "bytes_used": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "bytes_avail": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "bytes_total": 0
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "fsmap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "epoch": 1,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "by_rank": [],
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "up:standby": 0
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "mgrmap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "available": false,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "num_standbys": 0,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "modules": [
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:             "iostat",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:             "nfs",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:             "restful"
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         ],
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "services": {}
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "servicemap": {
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "epoch": 1,
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:         "services": {}
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     },
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]:     "progress_events": {}
Dec 05 01:12:51 compute-0 hopeful_heyrovsky[193677]: }
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"} v 0) v1
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [restful WARNING root] server not running: no certificate configured
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: telemetry
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] PerfHandler: starting
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TaskHandler: starting
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"} v 0) v1
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] setup complete
Dec 05 01:12:51 compute-0 podman[193651]: 2025-12-05 01:12:51.043089972 +0000 UTC m=+0.751868440 container died 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:12:51 compute-0 systemd[1]: libpod-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope: Deactivated successfully.
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: volumes
Dec 05 01:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5-merged.mount: Deactivated successfully.
Dec 05 01:12:51 compute-0 podman[193651]: 2025-12-05 01:12:51.102614434 +0000 UTC m=+0.811392912 container remove 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:12:51 compute-0 systemd[1]: libpod-conmon-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope: Deactivated successfully.
Dec 05 01:12:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.afshmv(active, since 1.03193s)
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2314212956' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec 05 01:12:52 compute-0 ceph-mon[192914]: mgrmap e3: compute-0.afshmv(active, since 1.03193s)
Dec 05 01:12:52 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:12:53 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.afshmv(active, since 2s)
Dec 05 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.218617141 +0000 UTC m=+0.068867169 container create abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:53 compute-0 systemd[1]: Started libpod-conmon-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope.
Dec 05 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.197703457 +0000 UTC m=+0.047953465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.354075303 +0000 UTC m=+0.204325391 container init abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.370627504 +0000 UTC m=+0.220877532 container start abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.378428716 +0000 UTC m=+0.228678734 container attach abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:53 compute-0 podman[193817]: 2025-12-05 01:12:53.429418365 +0000 UTC m=+0.136075940 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, name=ubi9, release=1214.1726694543, version=9.4, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Dec 05 01:12:54 compute-0 ceph-mon[192914]: mgrmap e4: compute-0.afshmv(active, since 2s)
Dec 05 01:12:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 01:12:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287451921' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]: 
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]: {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "health": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "status": "HEALTH_OK",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "checks": {},
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "mutes": []
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "election_epoch": 5,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "quorum": [
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         0
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     ],
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "quorum_names": [
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "compute-0"
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     ],
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "quorum_age": 27,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "monmap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "epoch": 1,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "min_mon_release_name": "reef",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_mons": 1
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "osdmap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "epoch": 1,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_osds": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_up_osds": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "osd_up_since": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_in_osds": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "osd_in_since": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_remapped_pgs": 0
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "pgmap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "pgs_by_state": [],
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_pgs": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_pools": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_objects": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "data_bytes": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "bytes_used": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "bytes_avail": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "bytes_total": 0
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "fsmap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "epoch": 1,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "by_rank": [],
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "up:standby": 0
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "mgrmap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "available": true,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "num_standbys": 0,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "modules": [
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:             "iostat",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:             "nfs",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:             "restful"
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         ],
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "services": {}
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "servicemap": {
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "epoch": 1,
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "modified": "2025-12-05T01:12:22.836369+0000",
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:         "services": {}
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     },
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]:     "progress_events": {}
Dec 05 01:12:54 compute-0 intelligent_haslett[193820]: }
Dec 05 01:12:54 compute-0 systemd[1]: libpod-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope: Deactivated successfully.
Dec 05 01:12:54 compute-0 podman[193803]: 2025-12-05 01:12:54.057517174 +0000 UTC m=+0.907767212 container died abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf-merged.mount: Deactivated successfully.
Dec 05 01:12:54 compute-0 podman[193803]: 2025-12-05 01:12:54.11856788 +0000 UTC m=+0.968817888 container remove abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:12:54 compute-0 systemd[1]: libpod-conmon-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope: Deactivated successfully.
Dec 05 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.20649004 +0000 UTC m=+0.058032511 container create 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:12:54 compute-0 systemd[1]: Started libpod-conmon-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope.
Dec 05 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.181203251 +0000 UTC m=+0.032745802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.336182188 +0000 UTC m=+0.187724689 container init 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.3506636 +0000 UTC m=+0.202206051 container start 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.355341833 +0000 UTC m=+0.206884344 container attach 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:12:54 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:12:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 05 01:12:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/888022185' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
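
The `config assimilate-conf` command being dispatched here takes a ceph.conf-style file and moves its options into the monitors' centralized config database, which cephadm does once during bootstrap. A minimal sketch of the same call from Python; the paths are illustrative, and `-o` collects any options the monitors could not assimilate:

    import subprocess

    # Ingest a local ceph.conf into the mon config store; options that
    # cannot be assimilated are written back to the -o file for review.
    subprocess.run(
        ["ceph", "config", "assimilate-conf",
         "-i", "/etc/ceph/ceph.conf", "-o", "/tmp/unassimilated.conf"],
        check=True,
    )
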
Dec 05 01:12:55 compute-0 systemd[1]: libpod-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope: Deactivated successfully.
Dec 05 01:12:55 compute-0 podman[193876]: 2025-12-05 01:12:55.023860381 +0000 UTC m=+0.875402892 container died 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3287451921' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 01:12:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/888022185' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 01:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e-merged.mount: Deactivated successfully.
Dec 05 01:12:55 compute-0 podman[193876]: 2025-12-05 01:12:55.110833844 +0000 UTC m=+0.962376315 container remove 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:55 compute-0 systemd[1]: libpod-conmon-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope: Deactivated successfully.
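
The create, init, start, attach, died, remove sequence above is the footprint of a one-shot `podman run --rm`: cephadm launches a throwaway ceph container (eager_dubinsky) for a single CLI call and tears it down as soon as the command exits, which also explains the paired xfs remount and overlay unmount messages. A sketch of an equivalent invocation from Python, assuming the quay.io/ceph/ceph:v18 image and the host's /etc/ceph are available; the exact mounts cephadm uses are not shown in this log, so the flags below are illustrative:

    import subprocess

    def run_in_ceph_container(args):
        """One-shot ceph CLI call in a throwaway container, matching the
        create/start/attach/died/remove lifecycle in the log above."""
        cmd = [
            "podman", "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z",   # ceph.conf + admin keyring
            "quay.io/ceph/ceph:v18",
        ] + args
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    # e.g. the "status" call seen dispatched at 01:12:55:
    print(run_in_ceph_container(["ceph", "status", "--format", "json-pretty"]))
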
Dec 05 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.2320187 +0000 UTC m=+0.080012196 container create b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.200115843 +0000 UTC m=+0.048109339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:55 compute-0 systemd[1]: Started libpod-conmon-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope.
Dec 05 01:12:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.402462977 +0000 UTC m=+0.250456473 container init b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.417155755 +0000 UTC m=+0.265149251 container start b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.423948318 +0000 UTC m=+0.271941814 container attach b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:12:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec 05 01:12:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 05 01:12:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  1: '-n'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  2: 'mgr.compute-0.afshmv'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  3: '-f'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  4: '--setuser'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  5: 'ceph'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  6: '--setgroup'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  7: 'ceph'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  8: '--default-log-to-file=false'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  9: '--default-log-to-journald=true'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  exe_path /proc/self/exe
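
Enabling the cephadm module changed the set of active mgr modules, so ceph-mgr re-executes itself rather than reloading modules in place: it replays its saved argument vector (dumped line by line above) through /proc/self/exe, which pins the binary image already running even if /usr/bin/ceph-mgr was replaced on disk. A sketch of the same Linux re-exec idiom in Python; the argv handling is illustrative:

    import os, sys

    def respawn(argv):
        # Replace the current process with a fresh copy of itself.
        # /proc/self/exe refers to the running executable, mirroring the
        # "respawning with exe ... exe_path /proc/self/exe" lines above.
        os.execv("/proc/self/exe", argv)

    # respawn([sys.executable] + sys.argv)  # would restart this script
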
Dec 05 01:12:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.afshmv(active, since 5s)
Dec 05 01:12:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 05 01:12:56 compute-0 systemd[1]: libpod-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope: Deactivated successfully.
Dec 05 01:12:56 compute-0 podman[193929]: 2025-12-05 01:12:56.150474826 +0000 UTC m=+0.998468332 container died b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7-merged.mount: Deactivated successfully.
Dec 05 01:12:56 compute-0 podman[193929]: 2025-12-05 01:12:56.231384667 +0000 UTC m=+1.079378163 container remove b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: ignoring --setuser ceph since I am not root
Dec 05 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: ignoring --setgroup ceph since I am not root
Dec 05 01:12:56 compute-0 systemd[1]: libpod-conmon-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope: Deactivated successfully.
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: pidfile_write: ignore empty --pid-file
Dec 05 01:12:56 compute-0 podman[193981]: 2025-12-05 01:12:56.339223523 +0000 UTC m=+0.103256407 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter)
Dec 05 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.358260224 +0000 UTC m=+0.076520757 container create 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'alerts'
Dec 05 01:12:56 compute-0 systemd[1]: Started libpod-conmon-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope.
Dec 05 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.332249095 +0000 UTC m=+0.050509668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.488135887 +0000 UTC m=+0.206396410 container init 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.501843007 +0000 UTC m=+0.220103570 container start 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.507672343 +0000 UTC m=+0.225932896 container attach 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'balancer'
Dec 05 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:56.676+0000 7f1b6c4d6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:56.928+0000 7f1b6c4d6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'cephadm'
Dec 05 01:12:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 05 01:12:57 compute-0 ceph-mon[192914]: mgrmap e5: compute-0.afshmv(active, since 5s)
Dec 05 01:12:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 05 01:12:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586185734' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]: {
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]:     "epoch": 5,
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]:     "available": true,
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]:     "active_name": "compute-0.afshmv",
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]:     "num_standby": 0
Dec 05 01:12:57 compute-0 vigilant_johnson[194044]: }
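
The JSON block printed by the vigilant_johnson container is the reply to the `mgr stat` command dispatched at 01:12:57: the mgrmap epoch, whether an active mgr is available, its name, and the number of standbys (none yet in this single-node bootstrap). A sketch of consuming the same output programmatically, assuming admin credentials on the host and requesting `--format json` explicitly:

    import json, subprocess

    raw = subprocess.run(["ceph", "mgr", "stat", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    stat = json.loads(raw)
    if stat["available"]:
        print(stat["active_name"], "active,", stat["num_standby"], "standby")
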
Dec 05 01:12:57 compute-0 systemd[1]: libpod-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope: Deactivated successfully.
Dec 05 01:12:57 compute-0 podman[193999]: 2025-12-05 01:12:57.1916368 +0000 UTC m=+0.909897363 container died 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27-merged.mount: Deactivated successfully.
Dec 05 01:12:57 compute-0 podman[193999]: 2025-12-05 01:12:57.284317355 +0000 UTC m=+1.002577908 container remove 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:12:57 compute-0 systemd[1]: libpod-conmon-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope: Deactivated successfully.
Dec 05 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.421223908 +0000 UTC m=+0.088693683 container create bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.382567029 +0000 UTC m=+0.050036834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:12:57 compute-0 systemd[1]: Started libpod-conmon-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope.
Dec 05 01:12:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.586679563 +0000 UTC m=+0.254149318 container init bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.600616259 +0000 UTC m=+0.268085994 container start bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.605372774 +0000 UTC m=+0.272842539 container attach bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:12:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1586185734' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 01:12:58 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'crash'
Dec 05 01:12:59 compute-0 ceph-mgr[193209]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 01:12:59 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:59.227+0000 7f1b6c4d6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 05 01:12:59 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'dashboard'
Dec 05 01:12:59 compute-0 podman[158197]: time="2025-12-05T01:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23487 "" "Go-http-client/1.1"
Dec 05 01:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Dec 05 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'devicehealth'
Dec 05 01:13:00 compute-0 podman[194132]: 2025-12-05 01:13:00.736785723 +0000 UTC m=+0.141168564 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 01:13:00 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:00.975+0000 7f1b6c4d6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 05 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'diskprediction_local'
Dec 05 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 05 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 05 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]:   from numpy import show_config as show_numpy_config
Dec 05 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:01.491+0000 7f1b6c4d6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 05 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'influx'
Dec 05 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:01.735+0000 7f1b6c4d6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 05 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'insights'
Dec 05 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'iostat'
Dec 05 01:13:02 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:02.243+0000 7f1b6c4d6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 01:13:02 compute-0 ceph-mgr[193209]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 05 01:13:02 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'k8sevents'
Dec 05 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'localpool'
Dec 05 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mds_autoscaler'
Dec 05 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mirroring'
Dec 05 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'nfs'
Dec 05 01:13:05 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:05.916+0000 7f1b6c4d6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 05 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'orchestrator'
Dec 05 01:13:06 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:06.563+0000 7f1b6c4d6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 05 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_perf_query'
Dec 05 01:13:06 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:06.826+0000 7f1b6c4d6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 05 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_support'
Dec 05 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.058+0000 7f1b6c4d6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'pg_autoscaler'
Dec 05 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.331+0000 7f1b6c4d6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'progress'
Dec 05 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.564+0000 7f1b6c4d6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 05 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'prometheus'
Dec 05 01:13:08 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:08.570+0000 7f1b6c4d6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 05 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rbd_support'
Dec 05 01:13:08 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:08.873+0000 7f1b6c4d6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 05 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'restful'
Dec 05 01:13:09 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rgw'
Dec 05 01:13:10 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:10.312+0000 7f1b6c4d6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 01:13:10 compute-0 ceph-mgr[193209]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 05 01:13:10 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rook'
Dec 05 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.406+0000 7f1b6c4d6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'selftest'
Dec 05 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.646+0000 7f1b6c4d6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'snap_schedule'
Dec 05 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.920+0000 7f1b6c4d6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 05 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'stats'
Dec 05 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'status'
Dec 05 01:13:13 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:13.413+0000 7f1b6c4d6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 05 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telegraf'
Dec 05 01:13:13 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:13.653+0000 7f1b6c4d6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 05 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telemetry'
Dec 05 01:13:14 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:14.277+0000 7f1b6c4d6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 05 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'test_orchestrator'
Dec 05 01:13:14 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:14.972+0000 7f1b6c4d6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 05 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'volumes'
Dec 05 01:13:15 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:15.735+0000 7f1b6c4d6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 05 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'zabbix'
Dec 05 01:13:15 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:15.979+0000 7f1b6c4d6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 05 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
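
The long run of "Module X has missing NOTIFY_TYPES member" warnings above is informational: newer mgr releases expect each Python module to declare which cluster-map notifications it consumes via a NOTIFY_TYPES class attribute, and modules that omit it simply receive all notifications. A minimal sketch of a module that declares it, meant to run inside ceph-mgr's embedded interpreter; the class name and chosen notification types are illustrative:

    from mgr_module import MgrModule, NotifyType

    class ExampleModule(MgrModule):
        # Declaring this attribute is what silences the warning seen in
        # the log; it narrows delivery to just these notifications.
        NOTIFY_TYPES = [NotifyType.osd_map, NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.osd_map:
                self.log.info("osdmap changed")
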
Dec 05 01:13:15 compute-0 ceph-mgr[193209]: ms_deliver_dispatch: unhandled message 0x55c881e5d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 05 01:13:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Active manager daemon compute-0.afshmv restarted
Dec 05 01:13:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 05 01:13:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:13:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.afshmv
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr handle_mgr_map Activating!
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr handle_mgr_map I am now activating
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.afshmv(active, starting, since 0.0244957s)
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e1 all = 1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: Active manager daemon compute-0.afshmv restarted
Dec 05 01:13:16 compute-0 ceph-mon[192914]: Activating manager daemon compute-0.afshmv
Dec 05 01:13:16 compute-0 ceph-mon[192914]: osdmap e2: 0 total, 0 up, 0 in
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mgrmap e6: compute-0.afshmv(active, starting, since 0.0244957s)
Dec 05 01:13:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Manager daemon compute-0.afshmv is now available
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: balancer
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:13:16
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: cephadm
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: crash
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: devicehealth
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: iostat
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: nfs
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: orchestrator
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: pg_autoscaler
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: progress
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loading...
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] No stored events to load
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded [] historic events
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded OSDMap, ready.
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] recovery thread starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] starting setup
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: rbd_support
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: restful
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: status
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [restful INFO root] server_addr: :: server_port: 8003
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [restful WARNING root] server not running: no certificate configured
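
The restful module constructs successfully but, as the warning says, will not serve until a TLS certificate is configured. If the module is actually wanted, the usual fix is its built-in helper for generating a self-signed certificate; a sketch, assuming admin credentials on the host:

    import subprocess

    # Generate and store a self-signed cert for the restful module,
    # clearing the "no certificate configured" warning above.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
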
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: telemetry
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] PerfHandler: starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TaskHandler: starting
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"} v 0) v1
Dec 05 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] setup complete
Dec 05 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: volumes
Dec 05 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019927801 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:17 compute-0 ceph-mon[192914]: Manager daemon compute-0.afshmv is now available
Dec 05 01:13:17 compute-0 ceph-mon[192914]: Found migration_current of "None". Setting to last migration.
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec 05 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec 05 01:13:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.afshmv(active, since 1.09915s)
Dec 05 01:13:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 05 01:13:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 05 01:13:17 compute-0 vibrant_matsumoto[194097]: {
Dec 05 01:13:17 compute-0 vibrant_matsumoto[194097]:     "mgrmap_epoch": 7,
Dec 05 01:13:17 compute-0 vibrant_matsumoto[194097]:     "initialized": true
Dec 05 01:13:17 compute-0 vibrant_matsumoto[194097]: }
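Annotation: the short-lived bootstrap container (podman named it vibrant_matsumoto) polled the mgr and printed this readiness JSON: mgrmap epoch 7, initialized true. The audit lines show it issued the mgr's own mgr_status command; comparable information can be read back with standard commands, a sketch:

    # mon-side one-line summary of the active mgr
    ceph mgr stat
    # full mgrmap; its epoch matches the mgrmap_epoch printed above
    ceph mgr dump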
Dec 05 01:13:17 compute-0 systemd[1]: libpod-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope: Deactivated successfully.
Dec 05 01:13:17 compute-0 podman[194268]: 2025-12-05 01:13:17.190039167 +0000 UTC m=+0.043419846 container died bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226-merged.mount: Deactivated successfully.
Dec 05 01:13:17 compute-0 podman[194268]: 2025-12-05 01:13:17.262291861 +0000 UTC m=+0.115672520 container remove bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:13:17 compute-0 systemd[1]: libpod-conmon-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope: Deactivated successfully.
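Annotation: the create/init/start/attach/died/remove sequence above is the normal cephadm bootstrap pattern: each ceph CLI step runs in a throwaway quay.io/ceph/ceph:v18 container that exits as soon as the command returns. Roughly equivalent by hand (a sketch; the real invocations also bind-mount /var/log/ceph, as the xfs remount lines below confirm):

    podman run --rm \
        -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z \
        -v /etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:z \
        quay.io/ceph/ceph:v18 ceph -s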
Dec 05 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.379173774 +0000 UTC m=+0.066688427 container create f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:17 compute-0 systemd[1]: Started libpod-conmon-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope.
Dec 05 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.360200845 +0000 UTC m=+0.047715488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
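Annotation: the kernel lines above are informational, not errors. This XFS filesystem was created without the bigtime feature, so inode timestamps are representable only until 2038-01-19 (0x7fffffff); the message is emitted once per bind-mount remount, which is why it repeats for each file podman maps into the container. Assuming xfsprogs is installed, the feature flag can be checked with:

    # bigtime=1 means timestamps beyond 2038 are supported
    xfs_info / | grep -o 'bigtime=[01]'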
Dec 05 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.488007699 +0000 UTC m=+0.175522402 container init f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.507591076 +0000 UTC m=+0.195105719 container start f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.512987209 +0000 UTC m=+0.200501922 container attach f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:13:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec 05 01:13:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec 05 01:13:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
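Annotation: client.14142 dispatches orch set backend and the mon persists mgr/orchestrator/orchestrator, i.e. cephadm becomes the active orchestrator backend. The same step and its verification from an admin shell:

    ceph orch set backend cephadm
    # expect 'Backend: cephadm' and 'Available: Yes'
    ceph orch status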
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:13:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:18 compute-0 systemd[1]: libpod-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope: Deactivated successfully.
Dec 05 01:13:18 compute-0 podman[194283]: 2025-12-05 01:13:18.073216699 +0000 UTC m=+0.760731322 container died f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:18 compute-0 ceph-mon[192914]: mgrmap e7: compute-0.afshmv(active, since 1.09915s)
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e-merged.mount: Deactivated successfully.
Dec 05 01:13:18 compute-0 podman[194283]: 2025-12-05 01:13:18.12986839 +0000 UTC m=+0.817383013 container remove f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:13:18 compute-0 systemd[1]: libpod-conmon-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope: Deactivated successfully.
Dec 05 01:13:18 compute-0 podman[194325]: 2025-12-05 01:13:18.196382921 +0000 UTC m=+0.086120900 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
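Annotation: interleaved with the Ceph bootstrap, the edpm_ansible-managed OpenStack containers report periodic health probes; here ceilometer_agent_compute is healthy with a failing streak of 0. Podman health checks can be exercised and inspected directly (a sketch; the exact inspect field name varies across podman versions):

    # run the container's configured healthcheck now; exit 0 means healthy
    podman healthcheck run ceilometer_agent_compute && echo healthy
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute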
Dec 05 01:13:18 compute-0 podman[194357]: 2025-12-05 01:13:18.183257118 +0000 UTC m=+0.029551741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
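Annotation: the cephadm mgr module has started its CherryPy listeners, plain HTTP on 8765 and HTTPS on 7150 (cephadm's internal service-discovery and agent endpoints). The 'Client ... lost' entry records a peer that closed the TLS handshake early; during bootstrap this is typically a readiness probe and is harmless unless it recurs. A quick manual probe (a sketch; -k skips verification of the freshly generated cephadm certificate):

    curl -sk https://192.168.122.100:7150/ >/dev/null && echo tls-ok
    curl -s  http://192.168.122.100:8765/  >/dev/null && echo http-ok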
Dec 05 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.afshmv(active, since 3s)
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.116142253 +0000 UTC m=+0.962436866 container create 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:19 compute-0 ceph-mon[192914]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:19 compute-0 podman[194332]: 2025-12-05 01:13:19.13362995 +0000 UTC m=+1.020292771 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:13:19 compute-0 systemd[1]: Started libpod-conmon-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope.
Dec 05 01:13:19 compute-0 podman[194333]: 2025-12-05 01:13:19.178439374 +0000 UTC m=+1.058013584 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.226256824 +0000 UTC m=+1.072551427 container init 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.238016969 +0000 UTC m=+1.084311572 container start 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.242498886 +0000 UTC m=+1.088793509 container attach 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec 05 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_user
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 05 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec 05 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_config
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 05 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 05 01:13:19 compute-0 kind_hawking[194434]: ssh user set to ceph-admin. sudo will be used
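Annotation: the bootstrap sets the SSH identity cephadm will use to reach managed hosts; because ceph-admin is not root, cephadm notes that it will wrap remote commands in sudo. The same setting can be applied or changed later with:

    ceph cephadm set-user ceph-admin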
Dec 05 01:13:19 compute-0 systemd[1]: libpod-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope: Deactivated successfully.
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.79304553 +0000 UTC m=+1.639340143 container died 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67-merged.mount: Deactivated successfully.
Dec 05 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.842240189 +0000 UTC m=+1.688534792 container remove 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:13:19 compute-0 systemd[1]: libpod-conmon-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope: Deactivated successfully.
Dec 05 01:13:19 compute-0 podman[194474]: 2025-12-05 01:13:19.938216618 +0000 UTC m=+0.071992168 container create 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:13:19 compute-0 systemd[1]: Started libpod-conmon-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope.
Dec 05 01:13:19 compute-0 podman[194474]: 2025-12-05 01:13:19.907868955 +0000 UTC m=+0.041644585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.106402701 +0000 UTC m=+0.240178291 container init 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.118270918 +0000 UTC m=+0.252046488 container start 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.129757925 +0000 UTC m=+0.263533485 container attach 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec 05 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec 05 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec 05 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec 05 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 05 01:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:20 compute-0 ceph-mon[192914]: mgrmap e8: compute-0.afshmv(active, since 3s)
Dec 05 01:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec 05 01:13:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh private key
Dec 05 01:13:20 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh private key
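Annotation: the SSH keypair lands in the mon config-key store (mgr/cephadm/ssh_identity_key above). Supplying a pre-generated keypair by hand uses the same two calls the audit trail shows; the paths here are placeholders:

    ceph cephadm set-priv-key -i /path/to/cephadm-ssh-key
    ceph cephadm set-pub-key  -i /path/to/cephadm-ssh-key.pub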
Dec 05 01:13:20 compute-0 systemd[1]: libpod-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope: Deactivated successfully.
Dec 05 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.732416781 +0000 UTC m=+0.866192341 container died 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de-merged.mount: Deactivated successfully.
Dec 05 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.826237419 +0000 UTC m=+0.960012989 container remove 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:13:20 compute-0 systemd[1]: libpod-conmon-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope: Deactivated successfully.
Dec 05 01:13:20 compute-0 podman[194518]: 2025-12-05 01:13:20.875391666 +0000 UTC m=+0.114340272 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:13:20 compute-0 podman[194547]: 2025-12-05 01:13:20.913107329 +0000 UTC m=+0.059754330 container create 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:13:20 compute-0 systemd[1]: Started libpod-conmon-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope.
Dec 05 01:13:20 compute-0 podman[194547]: 2025-12-05 01:13:20.890437334 +0000 UTC m=+0.037084365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.030478876 +0000 UTC m=+0.177125867 container init 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.044294039 +0000 UTC m=+0.190941020 container start 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.04855431 +0000 UTC m=+0.195201291 container attach 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:21 compute-0 ceph-mon[192914]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:21 compute-0 ceph-mon[192914]: Set ssh ssh_user
Dec 05 01:13:21 compute-0 ceph-mon[192914]: Set ssh ssh_config
Dec 05 01:13:21 compute-0 ceph-mon[192914]: ssh user set to ceph-admin. sudo will be used
Dec 05 01:13:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:21 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec 05 01:13:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:21 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 05 01:13:21 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 05 01:13:21 compute-0 systemd[1]: libpod-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope: Deactivated successfully.
Dec 05 01:13:21 compute-0 conmon[194563]: conmon 36f20d9f1d82acfc21ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope/container/memory.events
Dec 05 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.625607518 +0000 UTC m=+0.772254539 container died 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772-merged.mount: Deactivated successfully.
Dec 05 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.691606525 +0000 UTC m=+0.838253506 container remove 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:13:21 compute-0 systemd[1]: libpod-conmon-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope: Deactivated successfully.
Dec 05 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.785162995 +0000 UTC m=+0.061065457 container create 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:13:21 compute-0 systemd[1]: Started libpod-conmon-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope.
Dec 05 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.759143155 +0000 UTC m=+0.035045597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.926751641 +0000 UTC m=+0.202654123 container init 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.937724313 +0000 UTC m=+0.213626775 container start 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.945145464 +0000 UTC m=+0.221047936 container attach 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:13:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053118 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:22 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:22 compute-0 ceph-mon[192914]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:22 compute-0 ceph-mon[192914]: Set ssh ssh_identity_key
Dec 05 01:13:22 compute-0 ceph-mon[192914]: Set ssh private key
Dec 05 01:13:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:22 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:22 compute-0 suspicious_colden[194615]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKqIArDaqP5VTY5pu+bmfkIDqlV1OYvK960Ala0m73bO7YePCy/ROElV++adpRzJ7pOq+eUbor3fKjFH5kWsqF8L0P2BfLW2r2c8+P7vZ9R4Ivrng8Nx6WakK4vWBs1QpVWZzhqHXtJ/pyKGffLpmqmHHQxZUa7q4afZEzDYWP4O1W7Qx6/WSnch9KPY5/tt3+Km2zGb22LAbaE7CGLyflQp1XSgpE+fQxa1BUhiHYOxaan2s/bRP5MtPlhpLfdOczqKJmYOUo7TqTOBb0NASnZQMqY3zIVZk1cx4/wBx4uggKUSPwLZoEpBbumKeSI9aPwk/lecqgDuB5udA14AxnJr3el3Vap09/C/mdPxfnie+g3aOK37H0zlFLZ9buWQ3LfHQBgztWMirSVnxvPDUuzbRi5lsPPnJZ4UcPq9d1GdVqDxUdiQ8RqxNMTtSeywa36Men4QQldL915BwCc9bcppQ3sxEvNnH2EpWWBs8FvWrXP54/oUIWBua9+YBk/mU= zuul@controller
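Annotation: as a final check the bootstrap reads the stored public key back (cephadm get-pub-key) and prints it, the zuul@controller key above. This is the key that must be present in ceph-admin's authorized_keys on every host cephadm manages, e.g.:

    ceph cephadm get-pub-key | sudo tee -a /home/ceph-admin/.ssh/authorized_keys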
Dec 05 01:13:22 compute-0 systemd[1]: libpod-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope: Deactivated successfully.
Dec 05 01:13:22 compute-0 podman[194641]: 2025-12-05 01:13:22.566498901 +0000 UTC m=+0.037463256 container died 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7-merged.mount: Deactivated successfully.
Dec 05 01:13:22 compute-0 podman[194641]: 2025-12-05 01:13:22.636698447 +0000 UTC m=+0.107662762 container remove 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:22 compute-0 systemd[1]: libpod-conmon-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope: Deactivated successfully.
Dec 05 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.764381587 +0000 UTC m=+0.080908961 container create e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.723409292 +0000 UTC m=+0.039936726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:22 compute-0 systemd[1]: Started libpod-conmon-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope.
Dec 05 01:13:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.945668962 +0000 UTC m=+0.262196376 container init e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.954650038 +0000 UTC m=+0.271177392 container start e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.959787504 +0000 UTC m=+0.276314858 container attach e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:23 compute-0 ceph-mon[192914]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:23 compute-0 ceph-mon[192914]: Set ssh ssh_identity_pub
Dec 05 01:13:23 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:23 compute-0 podman[194699]: 2025-12-05 01:13:23.703320286 +0000 UTC m=+0.112092229 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:13:23 compute-0 sshd-session[194710]: Accepted publickey for ceph-admin from 192.168.122.100 port 32886 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:23 compute-0 systemd-logind[792]: New session 28 of user ceph-admin.
Dec 05 01:13:23 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 05 01:13:23 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 05 01:13:23 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 05 01:13:23 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 05 01:13:23 compute-0 systemd[194721]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:23 compute-0 sshd-session[194726]: Accepted publickey for ceph-admin from 192.168.122.100 port 32896 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:23 compute-0 systemd-logind[792]: New session 30 of user ceph-admin.
Dec 05 01:13:23 compute-0 systemd[194721]: Queued start job for default target Main User Target.
Dec 05 01:13:23 compute-0 systemd[194721]: Created slice User Application Slice.
Dec 05 01:13:23 compute-0 systemd[194721]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 05 01:13:23 compute-0 systemd[194721]: Started Daily Cleanup of User's Temporary Directories.
Dec 05 01:13:23 compute-0 systemd[194721]: Reached target Paths.
Dec 05 01:13:23 compute-0 systemd[194721]: Reached target Timers.
Dec 05 01:13:23 compute-0 systemd[194721]: Starting D-Bus User Message Bus Socket...
Dec 05 01:13:23 compute-0 systemd[194721]: Starting Create User's Volatile Files and Directories...
Dec 05 01:13:23 compute-0 systemd[194721]: Listening on D-Bus User Message Bus Socket.
Dec 05 01:13:23 compute-0 systemd[194721]: Reached target Sockets.
Dec 05 01:13:23 compute-0 systemd[194721]: Finished Create User's Volatile Files and Directories.
Dec 05 01:13:23 compute-0 systemd[194721]: Reached target Basic System.
Dec 05 01:13:23 compute-0 systemd[194721]: Reached target Main User Target.
Dec 05 01:13:23 compute-0 systemd[194721]: Startup finished in 168ms.
Dec 05 01:13:23 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 05 01:13:24 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 05 01:13:24 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 05 01:13:24 compute-0 sshd-session[194710]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:24 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:24 compute-0 sshd-session[194726]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:24 compute-0 ceph-mon[192914]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:24 compute-0 sudo[194741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:24 compute-0 sudo[194741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:24 compute-0 sudo[194741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:24 compute-0 sudo[194766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:24 compute-0 sudo[194766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:24 compute-0 sudo[194766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:24 compute-0 sshd-session[194791]: Accepted publickey for ceph-admin from 192.168.122.100 port 32910 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:24 compute-0 systemd-logind[792]: New session 31 of user ceph-admin.
Dec 05 01:13:24 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 05 01:13:24 compute-0 sshd-session[194791]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:24 compute-0 sudo[194795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:24 compute-0 sudo[194795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:24 compute-0 sudo[194795]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:24 compute-0 sudo[194820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 05 01:13:24 compute-0 sudo[194820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:24 compute-0 sudo[194820]: pam_unix(sudo:session): session closed for user root
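The repeating sudo triples in this stretch (/bin/true, which python3, then the staged cephadm binary with check-host) are how the orchestrator probes a host before managing it: verify passwordless sudo, locate an interpreter, then run the host check under a timeout. The mgr module drives this over its own SSH connections; the following is only a rough local reconstruction of the pattern, reusing the fsid and binary digest visible in the command lines:

    # Illustrative reconstruction of the probe sequence in the sudo lines above;
    # not the mgr module's actual transport, just the same command pattern.
    import subprocess

    def probe_host(host: str, fsid: str, digest: str, user: str = "ceph-admin") -> None:
        ssh_sudo = ["ssh", f"{user}@{host}", "sudo"]
        # Step 1: a no-op that only succeeds if passwordless sudo works.
        subprocess.run(ssh_sudo + ["/bin/true"], check=True)
        # Step 2: find python3 on the remote host.
        py = subprocess.run(ssh_sudo + ["/bin/which", "python3"],
                            capture_output=True, text=True, check=True).stdout.strip()
        # Step 3: run the staged cephadm binary's host check, as logged above.
        cephadm_bin = f"/var/lib/ceph/{fsid}/cephadm.{digest}"
        subprocess.run(ssh_sudo + [py, cephadm_bin, "--timeout", "895",
                                   "check-host", "--expect-hostname", host], check=True)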
Dec 05 01:13:25 compute-0 sshd-session[194845]: Accepted publickey for ceph-admin from 192.168.122.100 port 32920 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:25 compute-0 systemd-logind[792]: New session 32 of user ceph-admin.
Dec 05 01:13:25 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 05 01:13:25 compute-0 sshd-session[194845]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:25 compute-0 ceph-mon[192914]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:25 compute-0 sudo[194849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:25 compute-0 sudo[194849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:25 compute-0 sudo[194849]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:25 compute-0 sudo[194874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 05 01:13:25 compute-0 sudo[194874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:25 compute-0 sudo[194874]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:25 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 05 01:13:25 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 05 01:13:25 compute-0 sshd-session[194899]: Accepted publickey for ceph-admin from 192.168.122.100 port 32934 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:25 compute-0 systemd-logind[792]: New session 33 of user ceph-admin.
Dec 05 01:13:25 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 05 01:13:25 compute-0 sshd-session[194899]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:25 compute-0 sudo[194903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:25 compute-0 sudo[194903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:25 compute-0 sudo[194903]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:25 compute-0 sudo[194928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:25 compute-0 sudo[194928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:25 compute-0 sudo[194928]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:26 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:26 compute-0 sshd-session[194953]: Accepted publickey for ceph-admin from 192.168.122.100 port 53922 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:26 compute-0 systemd-logind[792]: New session 34 of user ceph-admin.
Dec 05 01:13:26 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec 05 01:13:26 compute-0 sshd-session[194953]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:26 compute-0 ceph-mon[192914]: Deploying cephadm binary to compute-0
Dec 05 01:13:26 compute-0 sudo[194957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:26 compute-0 sudo[194957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:26 compute-0 sudo[194957]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:26 compute-0 sudo[194986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:26 compute-0 sudo[194986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:26 compute-0 sudo[194986]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:26 compute-0 podman[194981]: 2025-12-05 01:13:26.527130028 +0000 UTC m=+0.127614040 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Dec 05 01:13:26 compute-0 sshd-session[195028]: Accepted publickey for ceph-admin from 192.168.122.100 port 53934 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:26 compute-0 systemd-logind[792]: New session 35 of user ceph-admin.
Dec 05 01:13:26 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec 05 01:13:26 compute-0 sshd-session[195028]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:26 compute-0 sudo[195032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:26 compute-0 sudo[195032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:26 compute-0 sudo[195032]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:27 compute-0 sudo[195057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 05 01:13:27 compute-0 sudo[195057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:27 compute-0 sudo[195057]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:27 compute-0 sshd-session[195082]: Accepted publickey for ceph-admin from 192.168.122.100 port 53948 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:27 compute-0 systemd-logind[792]: New session 36 of user ceph-admin.
Dec 05 01:13:27 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Dec 05 01:13:27 compute-0 sshd-session[195082]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:27 compute-0 sudo[195086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:27 compute-0 sudo[195086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:27 compute-0 sudo[195086]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:27 compute-0 sudo[195111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:27 compute-0 sudo[195111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:27 compute-0 sudo[195111]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:27 compute-0 sshd-session[195136]: Accepted publickey for ceph-admin from 192.168.122.100 port 53956 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:27 compute-0 systemd-logind[792]: New session 37 of user ceph-admin.
Dec 05 01:13:27 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec 05 01:13:27 compute-0 sshd-session[195136]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:28 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:28 compute-0 sudo[195140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:28 compute-0 sudo[195140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:28 compute-0 sudo[195140]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:28 compute-0 sudo[195165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 05 01:13:28 compute-0 sudo[195165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:28 compute-0 sudo[195165]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:28 compute-0 sshd-session[195190]: Accepted publickey for ceph-admin from 192.168.122.100 port 53968 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:28 compute-0 systemd-logind[792]: New session 38 of user ceph-admin.
Dec 05 01:13:28 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec 05 01:13:28 compute-0 sshd-session[195190]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:29 compute-0 sshd-session[195217]: Accepted publickey for ceph-admin from 192.168.122.100 port 53982 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:29 compute-0 systemd-logind[792]: New session 39 of user ceph-admin.
Dec 05 01:13:29 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec 05 01:13:29 compute-0 sshd-session[195217]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:29 compute-0 sudo[195221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:29 compute-0 sudo[195221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:29 compute-0 sudo[195221]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:29 compute-0 sudo[195246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 05 01:13:29 compute-0 sudo[195246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:29 compute-0 sudo[195246]: pam_unix(sudo:session): session closed for user root
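The mkdir/touch/chown/chmod/mv sequence ending here is a staged write of the cephadm binary: the file is built as a .new under /tmp/cephadm-<fsid>/..., given mode 644, then moved into /var/lib/ceph/<fsid>/ so readers never observe a partially written file. A minimal local equivalent of the same idea (staging inside the destination directory so the final rename stays atomic on one filesystem):

    # Sketch of a stage-then-rename install like the sudo sequence above.
    import os
    import tempfile

    def install_atomically(data: bytes, dest: str, mode: int = 0o644) -> None:
        dest_dir = os.path.dirname(dest)
        os.makedirs(dest_dir, exist_ok=True)           # /bin/mkdir -p
        fd, tmp = tempfile.mkstemp(dir=dest_dir, suffix=".new")
        with os.fdopen(fd, "wb") as f:                 # write the staged copy
            f.write(data)
        os.chmod(tmp, mode)                            # /bin/chmod 644
        os.replace(tmp, dest)                          # /bin/mv into place (atomic rename)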
Dec 05 01:13:29 compute-0 podman[158197]: time="2025-12-05T01:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23510 "" "Go-http-client/1.1"
Dec 05 01:13:29 compute-0 sshd-session[195271]: Accepted publickey for ceph-admin from 192.168.122.100 port 53986 ssh2: RSA SHA256:pCaoUHSsXPy6f749SicfXH920NTNVwogKR2+VGIbug4
Dec 05 01:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
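The two GET lines are the podman system service answering libpod REST calls (a full container listing, then one-shot stats) from a local Go client. The same endpoints can be exercised by hand over the API socket; a sketch assuming the usual rootful socket path, which may differ on other hosts:

    # Query the libpod API over its UNIX socket, as the Go-http-client above does.
    import subprocess

    def libpod_containers_json() -> str:
        out = subprocess.run(
            ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
             "http://d/v4.9.3/libpod/containers/json?all=true"],
            capture_output=True, text=True, check=True)
        return out.stdout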
Dec 05 01:13:29 compute-0 systemd-logind[792]: New session 40 of user ceph-admin.
Dec 05 01:13:29 compute-0 systemd[1]: Started Session 40 of User ceph-admin.
Dec 05 01:13:29 compute-0 sshd-session[195271]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 05 01:13:29 compute-0 sudo[195275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:29 compute-0 sudo[195275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:29 compute-0 sudo[195275]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:30 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:30 compute-0 sudo[195300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 05 01:13:30 compute-0 sudo[195300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:30 compute-0 sudo[195300]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:30 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added host compute-0
Dec 05 01:13:30 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 05 01:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:30 compute-0 nostalgic_burnell[194673]: Added host 'compute-0' with addr '192.168.122.100'
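With the host checks passed, the 'orch host add' dispatched at 01:13:23/01:13:25 completes here: the mgr persists the host into mgr/cephadm/inventory (the config-key set just above) and the cephadm shell container prints the confirmation. The CLI form behind that JSON dispatch is simply:

    # CLI equivalent of the {"prefix": "orch host add", ...} dispatch above.
    import subprocess
    subprocess.run(["ceph", "orch", "host", "add", "compute-0", "192.168.122.100"],
                   check=True)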
Dec 05 01:13:30 compute-0 systemd[1]: libpod-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope: Deactivated successfully.
Dec 05 01:13:30 compute-0 podman[195353]: 2025-12-05 01:13:30.636601347 +0000 UTC m=+0.050595579 container died e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:13:30 compute-0 sudo[195345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:30 compute-0 sudo[195345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:30 compute-0 sudo[195345]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a-merged.mount: Deactivated successfully.
Dec 05 01:13:30 compute-0 podman[195353]: 2025-12-05 01:13:30.699121985 +0000 UTC m=+0.113116147 container remove e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:30 compute-0 systemd[1]: libpod-conmon-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope: Deactivated successfully.
Dec 05 01:13:30 compute-0 sudo[195384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:30 compute-0 sudo[195384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:30 compute-0 sudo[195384]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.835718799 +0000 UTC m=+0.089506066 container create 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.801046673 +0000 UTC m=+0.054833960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:30 compute-0 podman[195416]: 2025-12-05 01:13:30.903023853 +0000 UTC m=+0.101546258 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:13:30 compute-0 systemd[1]: Started libpod-conmon-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope.
Dec 05 01:13:30 compute-0 sudo[195429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:30 compute-0 sudo[195429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:30 compute-0 sudo[195429]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.990220012 +0000 UTC m=+0.244007339 container init 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.006160676 +0000 UTC m=+0.259947953 container start 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.013296689 +0000 UTC m=+0.267083976 container attach 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:13:31 compute-0 sudo[195477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Dec 05 01:13:31 compute-0 sudo[195477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.383976729 +0000 UTC m=+0.049158919 container create 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:13:31 compute-0 systemd[1]: Started libpod-conmon-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope.
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.365864234 +0000 UTC m=+0.031046444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.480483313 +0000 UTC m=+0.145665523 container init 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.489966352 +0000 UTC m=+0.155148542 container start 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.494054709 +0000 UTC m=+0.159236899 container attach 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:31 compute-0 ceph-mon[192914]: Added host compute-0
Dec 05 01:13:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:13:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:31 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 05 01:13:31 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 05 01:13:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 05 01:13:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:31 compute-0 lucid_cannon[195473]: Scheduled mon update...
Dec 05 01:13:31 compute-0 systemd[1]: libpod-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope: Deactivated successfully.
Dec 05 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.59220847 +0000 UTC m=+0.845995797 container died 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90-merged.mount: Deactivated successfully.
Dec 05 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.669998781 +0000 UTC m=+0.923786048 container remove 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 01:13:31 compute-0 systemd[1]: libpod-conmon-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope: Deactivated successfully.
Dec 05 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.771523998 +0000 UTC m=+0.069879318 container create add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:13:31 compute-0 systemd[1]: Started libpod-conmon-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope.
Dec 05 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.737568273 +0000 UTC m=+0.035923693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:31 compute-0 festive_hermann[195562]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec 05 01:13:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:31 compute-0 systemd[1]: libpod-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope: Deactivated successfully.
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.861502437 +0000 UTC m=+0.526684627 container died 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.89785504 +0000 UTC m=+0.196210400 container init add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-db17bd72dad808e3c366a3ffaa512d83bd7ae4b7a533ffd630dd9463a49259f7-merged.mount: Deactivated successfully.
Dec 05 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.920977238 +0000 UTC m=+0.219332588 container start add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.928268575 +0000 UTC m=+0.226623935 container attach add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.954069439 +0000 UTC m=+0.619251629 container remove 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:31 compute-0 systemd[1]: libpod-conmon-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope: Deactivated successfully.
Dec 05 01:13:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:31 compute-0 sudo[195477]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec 05 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
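The throwaway ceph:v18 containers in this stretch (festive_hermann printing "ceph version 18.2.7 ... reef (stable)") belong to the inspect-image run launched at 01:13:31: cephadm starts a disposable container to read the image's version and digest, then pins the result with the 'config set container_image' mon command logged just above. Roughly the same check by hand, using the image name from the log (sketch; flags kept minimal, entrypoint behavior assumed):

    # Manual version of cephadm's image inspection: run the version command
    # in a disposable container and discard it afterwards.
    import subprocess
    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # e.g. "ceph version 18.2.7 (...) reef (stable)"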
Dec 05 01:13:32 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:32 compute-0 sudo[195616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:32 compute-0 sudo[195616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:32 compute-0 sudo[195616]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 sudo[195641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:32 compute-0 sudo[195641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:32 compute-0 sudo[195641]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 sudo[195676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:32 compute-0 sudo[195676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:32 compute-0 sudo[195676]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 sudo[195710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 01:13:32 compute-0 sudo[195710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:32 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 05 01:13:32 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 05 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:32 compute-0 charming_hugle[195598]: Scheduled mgr update...
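Both service specs are now saved: mon with placement count 5 (01:13:31) and mgr with placement count 2 (here), each persisted through a config-key set of spec.mon / spec.mgr and acknowledged by a "Scheduled ... update" line from the cephadm shell. The CLI equivalents of those two 'orch apply' dispatches:

    # CLI equivalents of the two 'orch apply' dispatches recorded above.
    import subprocess
    subprocess.run(["ceph", "orch", "apply", "mon", "--placement=5"], check=True)
    subprocess.run(["ceph", "orch", "apply", "mgr", "--placement=2"], check=True)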
Dec 05 01:13:32 compute-0 systemd[1]: libpod-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope: Deactivated successfully.
Dec 05 01:13:32 compute-0 podman[195582]: 2025-12-05 01:13:32.552243878 +0000 UTC m=+0.850599238 container died add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:32 compute-0 ceph-mon[192914]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:32 compute-0 ceph-mon[192914]: Saving service mon spec with placement count:5
Dec 05 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c-merged.mount: Deactivated successfully.
Dec 05 01:13:32 compute-0 podman[195582]: 2025-12-05 01:13:32.645594002 +0000 UTC m=+0.943949342 container remove add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:13:32 compute-0 systemd[1]: libpod-conmon-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope: Deactivated successfully.
Dec 05 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.740773888 +0000 UTC m=+0.064377341 container create aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:32 compute-0 sudo[195710]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 systemd[1]: Started libpod-conmon-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope.
Dec 05 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.71200032 +0000 UTC m=+0.035603863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.864034323 +0000 UTC m=+0.187637786 container init aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.875049726 +0000 UTC m=+0.198653169 container start aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.880259213 +0000 UTC m=+0.203862656 container attach aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:13:32 compute-0 sudo[195789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:32 compute-0 sudo[195789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:32 compute-0 sudo[195789]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:32 compute-0 sudo[195816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:32 compute-0 sudo[195816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:33 compute-0 sudo[195816]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:33 compute-0 sudo[195841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:33 compute-0 sudo[195841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:33 compute-0 sudo[195841]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:33 compute-0 sudo[195866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:13:33 compute-0 sudo[195866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:33 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service crash spec with placement *
Dec 05 01:13:33 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 05 01:13:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 05 01:13:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:33 compute-0 zealous_thompson[195786]: Scheduled crash update...
Dec 05 01:13:33 compute-0 systemd[1]: libpod-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope: Deactivated successfully.
Dec 05 01:13:33 compute-0 podman[195757]: 2025-12-05 01:13:33.500410727 +0000 UTC m=+0.824014200 container died aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a-merged.mount: Deactivated successfully.
Dec 05 01:13:33 compute-0 podman[195757]: 2025-12-05 01:13:33.587529994 +0000 UTC m=+0.911133437 container remove aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:33 compute-0 systemd[1]: libpod-conmon-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope: Deactivated successfully.
Dec 05 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.710660605 +0000 UTC m=+0.078023939 container create a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.674740444 +0000 UTC m=+0.042103818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:33 compute-0 systemd[1]: Started libpod-conmon-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope.
Dec 05 01:13:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.828125845 +0000 UTC m=+0.195489259 container init a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:13:33 compute-0 ceph-mon[192914]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:33 compute-0 ceph-mon[192914]: Saving service mgr spec with placement count:2
Dec 05 01:13:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.850817871 +0000 UTC m=+0.218181215 container start a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.856858932 +0000 UTC m=+0.224222276 container attach a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:13:34 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 05 01:13:34 compute-0 podman[196011]: 2025-12-05 01:13:34.084464964 +0000 UTC m=+0.104031989 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:34 compute-0 podman[196011]: 2025-12-05 01:13:34.414729015 +0000 UTC m=+0.434296040 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec 05 01:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1794856051' entity='client.admin' 
Dec 05 01:13:34 compute-0 podman[195948]: 2025-12-05 01:13:34.470030288 +0000 UTC m=+0.837393642 container died a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:13:34 compute-0 systemd[1]: libpod-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope: Deactivated successfully.
Dec 05 01:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde-merged.mount: Deactivated successfully.
Dec 05 01:13:34 compute-0 podman[195948]: 2025-12-05 01:13:34.535054236 +0000 UTC m=+0.902417580 container remove a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:13:34 compute-0 systemd[1]: libpod-conmon-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope: Deactivated successfully.
Dec 05 01:13:34 compute-0 sudo[195866]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.650414277 +0000 UTC m=+0.069836367 container create c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:13:34 compute-0 systemd[1]: Started libpod-conmon-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope.
Dec 05 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.625760206 +0000 UTC m=+0.045182306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:34 compute-0 sudo[196101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:34 compute-0 sudo[196101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:34 compute-0 sudo[196101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.779795976 +0000 UTC m=+0.199218076 container init c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.797863449 +0000 UTC m=+0.217285519 container start c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.803660324 +0000 UTC m=+0.223082424 container attach c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:34 compute-0 ceph-mon[192914]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:34 compute-0 ceph-mon[192914]: Saving service crash spec with placement *
Dec 05 01:13:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1794856051' entity='client.admin' 
Dec 05 01:13:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:34 compute-0 sudo[196134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:34 compute-0 sudo[196134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:34 compute-0 sudo[196134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:34 compute-0 sudo[196161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:35 compute-0 sudo[196161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:35 compute-0 sudo[196161]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:35 compute-0 sudo[196186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:13:35 compute-0 sudo[196186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:35 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196241 (sysctl)
Dec 05 01:13:35 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 05 01:13:35 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 05 01:13:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec 05 01:13:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:35 compute-0 podman[196087]: 2025-12-05 01:13:35.474808868 +0000 UTC m=+0.894230978 container died c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:35 compute-0 systemd[1]: libpod-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope: Deactivated successfully.
Dec 05 01:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051-merged.mount: Deactivated successfully.
Dec 05 01:13:35 compute-0 podman[196087]: 2025-12-05 01:13:35.554053001 +0000 UTC m=+0.973475071 container remove c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:35 compute-0 systemd[1]: libpod-conmon-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope: Deactivated successfully.
Dec 05 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.642799105 +0000 UTC m=+0.064681781 container create bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.607083769 +0000 UTC m=+0.028966465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:35 compute-0 systemd[1]: Started libpod-conmon-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope.
Dec 05 01:13:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:35 compute-0 sudo[196186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.821388883 +0000 UTC m=+0.243271569 container init bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.852276311 +0000 UTC m=+0.274159017 container start bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 05 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.863104039 +0000 UTC m=+0.284986705 container attach bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:13:35 compute-0 sudo[196295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:35 compute-0 sudo[196295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:35 compute-0 sudo[196295]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 ceph-mgr[193209]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 05 01:13:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 05 01:13:36 compute-0 sudo[196322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:36 compute-0 sudo[196322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 sudo[196322]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 sudo[196347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:36 compute-0 sudo[196347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 sudo[196347]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 sudo[196391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 05 01:13:36 compute-0 sudo[196391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:36 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added label _admin to host compute-0
Dec 05 01:13:36 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 05 01:13:36 compute-0 kind_fermat[196290]: Added label _admin to host compute-0
Dec 05 01:13:36 compute-0 systemd[1]: libpod-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope: Deactivated successfully.
Dec 05 01:13:36 compute-0 podman[196262]: 2025-12-05 01:13:36.429513764 +0000 UTC m=+0.851396470 container died bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:36 compute-0 ceph-mon[192914]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:36 compute-0 ceph-mon[192914]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:36 compute-0 ceph-mon[192914]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 05 01:13:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d-merged.mount: Deactivated successfully.
Dec 05 01:13:36 compute-0 podman[196262]: 2025-12-05 01:13:36.50045048 +0000 UTC m=+0.922333146 container remove bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:36 compute-0 systemd[1]: libpod-conmon-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope: Deactivated successfully.
Dec 05 01:13:36 compute-0 sudo[196391]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.583968505 +0000 UTC m=+0.057423684 container create 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:13:36 compute-0 systemd[1]: Started libpod-conmon-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope.
Dec 05 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.564223274 +0000 UTC m=+0.037678473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:36 compute-0 sudo[196461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:36 compute-0 sudo[196461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 sudo[196461]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.696808174 +0000 UTC m=+0.170263393 container init 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.71216458 +0000 UTC m=+0.185619759 container start 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.717307436 +0000 UTC m=+0.190762655 container attach 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:36 compute-0 sudo[196492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:36 compute-0 sudo[196492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 sudo[196492]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 sudo[196518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:36 compute-0 sudo[196518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:36 compute-0 sudo[196518]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:36 compute-0 sudo[196543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- inventory --format=json-pretty --filter-for-batch
Dec 05 01:13:36 compute-0 sudo[196543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec 05 01:13:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2875519369' entity='client.admin' 
Dec 05 01:13:37 compute-0 systemd[1]: libpod-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope: Deactivated successfully.
Dec 05 01:13:37 compute-0 podman[196445]: 2025-12-05 01:13:37.29738033 +0000 UTC m=+0.770835509 container died 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e-merged.mount: Deactivated successfully.
Dec 05 01:13:37 compute-0 podman[196445]: 2025-12-05 01:13:37.346131377 +0000 UTC m=+0.819586556 container remove 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:37 compute-0 systemd[1]: libpod-conmon-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope: Deactivated successfully.
Dec 05 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.438431531 +0000 UTC m=+0.061300664 container create b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.472425918 +0000 UTC m=+0.063756534 container create a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:37 compute-0 systemd[1]: Started libpod-conmon-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope.
Dec 05 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.408703886 +0000 UTC m=+0.031573099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:37 compute-0 systemd[1]: Started libpod-conmon-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope.
Dec 05 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.437244117 +0000 UTC m=+0.028574753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.568029916 +0000 UTC m=+0.190899079 container init b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:13:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:37 compute-0 ceph-mon[192914]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:37 compute-0 ceph-mon[192914]: Added label _admin to host compute-0
Dec 05 01:13:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2875519369' entity='client.admin' 
Dec 05 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.584402022 +0000 UTC m=+0.207271155 container start b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.586778679 +0000 UTC m=+0.178109355 container init a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.592250115 +0000 UTC m=+0.215119278 container attach b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.596747173 +0000 UTC m=+0.188077789 container start a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.602322301 +0000 UTC m=+0.193652967 container attach a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:13:37 compute-0 optimistic_elion[196670]: 167 167
Dec 05 01:13:37 compute-0 systemd[1]: libpod-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope: Deactivated successfully.
Dec 05 01:13:37 compute-0 conmon[196670]: conmon a16c8f1e780eb05be659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope/container/memory.events
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.608510407 +0000 UTC m=+0.199841073 container died a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-10cf81b12018d5e460b4e208f2da1b4aa5d296b8f98e986330d3ba8a8caee8d7-merged.mount: Deactivated successfully.
Dec 05 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.665995812 +0000 UTC m=+0.257326418 container remove a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:13:37 compute-0 systemd[1]: libpod-conmon-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope: Deactivated successfully.
Dec 05 01:13:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec 05 01:13:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2686863758' entity='client.admin' 
Dec 05 01:13:38 compute-0 angry_pike[196665]: set mgr/dashboard/cluster/status
Dec 05 01:13:38 compute-0 systemd[1]: libpod-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope: Deactivated successfully.
Dec 05 01:13:38 compute-0 podman[196634]: 2025-12-05 01:13:38.375834156 +0000 UTC m=+0.998703319 container died b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245-merged.mount: Deactivated successfully.
Dec 05 01:13:38 compute-0 podman[196634]: 2025-12-05 01:13:38.46179089 +0000 UTC m=+1.084660023 container remove b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:13:38 compute-0 systemd[1]: libpod-conmon-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope: Deactivated successfully.
Dec 05 01:13:38 compute-0 sudo[191671]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:38 compute-0 ceph-mon[192914]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2686863758' entity='client.admin' 
Dec 05 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.741648787 +0000 UTC m=+0.095958559 container create 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.703438241 +0000 UTC m=+0.057747983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:38 compute-0 systemd[1]: Started libpod-conmon-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope.
Dec 05 01:13:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.888567485 +0000 UTC m=+0.242877227 container init 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.910535629 +0000 UTC m=+0.264845361 container start 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.914910234 +0000 UTC m=+0.269219996 container attach 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:39 compute-0 sudo[196772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxyvnadalnasximhwshdaiymqqarnhzx ; /usr/bin/python3'
Dec 05 01:13:39 compute-0 sudo[196772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:39 compute-0 python3[196774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.553695467 +0000 UTC m=+0.060913583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.706471851 +0000 UTC m=+0.213689947 container create f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:13:39 compute-0 systemd[1]: Started libpod-conmon-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope.
Dec 05 01:13:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.85203497 +0000 UTC m=+0.359253056 container init f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.868363365 +0000 UTC m=+0.375581451 container start f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.875159448 +0000 UTC m=+0.382377534 container attach f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec 05 01:13:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2342597363' entity='client.admin' 
Dec 05 01:13:40 compute-0 systemd[1]: libpod-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope: Deactivated successfully.
Dec 05 01:13:40 compute-0 podman[196775]: 2025-12-05 01:13:40.537401787 +0000 UTC m=+1.044619883 container died f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834-merged.mount: Deactivated successfully.
Dec 05 01:13:40 compute-0 podman[196775]: 2025-12-05 01:13:40.619615155 +0000 UTC m=+1.126833271 container remove f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:40 compute-0 systemd[1]: libpod-conmon-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope: Deactivated successfully.
Dec 05 01:13:40 compute-0 sudo[196772]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:41 compute-0 ceph-mon[192914]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2342597363' entity='client.admin' 
Dec 05 01:13:41 compute-0 fervent_carson[196744]: [
Dec 05 01:13:41 compute-0 fervent_carson[196744]:     {
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "available": false,
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "ceph_device": false,
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "lsm_data": {},
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "lvs": [],
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "path": "/dev/sr0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "rejected_reasons": [
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "Insufficient space (<5GB)",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "Has a FileSystem"
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         ],
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         "sys_api": {
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "actuators": null,
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "device_nodes": "sr0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "devname": "sr0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "human_readable_size": "482.00 KB",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "id_bus": "ata",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "model": "QEMU DVD-ROM",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "nr_requests": "2",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "parent": "/dev/sr0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "partitions": {},
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "path": "/dev/sr0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "removable": "1",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "rev": "2.5+",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "ro": "0",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "rotational": "1",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "sas_address": "",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "sas_device_handle": "",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "scheduler_mode": "mq-deadline",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "sectors": 0,
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "sectorsize": "2048",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "size": 493568.0,
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "support_discard": "2048",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "type": "disk",
Dec 05 01:13:41 compute-0 fervent_carson[196744]:             "vendor": "QEMU"
Dec 05 01:13:41 compute-0 fervent_carson[196744]:         }
Dec 05 01:13:41 compute-0 fervent_carson[196744]:     }
Dec 05 01:13:41 compute-0 fervent_carson[196744]: ]
Dec 05 01:13:41 compute-0 systemd[1]: libpod-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Deactivated successfully.
Dec 05 01:13:41 compute-0 podman[196728]: 2025-12-05 01:13:41.381802148 +0000 UTC m=+2.736111900 container died 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:13:41 compute-0 systemd[1]: libpod-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Consumed 2.494s CPU time.
Dec 05 01:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829-merged.mount: Deactivated successfully.
Dec 05 01:13:41 compute-0 podman[196728]: 2025-12-05 01:13:41.462297056 +0000 UTC m=+2.816606788 container remove 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:13:41 compute-0 systemd[1]: libpod-conmon-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Deactivated successfully.
Dec 05 01:13:41 compute-0 sudo[196543]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:13:41 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 05 01:13:41 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 05 01:13:41 compute-0 sudo[198766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:41 compute-0 sudo[198766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:41 compute-0 sudo[198766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:41 compute-0 sudo[198836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntsnbwyhdryviorbjhijkmycbclssxep ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897221.0511286-37008-201520603484948/async_wrapper.py j271356092052 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897221.0511286-37008-201520603484948/AnsiballZ_command.py _'
Dec 05 01:13:41 compute-0 sudo[198836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:41 compute-0 sudo[198818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 01:13:41 compute-0 sudo[198818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:41 compute-0 sudo[198818]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:41 compute-0 sudo[198856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:41 compute-0 sudo[198856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:41 compute-0 sudo[198856]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:41 compute-0 ansible-async_wrapper.py[198853]: Invoked with j271356092052 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897221.0511286-37008-201520603484948/AnsiballZ_command.py _
Dec 05 01:13:41 compute-0 ansible-async_wrapper.py[198900]: Starting module and watcher
Dec 05 01:13:41 compute-0 ansible-async_wrapper.py[198900]: Start watching 198902 (30)
Dec 05 01:13:41 compute-0 ansible-async_wrapper.py[198902]: Start module (198902)
Dec 05 01:13:41 compute-0 ansible-async_wrapper.py[198853]: Return async_wrapper task started.
Dec 05 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:41 compute-0 sudo[198836]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[198881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph
Dec 05 01:13:42 compute-0 sudo[198881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[198881]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:42 compute-0 python3[198906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:42 compute-0 sudo[198911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:42 compute-0 sudo[198911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[198911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.238005614 +0000 UTC m=+0.102665661 container create 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:42 compute-0 sudo[198942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.conf.new
Dec 05 01:13:42 compute-0 sudo[198942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[198942]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.199497049 +0000 UTC m=+0.064157196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:42 compute-0 systemd[1]: Started libpod-conmon-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope.
Dec 05 01:13:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:42 compute-0 sudo[198976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:42 compute-0 sudo[198976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[198976]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.353476168 +0000 UTC m=+0.218136245 container init 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.373159397 +0000 UTC m=+0.237819444 container start 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.382335278 +0000 UTC m=+0.246995335 container attach 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:13:42 compute-0 sudo[199005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:42 compute-0 sudo[199005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199005]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[199030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:13:42 compute-0 ceph-mon[192914]: Updating compute-0:/etc/ceph/ceph.conf
Dec 05 01:13:42 compute-0 ceph-mon[192914]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:42 compute-0 sudo[199030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199030]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[199055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.conf.new
Dec 05 01:13:42 compute-0 sudo[199055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[199113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:42 compute-0 sudo[199113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199113]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[199147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.conf.new
Dec 05 01:13:42 compute-0 sudo[199147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199147]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 sudo[199172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:42 compute-0 sudo[199172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:42 compute-0 sudo[199172]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:42 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:13:42 compute-0 distracted_rhodes[198982]: 
Dec 05 01:13:42 compute-0 distracted_rhodes[198982]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 01:13:42 compute-0 systemd[1]: libpod-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope: Deactivated successfully.
Dec 05 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.980051234 +0000 UTC m=+0.844711281 container died 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6-merged.mount: Deactivated successfully.
Dec 05 01:13:43 compute-0 podman[198934]: 2025-12-05 01:13:43.056765215 +0000 UTC m=+0.921425302 container remove 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:13:43 compute-0 sudo[199199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.conf.new
Dec 05 01:13:43 compute-0 sudo[199199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199199]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 systemd[1]: libpod-conmon-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope: Deactivated successfully.
Dec 05 01:13:43 compute-0 ansible-async_wrapper.py[198902]: Module complete (198902)
Dec 05 01:13:43 compute-0 sudo[199258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:43 compute-0 sudo[199258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199258]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 05 01:13:43 compute-0 sudo[199285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199285]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec 05 01:13:43 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec 05 01:13:43 compute-0 sudo[199310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:43 compute-0 sudo[199310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199310]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrmbirtyypymhliufzzfuqsrwfhvmaoq ; /usr/bin/python3'
Dec 05 01:13:43 compute-0 sudo[199357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:43 compute-0 sudo[199359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config
Dec 05 01:13:43 compute-0 sudo[199359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199359]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 python3[199365]: ansible-ansible.legacy.async_status Invoked with jid=j271356092052.198853 mode=status _async_dir=/root/.ansible_async
Dec 05 01:13:43 compute-0 sudo[199357]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:43 compute-0 sudo[199386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199386]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 ceph-mon[192914]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:13:43 compute-0 ceph-mon[192914]: Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec 05 01:13:43 compute-0 sudo[199411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config
Dec 05 01:13:43 compute-0 sudo[199411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199411]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:43 compute-0 sudo[199461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfgfsnczcnlsxitgomlsxyloskhwuwp ; /usr/bin/python3'
Dec 05 01:13:43 compute-0 sudo[199461]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:43 compute-0 auditd[704]: Audit daemon rotating log files
Dec 05 01:13:43 compute-0 sudo[199510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf.new
Dec 05 01:13:43 compute-0 sudo[199510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199510]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 python3[199509]: ansible-ansible.legacy.async_status Invoked with jid=j271356092052.198853 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 01:13:43 compute-0 sudo[199505]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:43 compute-0 sudo[199535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199535]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:43 compute-0 sudo[199560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:43 compute-0 sudo[199560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:43 compute-0 sudo[199560]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:44 compute-0 sudo[199585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:44 compute-0 sudo[199585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199585]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf.new
Dec 05 01:13:44 compute-0 sudo[199611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199611]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgbtzuukabwqggavaabvhkxeyjmohkls ; /usr/bin/python3'
Dec 05 01:13:44 compute-0 sudo[199656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:44 compute-0 python3[199660]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 05 01:13:44 compute-0 sudo[199684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:44 compute-0 sudo[199684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199656]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf.new
Dec 05 01:13:44 compute-0 sudo[199711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199711]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:44 compute-0 sudo[199736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199736]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 ceph-mon[192914]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:44 compute-0 sudo[199761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf.new
Dec 05 01:13:44 compute-0 sudo[199761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199761]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ublohitryizgmfewdfdbvloulpiizgme ; /usr/bin/python3'
Dec 05 01:13:44 compute-0 sudo[199817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:44 compute-0 sudo[199798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:44 compute-0 sudo[199798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199798]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 sudo[199837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf.new /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec 05 01:13:44 compute-0 sudo[199837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199837]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 01:13:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 01:13:44 compute-0 python3[199835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:44 compute-0 sudo[199862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:44 compute-0 sudo[199862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 sudo[199862]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 podman[199880]: 2025-12-05 01:13:44.911139871 +0000 UTC m=+0.055693794 container create 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:13:44 compute-0 systemd[1]: Started libpod-conmon-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope.
Dec 05 01:13:44 compute-0 sudo[199898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 05 01:13:44 compute-0 sudo[199898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:44 compute-0 podman[199880]: 2025-12-05 01:13:44.890789883 +0000 UTC m=+0.035343826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:44 compute-0 sudo[199898]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.026714428 +0000 UTC m=+0.171268381 container init 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.040203581 +0000 UTC m=+0.184757504 container start 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.044163064 +0000 UTC m=+0.188716987 container attach 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:45 compute-0 sudo[199930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:45 compute-0 sudo[199930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[199930]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[199956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph
Dec 05 01:13:45 compute-0 sudo[199956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[199956]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[199981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:45 compute-0 sudo[199981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[199981]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[200006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.client.admin.keyring.new
Dec 05 01:13:45 compute-0 sudo[200006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[200006]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[200050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:45 compute-0 sudo[200050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[200050]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 ceph-mon[192914]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 05 01:13:45 compute-0 sudo[200075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:45 compute-0 sudo[200075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:13:45 compute-0 sudo[200075]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 hungry_bell[199926]: 
Dec 05 01:13:45 compute-0 hungry_bell[199926]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 01:13:45 compute-0 systemd[1]: libpod-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope: Deactivated successfully.
Dec 05 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.691038177 +0000 UTC m=+0.835592100 container died 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80-merged.mount: Deactivated successfully.
Dec 05 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.758767893 +0000 UTC m=+0.903321826 container remove 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:45 compute-0 sudo[200102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:45 compute-0 sudo[200102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 systemd[1]: libpod-conmon-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope: Deactivated successfully.
Dec 05 01:13:45 compute-0 sudo[200102]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[199817]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[200139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.client.admin.keyring.new
Dec 05 01:13:45 compute-0 sudo[200139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[200139]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:45 compute-0 sudo[200187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:45 compute-0 sudo[200187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:45 compute-0 sudo[200187]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:46 compute-0 sudo[200256]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkllundfrthuthembamlmaxryzbsdhb ; /usr/bin/python3'
Dec 05 01:13:46 compute-0 sudo[200256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:46 compute-0 sudo[200213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.client.admin.keyring.new
Dec 05 01:13:46 compute-0 sudo[200213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200213]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:13:46 compute-0 sudo[200263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:46 compute-0 sudo[200263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200263]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:13:46 compute-0 python3[200261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:46 compute-0 sudo[200288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.client.admin.keyring.new
Dec 05 01:13:46 compute-0 sudo[200288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200288]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.314074783 +0000 UTC m=+0.054661345 container create d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:46 compute-0 systemd[1]: Started libpod-conmon-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope.
Dec 05 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.29181587 +0000 UTC m=+0.032402442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:46 compute-0 sudo[200325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:46 compute-0 sudo[200325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200325]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.424449322 +0000 UTC m=+0.165035954 container init d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.434975581 +0000 UTC m=+0.175562173 container start d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.440289712 +0000 UTC m=+0.180876264 container attach d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:13:46 compute-0 sudo[200357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 05 01:13:46 compute-0 sudo[200357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200357]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec 05 01:13:46 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec 05 01:13:46 compute-0 ceph-mon[192914]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:13:46 compute-0 ceph-mon[192914]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:46 compute-0 sudo[200382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:46 compute-0 sudo[200382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 sudo[200407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config
Dec 05 01:13:46 compute-0 sudo[200407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200407]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 sudo[200442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:46 compute-0 sudo[200442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200442]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 sudo[200476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config
Dec 05 01:13:46 compute-0 sudo[200476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:46 compute-0 sudo[200476]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec 05 01:13:46 compute-0 ansible-async_wrapper.py[198900]: Done in kid B.
Dec 05 01:13:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903592761' entity='client.admin' 
Dec 05 01:13:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:47 compute-0 sudo[200501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:47 compute-0 systemd[1]: libpod-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope: Deactivated successfully.
Dec 05 01:13:47 compute-0 podman[200306]: 2025-12-05 01:13:47.003448105 +0000 UTC m=+0.744034687 container died d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:13:47 compute-0 sudo[200501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200501]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2-merged.mount: Deactivated successfully.
Dec 05 01:13:47 compute-0 podman[200306]: 2025-12-05 01:13:47.071548341 +0000 UTC m=+0.812134903 container remove d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:13:47 compute-0 systemd[1]: libpod-conmon-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope: Deactivated successfully.
Dec 05 01:13:47 compute-0 sudo[200256]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring.new
Dec 05 01:13:47 compute-0 sudo[200535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200535]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:47 compute-0 sudo[200565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200565]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpxkyhyfzkuivcedqbdomrqixbumefgu ; /usr/bin/python3'
Dec 05 01:13:47 compute-0 sudo[200617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:47 compute-0 sudo[200611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:47 compute-0 sudo[200611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200611]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:47 compute-0 sudo[200641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200641]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 python3[200633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:47 compute-0 sudo[200666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring.new
Dec 05 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.53471219 +0000 UTC m=+0.057524757 container create fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:47 compute-0 sudo[200666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200666]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 systemd[1]: Started libpod-conmon-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope.
Dec 05 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.516215064 +0000 UTC m=+0.039027661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.635681651 +0000 UTC m=+0.158494278 container init fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.648682391 +0000 UTC m=+0.171494968 container start fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.654781334 +0000 UTC m=+0.177594001 container attach fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:47 compute-0 sudo[200733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:47 compute-0 sudo[200733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200733]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring.new
Dec 05 01:13:47 compute-0 sudo[200758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200758]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 sudo[200783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:47 compute-0 sudo[200783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:47 compute-0 sudo[200783]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:47 compute-0 ceph-mon[192914]: Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec 05 01:13:47 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1903592761' entity='client.admin' 
Dec 05 01:13:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:48 compute-0 sudo[200810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring.new
Dec 05 01:13:48 compute-0 sudo[200810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 sudo[200810]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 sudo[200852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:48 compute-0 sudo[200852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 sudo[200852]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4153863721' entity='client.admin' 
Dec 05 01:13:48 compute-0 systemd[1]: libpod-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope: Deactivated successfully.
Dec 05 01:13:48 compute-0 conmon[200719]: conmon fefff7e29b55c97c894a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope/container/memory.events
Dec 05 01:13:48 compute-0 podman[200667]: 2025-12-05 01:13:48.29999045 +0000 UTC m=+0.822803027 container died fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:13:48 compute-0 sudo[200877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-cbd280d3-cbd8-528b-ace6-2b3a887cdcee/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring.new /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec 05 01:13:48 compute-0 sudo[200877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891-merged.mount: Deactivated successfully.
Dec 05 01:13:48 compute-0 sudo[200877]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:48 compute-0 podman[200667]: 2025-12-05 01:13:48.393366166 +0000 UTC m=+0.916178743 container remove fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:48 compute-0 systemd[1]: libpod-conmon-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope: Deactivated successfully.
Dec 05 01:13:48 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1))
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:48 compute-0 sudo[200617]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 05 01:13:48 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 05 01:13:48 compute-0 podman[200903]: 2025-12-05 01:13:48.465921119 +0000 UTC m=+0.122392782 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 05 01:13:48 compute-0 sudo[200931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:48 compute-0 sudo[200931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 sudo[200931]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 sudo[200959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:48 compute-0 sudo[200959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 sudo[200959]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 sudo[201007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plepatrgzpqemuatcbpgjtencctzqgsl ; /usr/bin/python3'
Dec 05 01:13:48 compute-0 sudo[201007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:48 compute-0 sudo[201009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:48 compute-0 sudo[201009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 sudo[201009]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:48 compute-0 python3[201011]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:13:48 compute-0 sudo[201035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:48 compute-0 sudo[201035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.877677617 +0000 UTC m=+0.066279756 container create 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:48 compute-0 systemd[1]: Started libpod-conmon-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope.
Dec 05 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.851753739 +0000 UTC m=+0.040355918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.973530542 +0000 UTC m=+0.162132731 container init 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.995445765 +0000 UTC m=+0.184047904 container start 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:49 compute-0 podman[201053]: 2025-12-05 01:13:49.001197109 +0000 UTC m=+0.189799268 container attach 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:13:49 compute-0 ceph-mon[192914]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4153863721' entity='client.admin' 
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 05 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.290037422 +0000 UTC m=+0.064095604 container create 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:13:49 compute-0 systemd[1]: Started libpod-conmon-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope.
Dec 05 01:13:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.262436727 +0000 UTC m=+0.036494969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.387534434 +0000 UTC m=+0.161592616 container init 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.397212889 +0000 UTC m=+0.171271071 container start 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.40180229 +0000 UTC m=+0.175860492 container attach 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 05 01:13:49 compute-0 wonderful_leavitt[201146]: 167 167
Dec 05 01:13:49 compute-0 systemd[1]: libpod-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope: Deactivated successfully.
Dec 05 01:13:49 compute-0 conmon[201146]: conmon 5b445a9ede00f0969f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope/container/memory.events
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.404660161 +0000 UTC m=+0.178718353 container died 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:13:49 compute-0 podman[201133]: 2025-12-05 01:13:49.427603543 +0000 UTC m=+0.099795938 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ebd387b60bc9c1456de72fe0d3580f10fadc4c3316514ec4fa8cb4251f51ee-merged.mount: Deactivated successfully.
Dec 05 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.461003333 +0000 UTC m=+0.235061525 container remove 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:49 compute-0 systemd[1]: libpod-conmon-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope: Deactivated successfully.
Dec 05 01:13:49 compute-0 podman[201137]: 2025-12-05 01:13:49.479103738 +0000 UTC m=+0.150243483 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
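The health_status entries above are written by podman's per-container healthcheck timers. A minimal sketch of running the same check by hand, using a container name taken from the log:

    # Exit status 0 means the check passed.
    podman healthcheck run ovn_controller && echo healthy
    # The Go template field is .State.Healthcheck on older podman and
    # .State.Health on newer releases; adjust to the installed version.
    podman inspect --format '{{.State.Healthcheck.Status}}' ovn_controller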
Dec 05 01:13:49 compute-0 systemd[1]: Reloading.
Dec 05 01:13:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec 05 01:13:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 05 01:13:49 compute-0 systemd-sysv-generator[201247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:13:49 compute-0 systemd-rc-local-generator[201244]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:13:49 compute-0 systemd[1]: Reloading.
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:50 compute-0 systemd-rc-local-generator[201284]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:13:50 compute-0 systemd-sysv-generator[201287]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
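The generator warning above recurs on every daemon-reload because the legacy network initscript ships no native unit. A hedged sketch of a wrapper unit that would silence it; the proper fix belongs in the packaged initscripts, and every value here is an assumption:

    cat > /etc/systemd/system/network.service <<'EOF'
    [Unit]
    Description=Legacy network initscript (compat wrapper)
    After=network-pre.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload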
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 05 01:13:50 compute-0 ceph-mon[192914]: Deploying daemon crash.compute-0 on compute-0
Dec 05 01:13:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 05 01:13:50 compute-0 busy_chebyshev[201075]: set require_min_compat_client to mimic
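The busy_chebyshev container has just applied the client compatibility floor audited above. A minimal reproduction from any host holding an admin keyring (for example inside `cephadm shell`):

    ceph osd set-require-min-compat-client mimic
    ceph osd get-require-min-compat-client   # should now print "mimic"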
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 05 01:13:50 compute-0 podman[201053]: 2025-12-05 01:13:50.312226107 +0000 UTC m=+1.500828276 container died 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:50 compute-0 systemd[1]: libpod-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope: Deactivated successfully.
Dec 05 01:13:50 compute-0 systemd[1]: Starting Ceph crash.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b-merged.mount: Deactivated successfully.
Dec 05 01:13:50 compute-0 podman[201053]: 2025-12-05 01:13:50.418159959 +0000 UTC m=+1.606762118 container remove 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:13:50 compute-0 systemd[1]: libpod-conmon-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope: Deactivated successfully.
Dec 05 01:13:50 compute-0 sudo[201007]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.666504951 +0000 UTC m=+0.044695552 container create f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
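The kernel notices above mean the XFS filesystem backing these overlay mounts was formatted without the bigtime feature, so inode timestamps cap out in January 2038. A quick check, assuming xfsprogs is installed:

    # bigtime=0 means the 2038 limit applies; bigtime=1 means extended
    # timestamps are enabled.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'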
Dec 05 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.730282714 +0000 UTC m=+0.108473365 container init f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.647093159 +0000 UTC m=+0.025283790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.742293346 +0000 UTC m=+0.120483967 container start f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:50 compute-0 bash[201349]: f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8
Dec 05 01:13:50 compute-0 systemd[1]: Started Ceph crash.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
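cephadm wraps every daemon it deploys in a templated systemd unit named ceph-<fsid>@<daemon-type>.<id>, so the crash daemon started above can be inspected like any other unit:

    systemctl status 'ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@crash.compute-0.service'
    journalctl -u 'ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@crash.compute-0.service' -n 20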
Dec 05 01:13:50 compute-0 sudo[201035]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1))
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4a560713-e49c-47b7-a4d8-19e9b8203506 does not exist
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2))
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
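The audit pair above (dispatch, then finished) corresponds to a single auth request. Its CLI equivalent, with the same capability set as logged:

    ceph auth get-or-create mgr.compute-0.rknuqb \
        mon 'profile mgr' osd 'allow *' mds 'allow *'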
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
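The same request issued from the CLI prints a minimal ceph.conf (fsid plus mon_host) suitable for distribution to client hosts:

    ceph config generate-minimal-conf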
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec 05 01:13:50 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec 05 01:13:50 compute-0 sudo[201391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxefygluncskrjfvtxpoydyqrdnlpfus ; /usr/bin/python3'
Dec 05 01:13:50 compute-0 sudo[201391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:50 compute-0 sudo[201393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:50 compute-0 sudo[201393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 05 01:13:50 compute-0 sudo[201393]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:51 compute-0 python3[201395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
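The spec file passed to `ceph orch apply --in-file` above is not shown in this log. A hypothetical sketch of its shape, inferred from the mon, mgr, and osd.default_drive_group specs the log records being saved shortly afterwards; every value here is an assumption:

    cat > ceph_spec.yaml <<'EOF'
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    data_devices:
      all: true
    EOF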
Dec 05 01:13:51 compute-0 sudo[201422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:51 compute-0 sudo[201422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:51 compute-0 podman[201418]: 2025-12-05 01:13:51.085958757 +0000 UTC m=+0.089484786 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 01:13:51 compute-0 sudo[201422]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.13180133 +0000 UTC m=+0.088790875 container create 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 05 01:13:51 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 1 completed events
Dec 05 01:13:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:13:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.094351875 +0000 UTC m=+0.051341460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.188+0000 7f4787f56640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.188+0000 7f4787f56640 -1 AuthRegistry(0x7f4780066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 01:13:51 compute-0 systemd[1]: Started libpod-conmon-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope.
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.189+0000 7f4787f56640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.189+0000 7f4787f56640 -1 AuthRegistry(0x7f4787f55000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.192+0000 7f4785ccb640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.192+0000 7f4787f56640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 05 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
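The ping above failed because the crash daemon hunted the default admin keyring paths and found nothing; crash daemons authenticate with a per-host client.crash key (a keyring for it was bind-mounted into the container earlier in the log). A hedged sketch of provisioning that key by hand, following the crash module documentation; cephadm normally does this itself on a healthy deploy:

    ceph auth get-or-create client.crash.compute-0 \
        mon 'profile crash' mgr 'profile crash' \
        -o /etc/ceph/ceph.client.crash.compute-0.keyring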
Dec 05 01:13:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:51 compute-0 sudo[201475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:51 compute-0 sudo[201475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:51 compute-0 sudo[201475]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.265120111 +0000 UTC m=+0.222109666 container init 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.278116701 +0000 UTC m=+0.235106256 container start 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.285352536 +0000 UTC m=+0.242342121 container attach 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:51 compute-0 ceph-mon[192914]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 05 01:13:51 compute-0 ceph-mon[192914]: osdmap e3: 0 total, 0 up, 0 in
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:51 compute-0 sudo[201521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:13:51 compute-0 sudo[201521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.81897744 +0000 UTC m=+0.083825765 container create ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:51 compute-0 systemd[1]: Started libpod-conmon-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope.
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.785094836 +0000 UTC m=+0.049943251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:51 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.935173684 +0000 UTC m=+0.200022029 container init ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.944623172 +0000 UTC m=+0.209471517 container start ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:13:51 compute-0 interesting_lewin[201620]: 167 167
Dec 05 01:13:51 compute-0 systemd[1]: libpod-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope: Deactivated successfully.
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.951274961 +0000 UTC m=+0.216123307 container attach ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:51 compute-0 conmon[201620]: conmon ebdf5729f9eedc37763f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope/container/memory.events
Dec 05 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.952416264 +0000 UTC m=+0.217264589 container died ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d705ad42b8151979ba719e549c3a7b470225b291c81967d2b0852c8ad8efe50a-merged.mount: Deactivated successfully.
Dec 05 01:13:52 compute-0 sudo[201624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:52 compute-0 sudo[201624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:52 compute-0 podman[201604]: 2025-12-05 01:13:52.0169901 +0000 UTC m=+0.281838425 container remove ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:52 compute-0 sudo[201624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:52 compute-0 systemd[1]: libpod-conmon-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope: Deactivated successfully.
Dec 05 01:13:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:52 compute-0 systemd[1]: Reloading.
Dec 05 01:13:52 compute-0 systemd-sysv-generator[201716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:13:52 compute-0 systemd-rc-local-generator[201713]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:13:52 compute-0 ceph-mon[192914]: Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec 05 01:13:52 compute-0 sudo[201662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:52 compute-0 sudo[201662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:52 compute-0 sudo[201662]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:52 compute-0 systemd[1]: Reloading.
Dec 05 01:13:52 compute-0 systemd-sysv-generator[201777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:13:52 compute-0 systemd-rc-local-generator[201774]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:13:52 compute-0 sudo[201730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:13:52 compute-0 sudo[201730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:52 compute-0 sudo[201730]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 sudo[201794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 05 01:13:53 compute-0 sudo[201794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.234232341 +0000 UTC m=+0.060564707 container create a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:13:53 compute-0 sudo[201794]: pam_unix(sudo:session): session closed for user root
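The copied cephadm script invoked above validates the host before deploying to it. The same check is runnable directly wherever a cephadm binary is on PATH:

    cephadm check-host --expect-hostname compute-0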
Dec 05 01:13:53 compute-0 ceph-mon[192914]: from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:13:53 compute-0 ceph-mon[192914]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/lib/ceph/mgr/ceph-compute-0.rknuqb supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.215097998 +0000 UTC m=+0.041430384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.343539234 +0000 UTC m=+0.169871650 container init a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added host compute-0
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 05 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.367993765 +0000 UTC m=+0.194326141 container start a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:53 compute-0 bash[201861]: a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 systemd[1]: Started Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:13:53 compute-0 busy_mcclintock[201516]: Added host 'compute-0' with addr '192.168.122.100'
Dec 05 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled mon update...
Dec 05 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled mgr update...
Dec 05 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled osd.default_drive_group update...
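Once the spec is applied, the updates the orchestrator just scheduled can be confirmed from any admin host:

    ceph orch ls                       # service specs and placement counts
    ceph orch ps --daemon-type mgr     # per-daemon deployment status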
Dec 05 01:13:53 compute-0 systemd[1]: libpod-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope: Deactivated successfully.
Dec 05 01:13:53 compute-0 sudo[201521]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 podman[201436]: 2025-12-05 01:13:53.435390841 +0000 UTC m=+2.392380386 container died 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: pidfile_write: ignore empty --pid-file
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542-merged.mount: Deactivated successfully.
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 podman[201436]: 2025-12-05 01:13:53.498310393 +0000 UTC m=+2.455299938 container remove 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2))
Dec 05 01:13:53 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2)) in 3 seconds
Dec 05 01:13:53 compute-0 systemd[1]: libpod-conmon-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope: Deactivated successfully.
Dec 05 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:53 compute-0 sudo[201391]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'alerts'
Dec 05 01:13:53 compute-0 sudo[201931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:53 compute-0 sudo[201931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 sudo[201931]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 sudo[201956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:13:53 compute-0 sudo[201956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 sudo[201956]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 sudo[202028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcbkkjokibatjjttkzfuvtakklvbvts ; /usr/bin/python3'
Dec 05 01:13:53 compute-0 sudo[201981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:53 compute-0 sudo[201981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 sudo[202028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:13:53 compute-0 sudo[201981]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 sudo[202038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:53 compute-0 sudo[202038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 sudo[202038]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:53 compute-0 podman[202029]: 2025-12-05 01:13:53.912183765 +0000 UTC m=+0.104973983 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, container_name=kepler)
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'balancer'
Dec 05 01:13:53 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb[201891]: 2025-12-05T01:13:53.924+0000 7ff61db06140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 05 01:13:53 compute-0 python3[202032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
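That Ansible task shells out to a one-off ceph container just to count up OSDs. Trimmed to the parts that matter for the check, the same command can be run by hand (arguments copied from the log line above):

    sudo podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds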
Dec 05 01:13:53 compute-0 sudo[202075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:53 compute-0 sudo[202075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:53 compute-0 sudo[202075]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.024139262 +0000 UTC m=+0.058450658 container create 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:13:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:54 compute-0 systemd[1]: Started libpod-conmon-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope.
Dec 05 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:53.999523457 +0000 UTC m=+0.033834873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:13:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:54 compute-0 sudo[202110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
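cephadm here is the bundled copy under /var/lib/ceph/<fsid>/, pinned to the image digest; its `ls` subcommand inventories the daemons deployed on this host as JSON. Assuming a cephadm binary on PATH, the short form of the same query is:

    # List the daemon names cephadm knows about on this host.
    sudo cephadm ls | jq -r '.[].name'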
Dec 05 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:54 compute-0 sudo[202110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.141775517 +0000 UTC m=+0.176086913 container init 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.151078406 +0000 UTC m=+0.185389792 container start 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.155036027 +0000 UTC m=+0.189347423 container attach 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:13:54 compute-0 ceph-mgr[201895]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 01:13:54 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'cephadm'
Dec 05 01:13:54 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb[201891]: 2025-12-05T01:13:54.205+0000 7ff61db06140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: Added host compute-0
Dec 05 01:13:54 compute-0 ceph-mon[192914]: Saving service mon spec with placement compute-0
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: Saving service mgr spec with placement compute-0
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 05 01:13:54 compute-0 ceph-mon[192914]: Saving service osd.default_drive_group spec with placement compute-0
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:54 compute-0 ceph-mon[192914]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 05 01:13:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673528238' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:13:54 compute-0 infallible_cartwright[202140]: 
Dec 05 01:13:54 compute-0 infallible_cartwright[202140]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-05T01:12:22.836369+0000","services":{}},"progress_events":{}}
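The status JSON explains the HEALTH_WARN: num_osds is 0 while osd_pool_default_size is 1, hence TOO_FEW_OSDS. The fields the deployment loop keeps polling can be pulled with a jq filter such as:

    ceph status --format json | \
      jq '{health: .health.status, osds_up: .osdmap.num_up_osds, osds_in: .osdmap.num_in_osds}'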
Dec 05 01:13:54 compute-0 systemd[1]: libpod-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope: Deactivated successfully.
Dec 05 01:13:54 compute-0 conmon[202140]: conmon 29c3eecfb2c5bd098d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope/container/memory.events
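That conmon warning is benign here: the infallible_cartwright container exited almost immediately, so systemd tore down the scope cgroup before conmon could read its OOM counters. The file it wanted only exists while the scope is alive:

    # Only readable while the container's scope cgroup still exists.
    cat /sys/fs/cgroup/machine.slice/libpod-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope/container/memory.events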
Dec 05 01:13:54 compute-0 podman[202231]: 2025-12-05 01:13:54.876953424 +0000 UTC m=+0.186819782 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:54 compute-0 podman[202253]: 2025-12-05 01:13:54.960231432 +0000 UTC m=+0.075483272 container died 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:13:54 compute-0 podman[202231]: 2025-12-05 01:13:54.982733449 +0000 UTC m=+0.292599827 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc-merged.mount: Deactivated successfully.
Dec 05 01:13:55 compute-0 podman[202253]: 2025-12-05 01:13:55.05244608 +0000 UTC m=+0.167697910 container remove 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:13:55 compute-0 systemd[1]: libpod-conmon-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope: Deactivated successfully.
Dec 05 01:13:55 compute-0 sudo[202028]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:55 compute-0 sudo[202110]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7fd103b1-9299-47e5-aec5-12f7663d9561 does not exist
Dec 05 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 05 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1))
Dec 05 01:13:55 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec 05 01:13:55 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec 05 01:13:55 compute-0 sudo[202327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:55 compute-0 sudo[202327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:55 compute-0 sudo[202327]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2673528238' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:55 compute-0 sudo[202352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:55 compute-0 sudo[202352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:55 compute-0 sudo[202352]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:55 compute-0 sudo[202382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:55 compute-0 sudo[202382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:55 compute-0 sudo[202382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:55 compute-0 sudo[202413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --name mgr.compute-0.rknuqb --force --tcp-ports 8765
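cephadm is reconciling the mgr count to the spec saved above ("Updating mgr deployment (-1 -> 1)") and removes the extra instance mgr.compute-0.rknuqb. Assuming cephadm on PATH, the equivalent direct invocation, with values taken from this log line:

    sudo cephadm rm-daemon --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee \
      --name mgr.compute-0.rknuqb --force --tcp-ports 8765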
Dec 05 01:13:55 compute-0 sudo[202413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:56 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:13:56 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 2 completed events
Dec 05 01:13:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:13:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:56 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'crash'
Dec 05 01:13:56 compute-0 podman[202504]: 2025-12-05 01:13:56.488579082 +0000 UTC m=+0.113221873 container died a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285-merged.mount: Deactivated successfully.
Dec 05 01:13:56 compute-0 podman[202504]: 2025-12-05 01:13:56.5711117 +0000 UTC m=+0.195754501 container remove a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:13:56 compute-0 bash[202504]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb
Dec 05 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Main process exited, code=exited, status=143/n/a
Dec 05 01:13:56 compute-0 podman[202531]: 2025-12-05 01:13:56.704617996 +0000 UTC m=+0.122361007 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, vcs-type=git)
Dec 05 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Failed with result 'exit-code'.
Dec 05 01:13:56 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Consumed 4.658s CPU time.
Dec 05 01:13:56 compute-0 systemd[1]: Reloading.
Dec 05 01:13:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:13:57 compute-0 systemd-rc-local-generator[202609]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:13:57 compute-0 systemd-sysv-generator[202614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:13:57 compute-0 ceph-mon[192914]: Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec 05 01:13:57 compute-0 ceph-mon[192914]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:57 compute-0 sudo[202413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:57 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.rknuqb
Dec 05 01:13:57 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.rknuqb
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"} v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]: dispatch
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]': finished
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:57 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1))
Dec 05 01:13:57 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b35376fe-f03e-4798-9012-965b03cf11ce does not exist
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:57 compute-0 sudo[202619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:57 compute-0 sudo[202619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:57 compute-0 sudo[202619]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:57 compute-0 sudo[202644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:13:57 compute-0 sudo[202644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:57 compute-0 sudo[202644]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:57 compute-0 sudo[202669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:13:57 compute-0 sudo[202669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:57 compute-0 sudo[202669]: pam_unix(sudo:session): session closed for user root
Dec 05 01:13:57 compute-0 sudo[202694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
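This is the OSD creation step: cephadm feeds credentials and config over stdin (--config-json -) and runs ceph-volume inside the ceph container, with CEPH_VOLUME_OSDSPEC_AFFINITY tying the resulting OSDs to the osd.default_drive_group spec saved earlier. Reduced to the ceph-volume call itself:

    # One BlueStore OSD per pre-created LV; --no-systemd because cephadm
    # creates and manages the systemd units itself.
    CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group \
    ceph-volume lvm batch --no-auto \
      /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
      --yes --no-systemd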
Dec 05 01:13:57 compute-0 sudo[202694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:13:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]: dispatch
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]': finished
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.227799561 +0000 UTC m=+0.054420526 container create 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:13:58 compute-0 systemd[1]: Started libpod-conmon-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope.
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.212135975 +0000 UTC m=+0.038756960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.337451284 +0000 UTC m=+0.164072259 container init 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.357200684 +0000 UTC m=+0.183821679 container start 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.364294581 +0000 UTC m=+0.190915666 container attach 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:58 compute-0 great_hypatia[202770]: 167 167
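The two numbers are the uid and gid the ceph daemons will run as inside this image; cephadm probes them with a throwaway container before setting ownership under /var/lib/ceph on the host. A sketch that should print the same pair, assuming the ceph user exists in the image (it does in these builds):

    podman run --rm --entrypoint bash quay.io/ceph/ceph:v18 \
      -c 'echo "$(id -u ceph) $(id -g ceph)"'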
Dec 05 01:13:58 compute-0 systemd[1]: libpod-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope: Deactivated successfully.
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.36999053 +0000 UTC m=+0.196611555 container died 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-86698401ed7035b99d4fa762f739eee4ef6d662191873857b89b09ffce57097a-merged.mount: Deactivated successfully.
Dec 05 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.443511097 +0000 UTC m=+0.270132072 container remove 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:58 compute-0 systemd[1]: libpod-conmon-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope: Deactivated successfully.
Dec 05 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.664263142 +0000 UTC m=+0.057941204 container create ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:13:58 compute-0 systemd[1]: Started libpod-conmon-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope.
Dec 05 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.642359833 +0000 UTC m=+0.036037905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:13:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.781960659 +0000 UTC m=+0.175638751 container init ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.799471157 +0000 UTC m=+0.193149219 container start ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.805154995 +0000 UTC m=+0.198833077 container attach ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:13:59 compute-0 ceph-mon[192914]: Removing key for mgr.compute-0.rknuqb
Dec 05 01:13:59 compute-0 ceph-mon[192914]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:13:59 compute-0 podman[158197]: time="2025-12-05T01:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25441 "" "Go-http-client/1.1"
Dec 05 01:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4827 "" "Go-http-client/1.1"
Dec 05 01:13:59 compute-0 gracious_swirles[202809]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:13:59 compute-0 gracious_swirles[202809]: --> relative data size: 1.0
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8c4de221-4fda-4bb1-b794-fc4329742186
Dec 05 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"} v 0) v1
Dec 05 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]: dispatch
Dec 05 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 05 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]': finished
Dec 05 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 05 01:14:00 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 05 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:00 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
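The "failed to return metadata for osd.0" is expected at this point: `osd new` has only reserved id 0 in the osdmap, and the daemon has never booted, so it has not reported any metadata yet. Once osd.0 starts, the same query answers normally:

    ceph osd metadata 0 --format json | jq -r .hostname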
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 05 01:14:00 compute-0 lvm[202873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 01:14:00 compute-0 lvm[202873]: VG ceph_vg0 finished
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 05 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 05 01:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129212990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:01 compute-0 gracious_swirles[202809]:  stderr: got monmap epoch 1
Dec 05 01:14:01 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.0
Dec 05 01:14:01 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 3 completed events
Dec 05 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 05 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 05 01:14:01 compute-0 ceph-mon[192914]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]: dispatch
Dec 05 01:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]': finished
Dec 05 01:14:01 compute-0 ceph-mon[192914]: osdmap e4: 1 total, 0 up, 1 in
Dec 05 01:14:01 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/129212990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:01 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 8c4de221-4fda-4bb1-b794-fc4329742186 --setuser ceph --setgroup ceph
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:14:01 compute-0 openstack_network_exporter[160350]: 
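These exporter errors are a role mismatch, not a fault: ovn-northd runs on controller nodes, and this host has no userspace (netdev) datapath, so the pmd-* probes have nothing to query. The exporter drives the same control sockets ovs-appctl and ovn-appctl use, so the failures reproduce by hand:

    ovn-appctl -t ovn-northd status        # fails on a compute node
    ovs-appctl dpif-netdev/pmd-perf-show   # fails without a netdev (DPDK) datapath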
Dec 05 01:14:01 compute-0 podman[202926]: 2025-12-05 01:14:01.436377709 +0000 UTC m=+0.094944284 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 05 01:14:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 05 01:14:02 compute-0 ceph-mon[192914]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 05 01:14:02 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec 05 01:14:03 compute-0 ceph-mon[192914]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:03 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:03 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:03 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:03 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:01.287+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 05 01:14:03 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 05 01:14:03 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 05 01:14:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
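[Annotation] The block above is the tail end of ceph-volume lvm create for osd.0: prepare registers the OSD with the monitor and runs bluestore mkfs on the LV, then activate primes the OSD directory with ceph-bluestore-tool prime-osd-dir, re-links /var/lib/ceph/osd/ceph-0/block to the LV, and fixes ownership. The repeated _read_bdev_label "Malformed input" stderr lines are expected on a freshly created LV: the tool probes for an existing bluestore label before writing one and finds none. The whole sequence collapses into one command; a sketch assuming the VG/LV names from this log:

    import subprocess

    # Equivalent one-shot create (prepare + activate) for a bluestore OSD on
    # an existing logical volume; cephadm runs this inside the ceph container.
    subprocess.run(
        ["ceph-volume", "lvm", "create", "--bluestore",
         "--data", "ceph_vg0/ceph_lv0"],
        check=True,
    )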
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 944e6457-e96a-45b2-ba7f-23ecd70be9f8
Dec 05 01:14:04 compute-0 ceph-mon[192914]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"} v 0) v1
Dec 05 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]: dispatch
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]': finished
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 05 01:14:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:04 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:04 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
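[Annotation] The two mgr errors are transient bootstrap noise: "osd new" has reserved the IDs in the osdmap, but no ceph-osd daemon has started yet, so the monitor has no metadata to hand back and returns ENOENT. Once the daemons boot, the same query succeeds; a quick check, assuming an admin keyring on the host:

    import json
    import subprocess

    # Fetch osd.0's metadata; this fails (like the mgr errors above) until
    # the ceph-osd daemon has registered itself with the monitors.
    result = subprocess.run(
        ["ceph", "osd", "metadata", "0", "--format", "json"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        meta = json.loads(result.stdout)
        print(meta.get("hostname"), meta.get("osd_objectstore"))
    else:
        print("no metadata yet:", result.stderr.strip())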
Dec 05 01:14:04 compute-0 lvm[203853]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 01:14:04 compute-0 lvm[203853]: VG ceph_vg1 finished
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 05 01:14:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 05 01:14:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148035459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:05 compute-0 gracious_swirles[202809]:  stderr: got monmap epoch 1
Dec 05 01:14:05 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.1
Dec 05 01:14:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]: dispatch
Dec 05 01:14:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]': finished
Dec 05 01:14:05 compute-0 ceph-mon[192914]: osdmap e5: 2 total, 0 up, 2 in
Dec 05 01:14:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3148035459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 05 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 05 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 944e6457-e96a-45b2-ba7f-23ecd70be9f8 --setuser ceph --setgroup ceph
Dec 05 01:14:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:06 compute-0 ceph-mon[192914]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:08 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:05.524+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:08 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:05.524+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:08 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:05.525+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:08 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:05.525+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new adfceb0a-e5d7-48a8-b6ba-0c42f745777c
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"} v 0) v1
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]: dispatch
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]': finished
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
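[Annotation] In the osdmap line, total/up/in have distinct meanings: all three OSDs now exist in the map and are "in" (eligible to hold data), but none is "up" because the daemons have not booted yet. The same counters are queryable directly; a sketch assuming host access to the cluster (field names as returned by recent Ceph releases):

    import json
    import subprocess

    # "ceph osd stat" reports the same total/up/in counters as the osdmap line.
    out = subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    stat = json.loads(out.stdout)
    print(stat["num_osds"], "total,", stat["num_up_osds"], "up,",
          stat["num_in_osds"], "in")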
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:09 compute-0 ceph-mon[192914]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]: dispatch
Dec 05 01:14:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]': finished
Dec 05 01:14:09 compute-0 ceph-mon[192914]: osdmap e6: 3 total, 0 up, 3 in
Dec 05 01:14:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:09 compute-0 lvm[204811]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 01:14:09 compute-0 lvm[204811]: VG ceph_vg2 finished
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 05 01:14:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 05 01:14:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591695986' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:09 compute-0 gracious_swirles[202809]:  stderr: got monmap epoch 1
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.2
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 05 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid adfceb0a-e5d7-48a8-b6ba-0c42f745777c --setuser ceph --setgroup ceph
Dec 05 01:14:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2591695986' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 05 01:14:11 compute-0 ceph-mon[192914]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:12 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:09.890+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:12 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:09.890+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:12 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:09.891+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 05 01:14:12 compute-0 gracious_swirles[202809]:  stderr: 2025-12-05T01:14:09.891+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 05 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 05 01:14:12 compute-0 systemd[1]: libpod-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Deactivated successfully.
Dec 05 01:14:12 compute-0 systemd[1]: libpod-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Consumed 8.345s CPU time.
Dec 05 01:14:12 compute-0 podman[202793]: 2025-12-05 01:14:12.613722734 +0000 UTC m=+14.007400826 container died ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666-merged.mount: Deactivated successfully.
Dec 05 01:14:12 compute-0 podman[202793]: 2025-12-05 01:14:12.718668356 +0000 UTC m=+14.112346428 container remove ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:14:12 compute-0 systemd[1]: libpod-conmon-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Deactivated successfully.
Dec 05 01:14:12 compute-0 sudo[202694]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:12 compute-0 sudo[205756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:12 compute-0 sudo[205756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:12 compute-0 sudo[205756]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:12 compute-0 sudo[205781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:12 compute-0 sudo[205781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:12 compute-0 sudo[205781]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:13 compute-0 sudo[205806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:13 compute-0 sudo[205806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:13 compute-0 sudo[205806]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:13 compute-0 sudo[205831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:14:13 compute-0 sudo[205831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:13 compute-0 ceph-mon[192914]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.618613281 +0000 UTC m=+0.078378163 container create 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.58158338 +0000 UTC m=+0.041348322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:13 compute-0 systemd[1]: Started libpod-conmon-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope.
Dec 05 01:14:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.74250589 +0000 UTC m=+0.202270772 container init 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.757127007 +0000 UTC m=+0.216891859 container start 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.762667601 +0000 UTC m=+0.222432573 container attach 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:14:13 compute-0 naughty_pike[205910]: 167 167
Dec 05 01:14:13 compute-0 systemd[1]: libpod-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope: Deactivated successfully.
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.76586178 +0000 UTC m=+0.225626632 container died 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b68dd936ed350d030fa6eef477b214908c3243b23257b3f796d6014e90297da-merged.mount: Deactivated successfully.
Dec 05 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.829653726 +0000 UTC m=+0.289418578 container remove 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:13 compute-0 systemd[1]: libpod-conmon-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope: Deactivated successfully.
Dec 05 01:14:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.08912132 +0000 UTC m=+0.091451167 container create 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.062583791 +0000 UTC m=+0.064913668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:14 compute-0 systemd[1]: Started libpod-conmon-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope.
Dec 05 01:14:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
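[Annotation] The four xfs messages are informational: these filesystems were created without the bigtime feature, so their inode timestamps cap out at 2038-01-19 (0x7fffffff seconds), and the kernel notes this for each path bind-mounted into the container. Whether a filesystem has the feature is visible in its metadata; a sketch assuming the root filesystem is XFS and xfsprogs is installed:

    import subprocess

    # xfs_info prints "bigtime=1" for filesystems whose timestamps extend
    # beyond 2038; "bigtime=0" matches the kernel warnings above.
    out = subprocess.run(
        ["xfs_info", "/"],
        capture_output=True, text=True, check=True,
    )
    print("bigtime=1" in out.stdout)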
Dec 05 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.320716597 +0000 UTC m=+0.323046494 container init 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.331341603 +0000 UTC m=+0.333671440 container start 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.336561969 +0000 UTC m=+0.338891846 container attach 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:14:15 compute-0 priceless_yalow[205950]: {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     "0": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "devices": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "/dev/loop3"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             ],
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_name": "ceph_lv0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_size": "21470642176",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "name": "ceph_lv0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "tags": {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.crush_device_class": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.encrypted": "0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_id": "0",
Dec 05 01:14:15 compute-0 ceph-mon[192914]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.vdo": "0"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             },
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "vg_name": "ceph_vg0"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         }
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     ],
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     "1": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "devices": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "/dev/loop4"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             ],
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_name": "ceph_lv1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_size": "21470642176",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "name": "ceph_lv1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "tags": {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.crush_device_class": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.encrypted": "0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_id": "1",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.vdo": "0"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             },
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "vg_name": "ceph_vg1"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         }
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     ],
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     "2": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "devices": [
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "/dev/loop5"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             ],
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_name": "ceph_lv2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_size": "21470642176",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "name": "ceph_lv2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "tags": {
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.crush_device_class": "",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.encrypted": "0",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osd_id": "2",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:                 "ceph.vdo": "0"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             },
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "type": "block",
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:             "vg_name": "ceph_vg2"
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:         }
Dec 05 01:14:15 compute-0 priceless_yalow[205950]:     ]
Dec 05 01:14:15 compute-0 priceless_yalow[205950]: }
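[Annotation] The JSON above is the output of the "ceph-volume lvm list --format json" run that cephadm requested at 01:14:13: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags carrying the cluster fsid, OSD fsid, and encryption state. The structure is easy to consume programmatically; a sketch that reduces a saved copy of this output to osd_id -> device mappings (lvm_list.json is a hypothetical filename):

    import json

    # Reduce "ceph-volume lvm list --format json" output to osd -> devices.
    with open("lvm_list.json") as fh:
        lvm_list = json.load(fh)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {','.join(lv['devices'])} "
                  f"lv={lv['lv_path']} fsid={tags['ceph.osd_fsid']}")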
Dec 05 01:14:15 compute-0 systemd[1]: libpod-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope: Deactivated successfully.
Dec 05 01:14:15 compute-0 podman[205959]: 2025-12-05 01:14:15.252810367 +0000 UTC m=+0.046250909 container died 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31-merged.mount: Deactivated successfully.
Dec 05 01:14:15 compute-0 podman[205959]: 2025-12-05 01:14:15.348646535 +0000 UTC m=+0.142086997 container remove 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:15 compute-0 systemd[1]: libpod-conmon-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope: Deactivated successfully.
Dec 05 01:14:15 compute-0 sudo[205831]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec 05 01:14:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 01:14:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:15 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 05 01:14:15 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 05 01:14:15 compute-0 sudo[205971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:15 compute-0 sudo[205971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:15 compute-0 sudo[205971]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:15 compute-0 sudo[205996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:15 compute-0 sudo[205996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:15 compute-0 sudo[205996]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:15 compute-0 sudo[206021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:15 compute-0 sudo[206021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:15 compute-0 sudo[206021]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:15 compute-0 sudo[206046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:14:15 compute-0 sudo[206046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
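[Annotation] Here the mgr's cephadm module pushes osd.0 out to the host: over the ceph-admin SSH session it runs the copied cephadm binary with "_orch deploy", which writes the daemon directory under /var/lib/ceph/<fsid>/, renders a systemd unit, and starts the containerized daemon. cephadm-managed daemons follow the ceph-<fsid>@<daemon> unit naming convention, so the result can be checked directly; a sketch using the fsid from this log:

    import subprocess

    # cephadm-managed daemons run under systemd units named ceph-<fsid>@<name>.
    fsid = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    subprocess.run(
        ["systemctl", "status", f"ceph-{fsid}@osd.0.service"],
        check=False,
    )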
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:14:16
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:14:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 05 01:14:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.356202165 +0000 UTC m=+0.067834030 container create 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:14:16 compute-0 systemd[1]: Started libpod-conmon-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope.
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.321530979 +0000 UTC m=+0.033162864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.472354428 +0000 UTC m=+0.183986353 container init 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.482950303 +0000 UTC m=+0.194582158 container start 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.489810444 +0000 UTC m=+0.201442369 container attach 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:16 compute-0 cool_allen[206126]: 167 167
Dec 05 01:14:16 compute-0 systemd[1]: libpod-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope: Deactivated successfully.
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.494067633 +0000 UTC m=+0.205699458 container died 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-46ae0d819869b882ca625f6881492e517bcb81fd0c5e3f9da4391ca048923607-merged.mount: Deactivated successfully.
Dec 05 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.550084723 +0000 UTC m=+0.261716568 container remove 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:14:16 compute-0 systemd[1]: libpod-conmon-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope: Deactivated successfully.
Dec 05 01:14:16 compute-0 podman[206157]: 2025-12-05 01:14:16.895410676 +0000 UTC m=+0.063274442 container create 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:16 compute-0 podman[206157]: 2025-12-05 01:14:16.871292675 +0000 UTC m=+0.039156451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:16 compute-0 systemd[1]: Started libpod-conmon-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope.
Dec 05 01:14:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.062289152 +0000 UTC m=+0.230152958 container init 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.081690342 +0000 UTC m=+0.249554108 container start 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.086176077 +0000 UTC m=+0.254039893 container attach 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:17 compute-0 ceph-mon[192914]: Deploying daemon osd.0 on compute-0
Dec 05 01:14:17 compute-0 ceph-mon[192914]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 05 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]:                             [--no-systemd] [--no-tmpfs]
Dec 05 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 05 01:14:17 compute-0 systemd[1]: libpod-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope: Deactivated successfully.
Dec 05 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.736508142 +0000 UTC m=+0.904371918 container died 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220-merged.mount: Deactivated successfully.
Dec 05 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.822477675 +0000 UTC m=+0.990341451 container remove 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:14:17 compute-0 systemd[1]: libpod-conmon-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope: Deactivated successfully.
Dec 05 01:14:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:18 compute-0 systemd[1]: Reloading.
Dec 05 01:14:18 compute-0 systemd-sysv-generator[206237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:18 compute-0 systemd-rc-local-generator[206232]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:18 compute-0 systemd[1]: Reloading.
Dec 05 01:14:18 compute-0 podman[206246]: 2025-12-05 01:14:18.763843814 +0000 UTC m=+0.118471890 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:14:18 compute-0 systemd-rc-local-generator[206288]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:18 compute-0 systemd-sysv-generator[206294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:19 compute-0 systemd[1]: Starting Ceph osd.0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:14:19 compute-0 ceph-mon[192914]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.472818741 +0000 UTC m=+0.102274728 container create 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.40814587 +0000 UTC m=+0.037601867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.605783552 +0000 UTC m=+0.235239559 container init 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.621044817 +0000 UTC m=+0.250500794 container start 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.627342582 +0000 UTC m=+0.256798559 container attach 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:14:19 compute-0 podman[206365]: 2025-12-05 01:14:19.675164624 +0000 UTC m=+0.133613061 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:14:19 compute-0 podman[206368]: 2025-12-05 01:14:19.728383045 +0000 UTC m=+0.177929334 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:14:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:20 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:20 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:20 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 01:14:20 compute-0 bash[206350]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 05 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 05 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 05 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: --> ceph-volume raw activate successful for osd ID: 0
Dec 05 01:14:21 compute-0 bash[206350]: --> ceph-volume raw activate successful for osd ID: 0
Dec 05 01:14:21 compute-0 systemd[1]: libpod-155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1.scope: Deactivated successfully.
Dec 05 01:14:21 compute-0 systemd[1]: libpod-155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1.scope: Consumed 1.482s CPU time.
Dec 05 01:14:21 compute-0 podman[206350]: 2025-12-05 01:14:21.077720841 +0000 UTC m=+1.707176848 container died 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3-merged.mount: Deactivated successfully.
Dec 05 01:14:21 compute-0 podman[206350]: 2025-12-05 01:14:21.211383212 +0000 UTC m=+1.840839219 container remove 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 01:14:21 compute-0 ceph-mon[192914]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:21 compute-0 podman[206565]: 2025-12-05 01:14:21.268152792 +0000 UTC m=+0.108918623 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.534038534 +0000 UTC m=+0.078796904 container create a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.500707616 +0000 UTC m=+0.045466036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.680494742 +0000 UTC m=+0.225253162 container init a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.697189136 +0000 UTC m=+0.241947496 container start a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:21 compute-0 bash[206628]: a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9
Dec 05 01:14:21 compute-0 systemd[1]: Started Ceph osd.0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:14:21 compute-0 ceph-osd[206647]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:14:21 compute-0 ceph-osd[206647]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 05 01:14:21 compute-0 ceph-osd[206647]: pidfile_write: ignore empty --pid-file
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) close
Dec 05 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) close
Dec 05 01:14:21 compute-0 sudo[206046]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec 05 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:21 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 05 01:14:21 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 05 01:14:21 compute-0 sudo[206662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:21 compute-0 sudo[206662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:21 compute-0 sudo[206662]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:21 compute-0 sudo[206687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:22 compute-0 sudo[206687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:22 compute-0 sudo[206687]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:22 compute-0 ceph-osd[206647]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 05 01:14:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:22 compute-0 ceph-osd[206647]: load: jerasure load: lrc 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 05 01:14:22 compute-0 sudo[206712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:22 compute-0 sudo[206712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:22 compute-0 sudo[206712]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:22 compute-0 sudo[206742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:14:22 compute-0 sudo[206742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 05 01:14:22 compute-0 ceph-osd[206647]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs mount
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs mount shared_bdev_used = 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Git sha 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB SUMMARY
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB Session ID:  GYHZQKVIA575O32EF2LZ
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                     Options.env: 0x5630e5aa7e30
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                Options.info_log: 0x5630e4ca8aa0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.write_buffer_manager: 0x5630e5bae460
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compression algorithms supported:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
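The dumps in this span are BlueStore's sharded RocksDB column families (p-0..p-2, O-0..O-2); each shard of a prefix is opened with identical ColumnFamilyOptions, which is why the blocks repeat verbatim apart from the family name and block-cache pointer. As a rough C++ sketch of what one dump corresponds to in the RocksDB API: the numeric values below are copied from the log lines, but the helper name and the bloom bits-per-key are assumptions (the dump only says "filter_policy: bloomfilter"), so treat this as an illustration, not Ceph source.

    // Hypothetical reconstruction (not Ceph source): options that would print
    // a dump like the ones above. Values are taken from the log; the helper
    // name MakeShardOptions and the bloom bits-per-key are assumed.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeShardOptions(
        std::shared_ptr<rocksdb::Cache> block_cache) {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;                 // 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;             // 64 MiB SSTs
      cf.max_bytes_for_level_base = 1073741824;        // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                                // 30 days
      cf.force_consistency_checks = true;

      rocksdb::BlockBasedTableOptions table;
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.block_cache = std::move(block_cache);      // shared by all shards of a prefix
      table.whole_key_filtering = true;
      table.format_version = 5;
      table.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));          // bits/key not in the dump; assumed
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }

With these values and level_compaction_dynamic_level_bytes left at 0, the static level targets work out to 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, and so on (base x multiplier^(n-1)).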
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
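Each family also registers a CompactOnDeletionCollector (sliding window 32768, trigger 16384, ratio 0): an SST is flagged for compaction once a sliding window of 32768 consecutive entries contains 16384 tombstones, which keeps delete-heavy OSD workloads from accumulating dead keys. A hedged sketch using RocksDB's stock factory, extending the MakeShardOptions sketch above with the parameters printed in the dump:

    // Sketch: attach RocksDB's deletion-triggered compaction collector with the
    // window/trigger values from the dump (ratio 0 disables the ratio criterion).
    #include <rocksdb/utilities/table_properties_collectors.h>

    cf.table_properties_collector_factories.emplace_back(
        rocksdb::NewCompactOnDeletionCollectorFactory(
            /*sliding_window_size=*/32768,
            /*deletion_trigger=*/16384,
            /*deletion_ratio=*/0.0));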
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
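Note the cache pointers: the p-* families all report block_cache 0x5630e4c90dd0 (capacity 483183820, roughly 460 MiB), while the O-* families beginning here share a second cache, 0x5630e4c90430 (capacity 536870912, 512 MiB), both BinnedLRUCache with num_shard_bits 4. BinnedLRUCache is Ceph's own cache implementation; as a sketch only, the stock RocksDB stand-in for two shared, 16-way-sharded caches with these parameters would be:

    // Sketch: rocksdb::NewLRUCache standing in for Ceph's BinnedLRUCache.
    #include <rocksdb/cache.h>

    auto p_cache = rocksdb::NewLRUCache(
        /*capacity=*/483183820,          // shared by p-0..p-2 (block_cache 0x...dd0)
        /*num_shard_bits=*/4,            // 2^4 = 16 internal shards
        /*strict_capacity_limit=*/false,
        /*high_pri_pool_ratio=*/0.0);
    auto o_cache = rocksdb::NewLRUCache(536870912, 4, false, 0.0);  // O-* shards (0x...430)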
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.647188285 +0000 UTC m=+0.064080795 container create b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a58bf9c-cd82-4306-99b8-5561449df99e
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262667754, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262668199, "job": 1, "event": "recovery_finished"}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
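That _open_db line collapses the whole per-column-family dump into the one option string BlueStore actually handed to RocksDB; it is the same comma-separated key=value format Ceph accepts in bluestore_rocksdb_options. A small Python sketch that splits such a flat string into a dict for comparing OSDs; it assumes no nested option groups, which holds for the string logged above:

```python
def parse_rocksdb_options(opt_str):
    """Split a flat 'k=v,k=v,...' RocksDB option string into a dict of strings."""
    opts = {}
    for pair in opt_str.split(","):
        key, _, value = pair.partition("=")
        opts[key.strip()] = value.strip()
    return opts

# Option string copied verbatim from the _open_db log line above.
OPTS = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

parsed = parse_rocksdb_options(OPTS)
assert parsed["max_write_buffer_number"] == "64"
print(parsed)
```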
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: freelist init
Dec 05 01:14:22 compute-0 ceph-osd[206647]: freelist _read_cfg
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
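The allocator line reports its sizes in hex; quick arithmetic on those exact values confirms the human-readable figures printed alongside them (20 GiB capacity, effectively an empty device with only three 4 KiB blocks in use). In Python:

```python
capacity = 0x4FFC00000   # from the _init_alloc line above
free     = 0x4FFBFD000
block    = 0x1000        # 4 KiB allocation unit

used = capacity - free
print(f"capacity: {capacity} bytes = {capacity / 2**30:.2f} GiB")   # ~20.00 GiB
print(f"used:     {used} bytes = {used // block} blocks of 4 KiB")  # 12288 bytes = 3 blocks
```

The 21470642176-byte capacity matches the bdev "open size" figure logged a few lines below when the device is reopened.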
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs umount
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) close
Dec 05 01:14:22 compute-0 systemd[1]: Started libpod-conmon-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope.
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.627560708 +0000 UTC m=+0.044453248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.757740842 +0000 UTC m=+0.174633382 container init b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.768525163 +0000 UTC m=+0.185417683 container start b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.773048709 +0000 UTC m=+0.189941259 container attach b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:22 compute-0 jolly_albattani[207018]: 167 167
Dec 05 01:14:22 compute-0 systemd[1]: libpod-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope: Deactivated successfully.
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.777672267 +0000 UTC m=+0.194564807 container died b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 05 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:22 compute-0 ceph-mon[192914]: Deploying daemon osd.1 on compute-0
Dec 05 01:14:22 compute-0 ceph-mon[192914]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d6bade19b92aa99aa7adb5c26c74b7db2848ded4da31b67cf8097e7a12d5ff5-merged.mount: Deactivated successfully.
Dec 05 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.828511643 +0000 UTC m=+0.245404163 container remove b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:14:22 compute-0 systemd[1]: libpod-conmon-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope: Deactivated successfully.
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs mount
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluefs mount shared_bdev_used = 4718592
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Git sha 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB SUMMARY
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB Session ID:  GYHZQKVIA575O32EF2LY
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                     Options.env: 0x5630e5c44230
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                Options.info_log: 0x5630e4ca8840
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.write_buffer_manager: 0x5630e5bae460
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compression algorithms supported:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DMutex implementation: pthread_mutex_t
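Per the support list above, this rocksdb build was compiled without zstd, bzip2, or xpress; only LZ4, Zlib, LZ4HC, and Snappy are available, which is consistent with compression=kLZ4Compression in the _open_db option string earlier. A trivial sanity check over the flags exactly as logged:

```python
# Support flags copied from the "Compression algorithms supported" lines above.
supported = {
    "kZSTD": False,
    "kXpressCompression": False,
    "kBZip2Compression": False,
    "kZSTDNotFinalCompression": False,
    "kLZ4Compression": True,
    "kZlibCompression": True,
    "kLZ4HCCompression": True,
    "kSnappyCompression": True,
}

requested = "kLZ4Compression"  # Options.compression from the dump
assert supported[requested], f"{requested} is not compiled into this rocksdb build"
```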
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
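With level_compaction_dynamic_level_bytes off and every max_bytes_for_level_multiplier_addtl[n] equal to 1, the per-level byte targets for this column family follow a plain geometric progression from max_bytes_for_level_base (1 GiB) with multiplier 8, up through num_levels = 7; L0 is governed by the file-count triggers rather than a byte target. Worked out in Python from the values in the [default] dump above:

```python
# Values from the [default] column family options above.
base       = 1073741824   # max_bytes_for_level_base (1 GiB)
multiplier = 8.0          # max_bytes_for_level_multiplier
num_levels = 7

for level in range(1, num_levels):
    target = base * multiplier ** (level - 1)
    print(f"L{level} target: {target / 2**30:.0f} GiB")
# L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
```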
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
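Annotation: the per-level byte budgets follow directly from the values in the dump above. With level_compaction_dynamic_level_bytes=0, max_bytes_for_level_base=1 GiB and a multiplier of 8 give static targets of 1/8/64/512/4096/32768 GiB for L1..L6, while L0 is governed by file counts rather than bytes. A minimal sketch of that arithmetic (illustration only, not a RocksDB API call):

    # Implied level capacities from the options dump above; arithmetic only.
    BASE = 1_073_741_824      # Options.max_bytes_for_level_base (1 GiB, L1 target)
    MULT = 8.0                # Options.max_bytes_for_level_multiplier
    NUM_LEVELS = 7            # Options.num_levels

    for level in range(1, NUM_LEVELS):
        cap = BASE * MULT ** (level - 1)
        print(f"L{level}: {cap / 2**30:>6,.0f} GiB")

    # L0 is file-count driven instead:
    #   8 files  -> compaction starts (level0_file_num_compaction_trigger)
    #   20 files -> writes slowed     (level0_slowdown_writes_trigger)
    #   36 files -> writes stalled    (level0_stop_writes_trigger)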
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90dd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
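Annotation: the write-path numbers in the p-2 dump bound the memtable footprint. A back-of-envelope check; note the worst case is theoretical, since BlueStore's cache autotuning normally keeps the OSD far below it (that caveat is background knowledge about Ceph, not something this log states):

    # Memtable budget per column family, straight from the values logged above.
    WRITE_BUFFER_SIZE = 16 * 2**20    # Options.write_buffer_size (16 MiB)
    MAX_WRITE_BUFFER_NUMBER = 64      # active + immutable memtables
    MIN_TO_MERGE = 6                  # Options.min_write_buffer_number_to_merge

    worst_case = WRITE_BUFFER_SIZE * MAX_WRITE_BUFFER_NUMBER
    flush_batch = WRITE_BUFFER_SIZE * MIN_TO_MERGE
    print(f"worst-case memtables per CF: {worst_case / 2**20:.0f} MiB")   # 1024 MiB
    print(f"data merged per flush:      ~{flush_batch / 2**20:.0f} MiB")  # 96 MiB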
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
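Annotation: the O-* dumps are near-identical to p-2; the visible difference is the block cache (p-2 has its own ~461 MiB BinnedLRUCache at 0x5630e4c90dd0, while the O-* families share a 512 MiB cache at 0x5630e4c90430). A throwaway parser to confirm that by diffing two families' dumps from the journal; the unit name passed to journalctl is an assumption, adjust for the local host:

    import re
    import subprocess

    def options_for(cf: str, text: str) -> dict:
        """Collect 'name: value' pairs between one CF header and the next."""
        out, active = {}, False
        for line in text.splitlines():
            if "Options for column family [" in line:
                if active:
                    break                      # reached the next family's dump
                active = f"[{cf}]" in line
                continue
            if not active:
                continue
            if "rocksdb:" in line:             # strip the journald/daemon prefix
                line = line.split("rocksdb:", 1)[1]
            m = re.match(r"\s*([A-Za-z_][\w.\[\]]*)\s*:\s*(\S.*)$", line)
            if m:
                out[m.group(1)] = m.group(2).strip()
        return out

    text = subprocess.run(
        ["journalctl", "-u", "ceph-osd@0", "--no-pager"],   # unit name assumed
        capture_output=True, text=True).stdout

    a, b = options_for("p-2", text), options_for("O-0", text)
    for key in sorted(a.keys() & b.keys()):
        if a[key] != b[key]:
            print(f"{key}: p-2={a[key]}  O-0={b[key]}")
    # Expected: only block_cache (pointer) and capacity (483183820 vs 536870912) differ.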
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
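Annotation: every family registers the CompactOnDeletionCollector shown above (sliding window 32768, deletion trigger 16384, ratio 0, i.e. the ratio path disabled). It marks an SST for compaction when a window of consecutive entries is tombstone-heavy, which is what lets space be reclaimed quickly after bulk deletes. A rough model of the windowed trigger; RocksDB's real collector approximates the window with buckets, so treat this as the idea, not the exact code path:

    from collections import deque

    def needs_compaction(entries, window=32768, trigger=16384):
        """entries: iterable of bools, True meaning a deletion/tombstone."""
        recent, deletions = deque(), 0
        for is_delete in entries:
            recent.append(is_delete)
            deletions += is_delete
            if len(recent) > window:
                deletions -= recent.popleft()
            if deletions >= trigger:
                return True
        return False

    # A burst of 20k tombstones inside 30k entries trips the trigger:
    print(needs_compaction([False] * 5000 + [True] * 20000 + [False] * 5000))  # True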
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x5630e4c90430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a58bf9c-cd82-4306-99b8-5561449df99e
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262923082, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262930358, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262935715, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262940663, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262943577, "job": 1, "event": "recovery_finished"}
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5630e5c98000
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB pointer 0x5630e4ccba00
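The recovery_started / table_file_creation / recovery_finished entries above are RocksDB EVENT_LOG_v1 records: a fixed marker followed by one JSON object per line. A minimal sketch for pulling them out of journal output, assuming lines shaped like the ceph-osd entries above are piped in on stdin (the script name and invocation are hypothetical):

    #!/usr/bin/env python3
    # Extract RocksDB EVENT_LOG_v1 JSON records from journald output, e.g.
    #   journalctl --no-pager | python3 parse_event_log.py
    import json
    import sys

    MARKER = "EVENT_LOG_v1 "  # literal prefix used in the log lines above

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            event = json.loads(line[idx + len(MARKER):])
        except json.JSONDecodeError:
            continue  # truncated or non-JSON tail; skip it
        # e.g. recovery_started, table_file_creation, recovery_finished
        print(event.get("event"), "job", event.get("job"),
              "cf", event.get("cf_name", "-"),
              "file", event.get("file_number", "-"))

Run against the three table_file_creation events above, this would report the new SST files 35, 36 and 37 and the column families (default, p-0, O-2) they were flushed for during WAL recovery.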
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
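BlueStore logs the RocksDB option string it applied as one comma-separated key=value list, which is awkward to eyeball against the Options.* dump above. A sketch that splits it into a dict, with the string copied verbatim from the _open_db line:

    # Split BlueStore's rocksdb option string (as printed by _open_db above)
    # into key/value pairs for easier comparison with the options dump.
    opts_str = (
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
    )
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["write_buffer_size"])    # 16777216 (16 MiB memtables)
    print(opts["max_background_jobs"])  # 4

Note that values keep RocksDB's mixed conventions (raw bytes for write_buffer_size, a unit suffix for compaction_readahead_size=2MB), so any further processing has to normalize units itself.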
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 05 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 3.3e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
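The stats dump above repeats the same block once per column family (default, m-0..m-2, p-0..p-2, O-0..O-2, L, P), which makes the one interesting summary line, Cumulative compaction, easy to lose. A sketch that reduces a saved dump to one line per column family; the filename stats.txt is an assumption (copy the journal block into it first):

    # Print each column family's "Cumulative compaction" summary from a
    # saved RocksDB "DUMPING STATS" block (saved to stats.txt by hand).
    import re

    cf = None
    with open("stats.txt") as fh:
        for line in fh:
            m = re.search(r"\*\* Compaction Stats \[(.+?)\] \*\*", line)
            if m:
                cf = m.group(1)
                continue
            if cf and line.lstrip().startswith("Cumulative compaction:"):
                print(f"{cf}: {line.strip()}")
                cf = None  # printed once per CF per dump

On the dump above this yields twelve lines, all near zero, which is expected for an OSD that has only just opened its store and holds no PGs yet.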
Dec 05 01:14:22 compute-0 ceph-osd[206647]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 01:14:22 compute-0 ceph-osd[206647]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load lua
Dec 05 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load sdk
Dec 05 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load test_remote_reads
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
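The crush map feature value logged three times above is a 64-bit bitmask; translating set bits into named CEPH_FEATURE_* flags requires the feature table from the matching Ceph release, but the bit positions themselves fall out of plain arithmetic on the logged value:

    # List the set bit positions in the crush-map feature mask above.
    # Mapping positions to named CEPH_FEATURE_* flags needs the feature
    # table from the matching Ceph release (not reproduced here).
    features = 288232575208783872
    print(hex(features), [i for i in range(64) if features >> i & 1])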
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 load_pgs
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 load_pgs opened 0 pgs
Dec 05 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 log_to_monitors true
Dec 05 01:14:22 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:22.990+0000 7f938f71a740 -1 osd.0 0 log_to_monitors true
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.108701112 +0000 UTC m=+0.058068937 container create 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:14:23 compute-0 systemd[1]: Started libpod-conmon-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope.
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.085511337 +0000 UTC m=+0.034879162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.281178844 +0000 UTC m=+0.230546679 container init 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.305611414 +0000 UTC m=+0.254979209 container start 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.310102659 +0000 UTC m=+0.259470494 container attach 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:23 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 05 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]:                             [--no-systemd] [--no-tmpfs]
Dec 05 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 05 01:14:23 compute-0 systemd[1]: libpod-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope: Deactivated successfully.
Dec 05 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.962183233 +0000 UTC m=+0.911551058 container died 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44-merged.mount: Deactivated successfully.
Dec 05 01:14:24 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 01:14:24 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 01:14:24 compute-0 podman[207263]: 2025-12-05 01:14:24.043917449 +0000 UTC m=+0.993285254 container remove 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:24 compute-0 systemd[1]: libpod-conmon-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope: Deactivated successfully.
Dec 05 01:14:24 compute-0 podman[207285]: 2025-12-05 01:14:24.138135242 +0000 UTC m=+0.142355144 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec 05 01:14:24 compute-0 systemd[1]: Reloading.
Dec 05 01:14:24 compute-0 systemd-sysv-generator[207366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:24 compute-0 systemd-rc-local-generator[207361]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:24 compute-0 systemd[1]: Reloading.
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 done with init, starting boot process
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 start_boot
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 01:14:24 compute-0 ceph-osd[206647]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec 05 01:14:24 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 05 01:14:24 compute-0 ceph-mon[192914]: osdmap e7: 3 total, 0 up, 3 in
Dec 05 01:14:24 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mon[192914]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:25 compute-0 systemd-sysv-generator[207402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:25 compute-0 systemd-rc-local-generator[207396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:25 compute-0 sudo[207436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jddrkwkainbzizswbwsxbdyefdufwabe ; /usr/bin/python3'
Dec 05 01:14:25 compute-0 systemd[1]: Starting Ceph osd.1 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:14:25 compute-0 sudo[207436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:14:25 compute-0 python3[207443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.547230041 +0000 UTC m=+0.073698072 container create ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.592465811 +0000 UTC m=+0.067451999 container create 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.515129038 +0000 UTC m=+0.041597089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:14:25 compute-0 systemd[1]: Started libpod-conmon-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope.
Dec 05 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.560567583 +0000 UTC m=+0.035553781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.707300088 +0000 UTC m=+0.233768139 container init ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.722868191 +0000 UTC m=+0.249336222 container start ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.733136747 +0000 UTC m=+0.208122955 container init 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.741165391 +0000 UTC m=+0.267633442 container attach ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.742820257 +0000 UTC m=+0.217806445 container start 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.763610716 +0000 UTC m=+0.238596914 container attach 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:14:25 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec 05 01:14:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:25 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:25 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:25 compute-0 ceph-mon[192914]: osdmap e8: 3 total, 0 up, 3 in
Dec 05 01:14:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 05 01:14:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252479157' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:14:26 compute-0 infallible_mahavira[207513]: 
Dec 05 01:14:26 compute-0 infallible_mahavira[207513]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T01:14:18.046599+0000","services":{}},"progress_events":{}}
Dec 05 01:14:26 compute-0 systemd[1]: libpod-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope: Deactivated successfully.
Dec 05 01:14:26 compute-0 conmon[207513]: conmon ff03e9b5fa3932394169 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope/container/memory.events
Dec 05 01:14:26 compute-0 podman[207550]: 2025-12-05 01:14:26.456454794 +0000 UTC m=+0.027874657 container died ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03-merged.mount: Deactivated successfully.
Dec 05 01:14:26 compute-0 podman[207550]: 2025-12-05 01:14:26.546291636 +0000 UTC m=+0.117711469 container remove ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:26 compute-0 systemd[1]: libpod-conmon-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope: Deactivated successfully.
Dec 05 01:14:26 compute-0 sudo[207436]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 05 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: --> ceph-volume raw activate successful for osd ID: 1
Dec 05 01:14:26 compute-0 bash[207495]: --> ceph-volume raw activate successful for osd ID: 1
Dec 05 01:14:26 compute-0 systemd[1]: libpod-3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c.scope: Deactivated successfully.
Dec 05 01:14:26 compute-0 systemd[1]: libpod-3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c.scope: Consumed 1.124s CPU time.
Dec 05 01:14:26 compute-0 podman[207495]: 2025-12-05 01:14:26.872316981 +0000 UTC m=+1.347303169 container died 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:26 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec 05 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:26 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:26 compute-0 ceph-mon[192914]: purged_snaps scrub starts
Dec 05 01:14:26 compute-0 ceph-mon[192914]: purged_snaps scrub ok
Dec 05 01:14:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:26 compute-0 ceph-mon[192914]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1252479157' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:14:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5-merged.mount: Deactivated successfully.
Dec 05 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:27 compute-0 podman[207702]: 2025-12-05 01:14:27.00049143 +0000 UTC m=+0.091097878 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal)
Dec 05 01:14:27 compute-0 podman[207495]: 2025-12-05 01:14:27.030534036 +0000 UTC m=+1.505520214 container remove 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.30619776 +0000 UTC m=+0.071946814 container create 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.273992234 +0000 UTC m=+0.039741288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.419191996 +0000 UTC m=+0.184941060 container init 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.432098136 +0000 UTC m=+0.197847180 container start 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:27 compute-0 bash[207777]: 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839
Dec 05 01:14:27 compute-0 systemd[1]: Started Ceph osd.1 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:14:27 compute-0 ceph-osd[207795]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:14:27 compute-0 ceph-osd[207795]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 05 01:14:27 compute-0 ceph-osd[207795]: pidfile_write: ignore empty --pid-file
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 01:14:27 compute-0 sudo[206742]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec 05 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 05 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:27 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 05 01:14:27 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 05 01:14:27 compute-0 sudo[207810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:27 compute-0 sudo[207810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:27 compute-0 sudo[207810]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:27 compute-0 sudo[207835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:27 compute-0 sudo[207835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:27 compute-0 sudo[207835]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 01:14:27 compute-0 sudo[207860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:27 compute-0 sudo[207860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:27 compute-0 sudo[207860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:27 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec 05 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:27 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:27 compute-0 sudo[207887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:14:27 compute-0 sudo[207887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 05 01:14:28 compute-0 ceph-osd[207795]: load: jerasure load: lrc 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.461503995 +0000 UTC m=+0.064665822 container create b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:14:28 compute-0 systemd[1]: Started libpod-conmon-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope.
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.431660324 +0000 UTC m=+0.034822221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 05 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:28 compute-0 ceph-mon[192914]: Deploying daemon osd.2 on compute-0
Dec 05 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:28 compute-0 ceph-mon[192914]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.577133434 +0000 UTC m=+0.180295311 container init b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 01:14:28 compute-0 ceph-osd[207795]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs mount
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.58884155 +0000 UTC m=+0.192003377 container start b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs mount shared_bdev_used = 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Git sha 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB SUMMARY
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB Session ID:  JNG2OMSWA32VFJZW4PQ8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                     Options.env: 0x56484752be30
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                Options.info_log: 0x564846722720
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.594105916 +0000 UTC m=+0.197267783 container attach b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.write_buffer_manager: 0x564847638460
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compression algorithms supported:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 affectionate_khorana[207971]: 167 167
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 systemd[1]: libpod-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope: Deactivated successfully.
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.599187958 +0000 UTC m=+0.202349775 container died b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
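The two pending-compaction limits printed above are RocksDB's write-throttling thresholds: once the estimated compaction debt for the column family crosses the soft limit, writes are slowed, and at the hard limit they stall outright. A minimal Python check (the raw byte values are copied from the dump; only the GiB conversion is added) shows the 64 GiB / 256 GiB split:

    SOFT = 68719476736   # Options.soft_pending_compaction_bytes_limit
    HARD = 274877906944  # Options.hard_pending_compaction_bytes_limit

    GiB = 1 << 30
    print(SOFT / GiB, HARD / GiB)  # 64.0 256.0: slow writes at 64 GiB of debt, stall at 256 GiB
    assert SOFT < HARD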
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
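The memtable settings dumped for [m-2] bound how much unflushed data the family can hold. Under standard RocksDB semantics (values copied from the dump above), each memtable is 16 MiB, up to 64 may exist before writes stall, and six immutable memtables are merged into each flush, so L0 files land at roughly 96 MiB:

    write_buffer_size = 16777216          # Options.write_buffer_size (16 MiB)
    max_write_buffer_number = 64          # Options.max_write_buffer_number
    min_write_buffer_number_to_merge = 6  # Options.min_write_buffer_number_to_merge

    MiB = 1 << 20
    # Worst-case memtable memory for this column family before writes stall:
    print(write_buffer_size * max_write_buffer_number / MiB)           # 1024.0 MiB
    # Approximate size of each flushed L0 file (six memtables per flush):
    print(write_buffer_size * min_write_buffer_number_to_merge / MiB)  # 96.0 MiB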
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
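With level_compaction_dynamic_level_bytes at 0 and every max_bytes_for_level_multiplier_addtl entry at 1, the level targets for [p-0] grow statically from max_bytes_for_level_base by the 8x multiplier. A short sketch of the standard static-level formula, using the values from the dump:

    base = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0    # Options.max_bytes_for_level_multiplier
    num_levels = 7      # Options.num_levels

    GiB = 1 << 30
    for level in range(1, num_levels):
        target = base * multiplier ** (level - 1)
        print(f"L{level} target: {target / GiB:g} GiB")
    # L1 1, L2 8, L3 64, L4 512, L5 4096, L6 32768 (GiB)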
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
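The BinnedLRUCache reported for [p-1] (the same pointer appears in the other dumps above) is divided into 2**num_shard_bits shards to reduce lock contention. From the dumped values that is 16 shards of roughly 28.8 MiB, over a total capacity of about 45% of 1 GiB; reading that as a Ceph-computed share of the OSD cache budget is an inference, not something the log states:

    capacity = 483183820   # block_cache_options.capacity (bytes)
    num_shard_bits = 4     # block_cache_options.num_shard_bits

    MiB, GiB = 1 << 20, 1 << 30
    shards = 1 << num_shard_bits
    print(shards, capacity / shards / MiB)  # 16 shards, ~28.8 MiB each
    print(capacity / GiB)                   # ~0.45 of a 1 GiB budget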
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
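Every column family dumped so far reports the same block_cache pointer (0x56484670add0), i.e. they share one 483183820-byte BinnedLRUCache, while the [O-0] family that follows carries its own 536870912-byte cache at 0x56484670a430. A rough Python sketch that recovers this grouping from the log text (the regexes are illustrative and rely only on the line shapes above):

    import re

    def cache_by_cf(log_text):
        """Group column families by the block_cache pointer in their option dump."""
        groups, current_cf = {}, None
        for line in log_text.splitlines():
            m = re.search(r"Options for column family \[([^\]]+)\]", line)
            if m:
                current_cf = m.group(1)
                continue
            m = re.search(r"block_cache: (0x[0-9a-f]+)", line)
            if m and current_cf:
                groups.setdefault(m.group(1), []).append(current_cf)
        return groups

    # Fed this excerpt, it yields {'0x56484670add0': ['m-2', 'p-0', 'p-1', 'p-2'],
    #                              '0x56484670a430': ['O-0']}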
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670a430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
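Every column-family dump in this journal has the same shape: a journald prefix, a rocksdb: tag, then an Options.<key>: <value> pair (the table_factory sub-options ride on unprefixed continuation lines). Rather than eyeballing 130-odd lines per family, such dumps can be folded into dicts and diffed. The sketch below is an editor's illustration in Python, assuming input shaped exactly like the excerpt above; it is not part of Ceph or RocksDB.

    import re

    # "... rocksdb:   Options.<key>: <value>"; keys may contain dots and
    # brackets, e.g. compression_opts.level or ..._multiplier_addtl[0].
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([A-Za-z0-9_.\[\]]+)\s*:\s*(.*?)\s*$")

    def parse_cf_dump(lines):
        """Fold one column-family dump into {option_name: value}."""
        return {m.group(1): m.group(2)
                for m in map(OPT_RE.search, lines) if m}

    # Example: diff two dumps saved to files; identical families (as with
    # [O-0], [O-1] and [O-2] here) produce an empty dict.
    # o0 = parse_cf_dump(open("cf-O-0.log"))
    # o1 = parse_cf_dump(open("cf-O-1.log"))
    # print({k: (v, o1.get(k)) for k, v in o0.items() if o1.get(k) != v})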
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
                                              (options byte-for-byte identical to column family [O-0] above; duplicate dump omitted)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
                                              (options byte-for-byte identical to column family [O-0] above; duplicate dump omitted)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11a9ff0d1692adadae838336c8084c6f31dc9782fdfd8a1c9184d2592fea31e-merged.mount: Deactivated successfully.
Dec 05 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.661165203 +0000 UTC m=+0.264327030 container remove b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bcc1f64-0499-4951-869f-3891619a47de
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268665156, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268666023, "job": 1, "event": "recovery_finished"}
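The EVENT_LOG_v1 lines carry a plain JSON payload after the tag, so recovery timing can be read mechanically; per the two events above, replaying WAL #31 took 867 microseconds. A quick check, with the payloads copied verbatim from the lines above:

    import json

    started = json.loads('{"time_micros": 1764897268665156, "job": 1,'
                         ' "event": "recovery_started", "wal_files": [31]}')
    finished = json.loads('{"time_micros": 1764897268666023, "job": 1,'
                          ' "event": "recovery_finished"}')
    print(finished["time_micros"] - started["time_micros"], "us")  # -> 867 us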
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
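The option string logged by _open_db is the comma-separated key=value form that BlueStore hands to RocksDB (in Ceph it is normally sourced from the bluestore_rocksdb_options setting), and each entry lines up with an Options.* value dumped above, e.g. write_buffer_size=16777216 and max_bytes_for_level_base=1073741824. A parsing sketch, with the string copied from the line above:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Every entry in this particular string is key=value, so a dict suffices.
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    assert parsed["write_buffer_size"] == "16777216"     # matches Options.write_buffer_size
    assert parsed["compaction_readahead_size"] == "2MB"  # logged later as 2097152 bytes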
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: freelist init
Dec 05 01:14:28 compute-0 ceph-osd[207795]: freelist _read_cfg
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
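The _init_alloc figures are internally consistent: capacity 0x4ffc00000 is the same 21470642176-byte size reported when the block device is reopened below, and free space trails capacity by exactly three 4 KiB min_alloc_size units, hence the near-zero fragmentation. As arithmetic, with the values copied from the lines above:

    GiB = 1024 ** 3
    capacity = 0x4ffc00000   # bytes, from _init_alloc / bdev open size
    free     = 0x4ffbfd000
    block    = 0x1000        # min_alloc_size from _open_super_meta

    print(capacity)                    # 21470642176, logged as "20 GiB"
    print(round(capacity / GiB, 3))    # 19.996
    print((capacity - free) // block)  # 3 allocated 4 KiB units so far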
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs umount
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) close
Dec 05 01:14:28 compute-0 systemd[1]: libpod-conmon-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope: Deactivated successfully.
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.620 iops: 4766.618 elapsed_sec: 0.629
Dec 05 01:14:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [WRN] : OSD bench result of 4766.617645 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
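The warning documents mclock's sanity check rather than a failure: the synthetic osd bench figure (4766.6 IOPS, which the bandwidth figure implies were ~4 KiB IOs) falls outside the 50-500 IOPS plausibility window for this device class, so the configured capacity of 315 IOPS is kept and the operator is asked to benchmark externally (e.g. with fio) and set osd_mclock_max_capacity_iops_[hdd|ssd] explicitly. A hypothetical restatement of that accept/reject rule using the numbers from the message; this is an illustration, not Ceph source code:

    measured_iops = 4766.617645   # osd bench result, from the line above
    bw_mib_s      = 18.620
    print(round(bw_mib_s * 1024**2 / measured_iops))  # ~4096-byte IOs

    low, high, configured = 50.0, 500.0, 315.0
    adopted = measured_iops if low <= measured_iops <= high else configured
    print(adopted)  # 315.0 -> "IOPS capacity is unchanged"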
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 0 waiting for initial osdmap
Dec 05 01:14:28 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:28.687+0000 7f938b69a640 -1 osd.0 0 waiting for initial osdmap
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:28 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:28.715+0000 7f9386cc2640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 set_numa_affinity not setting numa affinity
Dec 05 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs mount
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluefs mount shared_bdev_used = 4718592
Dec 05 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Git sha 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB SUMMARY
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB Session ID:  JNG2OMSWA32VFJZW4PQ9
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                     Options.env: 0x5648476ec460
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                Options.info_log: 0x564846722de0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.write_buffer_manager: 0x564847638460
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compression algorithms supported:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670add0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670a430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670a430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x56484670a430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
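The table_factory dump above fixes the block-cache geometry for this column family: a BinnedLRUCache whose capacity is split across 2**num_shard_bits shards. A minimal arithmetic sketch (plain Python; both numbers are copied verbatim from the dump above, nothing is taken from the RocksDB API):

```python
# Block cache geometry implied by the dump above (values copied verbatim).
capacity_bytes = 536870912   # block_cache capacity: 512 MiB
num_shard_bits = 4           # cache is split into 2**4 shards

shards = 2 ** num_shard_bits
per_shard = capacity_bytes // shards
print(f"{shards} shards x {per_shard / 2**20:.0f} MiB "
      f"= {capacity_bytes / 2**20:.0f} MiB total")
# -> 16 shards x 32 MiB = 512 MiB total
```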
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
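With Options.level_compaction_dynamic_level_bytes printed as 0 above, per-level byte targets follow bottom-up from L1: max_bytes_for_level_base for L1, multiplied by max_bytes_for_level_multiplier for each deeper level (the addtl[] factors are all 1, so no per-level adjustment applies). A back-of-the-envelope sketch using only the values from the dump:

```python
# Per-level target capacity implied by the options dumped above.
# L0 is governed by file-count triggers (level0_file_num_compaction_trigger),
# not byte targets, so sizing starts at L1.
base = 1073741824    # Options.max_bytes_for_level_base: 1 GiB
multiplier = 8.0     # Options.max_bytes_for_level_multiplier
num_levels = 7       # Options.num_levels (L0..L6)

for level in range(1, num_levels):
    target = base * multiplier ** (level - 1)
    print(f"L{level}: {target / 2**30:.0f} GiB")
# -> L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, ..., L6: 32768 GiB
```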
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
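The recovered column families follow a sharded naming scheme: a base prefix (m, p, O) plus a shard index, alongside the unsharded default, L, and P families. A small sketch, assuming only that sharded names take the form <prefix>-<index>, that groups the list above back by prefix:

```python
import re
from collections import defaultdict

# Column family names as recovered from the manifest above.
cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
       "O-0", "O-1", "O-2", "L", "P"]

shards = defaultdict(list)
for name in cfs:
    m = re.fullmatch(r"([A-Za-z]+)-(\d+)", name)
    if m:
        shards[m.group(1)].append(int(m.group(2)))
    else:
        shards[name]  # unsharded family: record the key with no indices

for prefix, idxs in sorted(shards.items()):
    print(prefix, sorted(idxs) if idxs else "(unsharded)")
```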
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bcc1f64-0499-4951-869f-3891619a47de
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268946491, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268959667, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268964226, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268974451, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268978726, "job": 1, "event": "recovery_finished"}
Dec 05 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
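The recovery events above are emitted as EVENT_LOG_v1 records: a fixed marker followed by one JSON object per line. A minimal extraction sketch — the input file name is hypothetical; the marker and payload shape are taken from the lines above:

```python
import json

# Extract EVENT_LOG_v1 JSON payloads from a saved journal dump.
# "journal.txt" is a hypothetical capture of lines like those above.
MARKER = "EVENT_LOG_v1 "

with open("journal.txt") as fh:
    for line in fh:
        _, _, payload = line.partition(MARKER)
        if payload:  # lines without the marker yield an empty payload
            event = json.loads(payload)
            print(event["time_micros"], event["event"],
                  event.get("cf_name", "-"))
```

Run against the section above, this would list recovery_started, three table_file_creation events (for default, p-0, and O-2), and recovery_finished.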
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.028754827 +0000 UTC m=+0.087927589 container create 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5648476f8000
Dec 05 01:14:29 compute-0 ceph-osd[207795]: rocksdb: DB pointer 0x564846749a00
Dec 05 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
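The _open_db line records the exact option string BlueStore handed to RocksDB, in the same comma-separated key=value format used by Ceph's bluestore_rocksdb_options setting. A quick sketch, assuming only that format, that turns the string from the line above into a dict:

```python
# Parse a BlueStore-style RocksDB option string (copied from the line above).
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,"
            "compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,"
            "max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,"
            "compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"], opts["compaction_readahead_size"])
# -> 16777216 2MB   (values stay strings; RocksDB parses units like "2MB")
```

Note the parsed values match the per-column-family dumps earlier in the log (write_buffer_size 16777216, max_write_buffer_number 64, level0 trigger 8, level base 1073741824, multiplier 8).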
Dec 05 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 05 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 05 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.2 total, 0.2 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
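Note: the indented dump ending here is RocksDB statistics that BlueStore emits through the OSD log at startup, one block per column family ([O-0]..[O-2], [L], [P], ...). Every stall counter is zero and both block caches are nearly empty, so it records an idle, healthy store. A minimal sketch for pulling the interesting counters back out of a saved journal, assuming the cephadm unit naming (ceph-<fsid>@osd.<id>) used elsewhere in this log:

    journalctl -u ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.1.service \
        | grep -E 'Stalls\(count\)|Cumulative compaction:'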
Dec 05 01:14:29 compute-0 ceph-osd[207795]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 01:14:29 compute-0 ceph-osd[207795]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load lua
Dec 05 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load sdk
Dec 05 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load test_remote_reads
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 load_pgs
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 load_pgs opened 0 pgs
Dec 05 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 log_to_monitors true
Dec 05 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:29.050+0000 7f1d272b4740 -1 osd.1 0 log_to_monitors true
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:28.994549635 +0000 UTC m=+0.053722467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:29 compute-0 systemd[1]: Started libpod-conmon-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope.
Dec 05 01:14:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.176690546 +0000 UTC m=+0.235863328 container init 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.199294965 +0000 UTC m=+0.258467747 container start 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.20522437 +0000 UTC m=+0.264397132 container attach 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
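Note: the dispatched-and-finished mon_command pair above is the JSON form of the device-class assignment that booting OSDs issue for themselves; submitted by hand it would be the usual CLI call:

    ceph osd crush set-device-class hdd osd.1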
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596] boot
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
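Note: "3 total, 1 up, 3 in" is the same osdmap summary a manual status query would return at this instant, for example:

    ceph osd stat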
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
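Note: CRUSH weights are expressed in TiB, so the initial_weight above is just the backing device's size scaled down:

    0.0195 TiB x 1024 GiB/TiB ≈ 20 GiB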
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mon[192914]: OSD bench result of 4766.617645 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
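Note: mclock rejects the startup bench here (4766.6 IOPS falls outside the configured 50-500 IOPS sanity window for hdd) and keeps the 315 IOPS default. Following the message's own recommendation, a sketch of measuring the capacity externally and pinning it; the fio parameters and the <device>/<iops> values are placeholders, not taken from this log, and a raw random-write fio run is destructive on a device in use:

    fio --name=osd-iops --filename=<device> --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=16 --time_based --runtime=60
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd <iops>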
Dec 05 01:14:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 05 01:14:29 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:29 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:29 compute-0 ceph-osd[206647]: osd.0 9 state: booting -> active
Dec 05 01:14:29 compute-0 podman[158197]: time="2025-12-05T01:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29000 "" "Go-http-client/1.1"
Dec 05 01:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5784 "" "Go-http-client/1.1"
Dec 05 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 05 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]:                             [--no-systemd] [--no-tmpfs]
Dec 05 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]: ceph-volume activate: error: unrecognized arguments: --bad-option
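Note: the *-activate-test container is launched with a deliberately unrecognized flag and exits once ceph-volume prints its usage; the bad flag looks like a probe that forces out the supported arguments before the real activation runs. Per the usage string above, a valid invocation would look like (id and uuid are placeholders):

    ceph-volume activate --osd-id 2 --osd-uuid <osd-uuid> --no-systemd --no-tmpfs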
Dec 05 01:14:29 compute-0 systemd[1]: libpod-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope: Deactivated successfully.
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.923431005 +0000 UTC m=+0.982603747 container died 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145-merged.mount: Deactivated successfully.
Dec 05 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.984275159 +0000 UTC m=+1.043447901 container remove 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:14:29 compute-0 systemd[1]: libpod-conmon-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope: Deactivated successfully.
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 01:14:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: [devicehealth INFO root] creating mgr pool
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 05 01:14:30 compute-0 systemd[1]: Reloading.
Dec 05 01:14:30 compute-0 systemd-rc-local-generator[208490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:30 compute-0 systemd-sysv-generator[208495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 done with init, starting boot process
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 start_boot
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 01:14:30 compute-0 ceph-osd[207795]: osd.1 0  bench count 12288000 bsize 4 KiB
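Note: the bench parameters work out to 3000 small writes, assuming count is the total byte budget as in `ceph tell osd.N bench`:

    12288000 B / 4096 B per op = 3000 ops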
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
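Note: this mon_command is the JSON form of the usual CLI call; yes_i_really_mean_it is included because pool names beginning with '.' are reserved for Ceph-internal use:

    ceph osd pool application enable .mgr mgr --yes-i-really-mean-it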
Dec 05 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 05 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 05 01:14:30 compute-0 ceph-mon[192914]: osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596] boot
Dec 05 01:14:30 compute-0 ceph-mon[192914]: osdmap e9: 3 total, 1 up, 3 in
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 05 01:14:30 compute-0 ceph-mon[192914]: osdmap e10: 3 total, 1 up, 3 in
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:30 compute-0 systemd[1]: Reloading.
Dec 05 01:14:31 compute-0 systemd-rc-local-generator[208532]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:14:31 compute-0 systemd-sysv-generator[208537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:14:31 compute-0 systemd[1]: Starting Ceph osd.2 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:14:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 05 01:14:31 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 05 01:14:31 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 05 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 05 01:14:31 compute-0 ceph-mon[192914]: osdmap e11: 3 total, 1 up, 3 in
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:31 compute-0 podman[208574]: 2025-12-05 01:14:31.68081295 +0000 UTC m=+0.091023795 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.714405595 +0000 UTC m=+0.078640980 container create cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.686723614 +0000 UTC m=+0.050959009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.900017883 +0000 UTC m=+0.264253258 container init cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.908242411 +0000 UTC m=+0.272477796 container start cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.924467653 +0000 UTC m=+0.288703078 container attach cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:32 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:32 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:32 compute-0 ceph-mon[192914]: purged_snaps scrub starts
Dec 05 01:14:32 compute-0 ceph-mon[192914]: purged_snaps scrub ok
Dec 05 01:14:32 compute-0 ceph-mon[192914]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 05 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: --> ceph-volume raw activate successful for osd ID: 2
Dec 05 01:14:33 compute-0 bash[208594]: --> ceph-volume raw activate successful for osd ID: 2
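Note: with raw activate finished, /var/lib/ceph/osd/ceph-2 has been primed from the BlueStore label and its block symlink points at the LV prepared earlier; a quick sanity check from the host (sketch):

    ls -l /var/lib/ceph/osd/ceph-2/block
    # expected: block -> /dev/mapper/ceph_vg2-ceph_lv2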
Dec 05 01:14:33 compute-0 systemd[1]: libpod-cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be.scope: Deactivated successfully.
Dec 05 01:14:33 compute-0 systemd[1]: libpod-cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be.scope: Consumed 1.373s CPU time.
Dec 05 01:14:33 compute-0 podman[208594]: 2025-12-05 01:14:33.26845451 +0000 UTC m=+1.632689955 container died cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b-merged.mount: Deactivated successfully.
Dec 05 01:14:33 compute-0 podman[208594]: 2025-12-05 01:14:33.424780342 +0000 UTC m=+1.789015757 container remove cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:14:33 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:33 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:33 compute-0 podman[208809]: 2025-12-05 01:14:33.899836337 +0000 UTC m=+0.081201532 container create 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:33 compute-0 podman[208809]: 2025-12-05 01:14:33.864061521 +0000 UTC m=+0.045426756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:34 compute-0 podman[208809]: 2025-12-05 01:14:34.036333357 +0000 UTC m=+0.217698822 container init 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:14:34 compute-0 podman[208809]: 2025-12-05 01:14:34.050572833 +0000 UTC m=+0.231938018 container start 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:14:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:34 compute-0 bash[208809]: 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a
Dec 05 01:14:34 compute-0 systemd[1]: Started Ceph osd.2 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
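cephadm wraps each daemon in an fsid-qualified systemd unit; assuming the standard ceph-<fsid>@<daemon> naming, the unit started here can be inspected directly:

    # Status and journal of the cephadm-managed osd.2 unit
    # (name inferred from the ceph-<fsid>@<daemon> convention):
    systemctl status ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.2.service
    journalctl -u ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.2 -f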
Dec 05 01:14:34 compute-0 ceph-osd[208828]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:14:34 compute-0 ceph-osd[208828]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 05 01:14:34 compute-0 ceph-osd[208828]: pidfile_write: ignore empty --pid-file
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) close
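The repeated "st_blksize 512, using bdev_block_size 4096 anyway" lines show BlueStore overriding the 512-byte logical sector size the virtual disk reports with its preferred 4 KiB block size; this is informational, not an error. What the kernel reports for the backing device can be checked as follows (the log does not name the device, so resolve the OSD's symlink first):

    # Find the block device behind the OSD's data symlink, then
    # compare logical vs physical sector sizes:
    readlink -f /var/lib/ceph/osd/ceph-2/block
    lsblk -o NAME,LOG-SEC,PHY-SEC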
Dec 05 01:14:34 compute-0 sudo[207887]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
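The two config-key writes above are cephadm caching this host's device inventory in the monitor's key-value store (note the audit entries omit the command payload). The cached JSON can be read back, for example:

    # Dump the device inventory cephadm just stored for compute-0:
    ceph config-key get mgr/cephadm/host.compute-0.devices.0 | python3 -m json.tool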
Dec 05 01:14:34 compute-0 sudo[208841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:34 compute-0 sudo[208841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:34 compute-0 sudo[208841]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:34 compute-0 sudo[208866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:34 compute-0 sudo[208866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:34 compute-0 sudo[208866]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) close
Dec 05 01:14:34 compute-0 sudo[208891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:34 compute-0 sudo[208891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:34 compute-0 sudo[208891]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:34 compute-0 sudo[208919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:14:34 compute-0 sudo[208919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
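This sudo trail is cephadm's usual remote-execution pattern: /bin/true as a connectivity-and-sudo probe, "which python3" to locate an interpreter, then the staged cephadm binary driving ceph-volume. The inventory call can be reproduced verbatim from the logged command line:

    sudo /bin/python3 \
      /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
      --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee \
      -- raw list --format json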
Dec 05 01:14:34 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:34 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:34 compute-0 ceph-osd[208828]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 05 01:14:34 compute-0 ceph-osd[208828]: load: jerasure load: lrc 
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.222 iops: 5688.725 elapsed_sec: 0.527
Dec 05 01:14:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [WRN] : OSD bench result of 5688.724897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
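The warning carries its own remedy: the built-in bench measured 5688 IOPS, far outside the 50-500 IOPS sanity window mclock accepts for an HDD-class device, so the configured 315 IOPS stands. A sketch of the suggested procedure, assuming the OSD's backing disk is /dev/vdb (the device is not named here, and fio's randwrite is destructive, so only run it on a device not yet holding data):

    # 1. Measure steady-state 4k random-write IOPS with fio:
    fio --name=osd-iops --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based
    # 2. Pin mclock's capacity to the measured figure:
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd <measured_iops>
    # The in-tree benchmark can be re-run at any time:
    ceph tell osd.1 bench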
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 0 waiting for initial osdmap
Dec 05 01:14:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:34.733+0000 7f1d23a4b640 -1 osd.1 0 waiting for initial osdmap
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Dec 05 01:14:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:34.766+0000 7f1d1e85c640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 set_numa_affinity not setting numa affinity
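The empty public interface simply means the OSD could not map its public address to a NIC for NUMA pinning; on a single-socket VM such as this the message is harmless. If the noise is unwanted, the automatic pinning can be switched off (osd_numa_auto_affinity is the option governing this behaviour):

    # Disable automatic NUMA affinity for all OSDs:
    ceph config set osd osd_numa_auto_affinity false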
Dec 05 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) close
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.027932213 +0000 UTC m=+0.074800643 container create fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:34.993207056 +0000 UTC m=+0.040075576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:35 compute-0 systemd[1]: Started libpod-conmon-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope.
Dec 05 01:14:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:35 compute-0 ceph-mon[192914]: pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:35 compute-0 ceph-mon[192914]: OSD bench result of 5688.724897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.160457523 +0000 UTC m=+0.207325973 container init fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.178563747 +0000 UTC m=+0.225432167 container start fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.18333604 +0000 UTC m=+0.230204460 container attach fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:35 compute-0 sweet_matsumoto[209006]: 167 167
Dec 05 01:14:35 compute-0 systemd[1]: libpod-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope: Deactivated successfully.
Dec 05 01:14:35 compute-0 conmon[209006]: conmon fe52e5c8471f5419d04d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope/container/memory.events
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.191555418 +0000 UTC m=+0.238423858 container died fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
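The throwaway "sweet_matsumoto" container above lives only long enough to print "167 167", the uid and gid of the ceph user inside the image; cephadm runs probes like this to decide ownership of bind-mounted data directories. The exact probe command is not logged; a minimal equivalent would be:

    # Print the ceph user's uid/gid from the image (expected: 167 167):
    podman run --rm \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      bash -c 'echo "$(id -u ceph) $(id -g ceph)"'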
Dec 05 01:14:35 compute-0 ceph-osd[208828]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
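The two mclock figures are mutually consistent: with a per-shard bandwidth capacity of 157286400 bytes/s (150 MiB/s) and the 315 IOPS capacity left unchanged above, the cost per IO comes out to roughly 157286400 / 315 ≈ 499323 bytes, matching the logged 499321.90 up to internal rounding:

    # Quick sanity check of the per-IO cost derivation:
    python3 -c 'print(157286400 / 315)'   # ~499322.9 bytes per IO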
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs mount
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs mount shared_bdev_used = 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Git sha 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB SUMMARY
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB Session ID:  XCPSDI8P01OE9YLX3G6I
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
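The summary above shows osd.2's RocksDB being opened read-only to recover MANIFEST-000032; the long dump that follows is RocksDB echoing its effective options per column family. The same settings can also be queried from the live daemon, e.g. from inside its container (cephadm enter is the stock way in):

    # Enter the daemon's container, then dump rocksdb-related settings:
    cephadm enter --name osd.2
    ceph daemon osd.2 config show | grep -i rocksdb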
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                     Options.env: 0x55c43657fd50
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                Options.info_log: 0x55c435776800
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.write_buffer_manager: 0x55c436668460
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compression algorithms supported:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8ebee885af0519e04e0880b76a481772193f04499d02b55cb4df13e5e7e79c-merged.mount: Deactivated successfully.
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
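
The per-column-family dumps above repeat an essentially identical option set; in the dumps visible here only the block_cache address and capacity differ between the p-* and O-* families (483183820 vs 536870912 bytes). A minimal sketch for diffing such dumps, assuming the journal text is piped in on stdin; parse_cf_options and both regexes are illustrative names derived from the line formats above, not part of Ceph or RocksDB:

    import re
    import sys
    from collections import defaultdict

    CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
    OPTION = re.compile(r"Options\.([\w.\[\]]+): (.+)")

    def parse_cf_options(lines):
        """Collect Options.* key/value pairs under each column-family header.

        Lines before the first header (e.g. the DB-wide Options block) are
        ignored; the indented table_factory sub-blocks have no Options.
        prefix and are skipped as well.
        """
        cfs = defaultdict(dict)
        current = None
        for line in lines:
            hdr = CF_HEADER.search(line)
            if hdr:
                current = hdr.group(1)
                continue
            opt = OPTION.search(line)
            if opt and current is not None:
                cfs[current][opt.group(1)] = opt.group(2).strip()
        return cfs

    if __name__ == "__main__":
        cfs = parse_cf_options(sys.stdin)
        keys = sorted(set().union(*cfs.values())) if cfs else []
        for key in keys:
            # Print only the options whose values differ across families.
            values = {cf: opts.get(key) for cf, opts in cfs.items()}
            if len(set(values.values())) > 1:
                print(key, values)
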
Dec 05 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.255856439 +0000 UTC m=+0.302724859 container remove fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:14:35 compute-0 systemd[1]: libpod-conmon-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope: Deactivated successfully.
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275284132, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275284609, "job": 1, "event": "recovery_finished"}
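
The two EVENT_LOG_v1 lines bracketing WAL recovery carry machine-readable JSON after a fixed prefix. A small sketch that decodes them and converts time_micros (microseconds since the Unix epoch) to a UTC timestamp; the parse_events helper is hypothetical, but the sample line is copied from the log above:

    import json
    import re
    from datetime import datetime, timezone

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def parse_events(lines):
        """Yield decoded EVENT_LOG_v1 payloads with a 'time' field added."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                ev = json.loads(m.group(1))
                secs, micros = divmod(ev["time_micros"], 1_000_000)
                ev["time"] = datetime.fromtimestamp(
                    secs, tz=timezone.utc).replace(microsecond=micros)
                yield ev

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275284132, '
              '"job": 1, "event": "recovery_started", "wal_files": [31]}')
    for ev in parse_events([sample]):
        print(ev["time"], ev["event"], ev.get("wal_files"))
    # -> 2025-12-05 01:14:35.284132+00:00 recovery_started [31]
    # which matches the journal timestamp on the line itself.
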
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
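
The _open_db line embeds the RocksDB option string BlueStore applied, which appears to come from Ceph's bluestore_rocksdb_options setting. Since none of the values in this particular string contain '=' or ',', a naive split is enough to inspect it; note that some values, like compaction_readahead_size=2MB, are human-readable sizes rather than plain integers, so they cannot all be converted with int(). A sketch (variable names are mine):

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,"
                "compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,"
                "max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,"
                "compaction_readahead_size=2MB,max_total_wal_size=1073741824,"
                "writable_file_max_buffer_size=0")

    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    # These match the per-column-family dumps printed earlier in the log.
    assert opts["write_buffer_size"] == "16777216"      # 16 MiB
    assert opts["max_write_buffer_number"] == "64"
    print(int(opts["max_bytes_for_level_base"]) // 2**20, "MiB")  # 1024 MiB
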
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: freelist init
Dec 05 01:14:35 compute-0 ceph-osd[208828]: freelist _read_cfg
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
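
A quick check of the allocator line's hex figures, pure arithmetic on the values printed above:

    capacity = 0x4ffc00000              # from "_init_alloc ... capacity"
    free = 0x4ffbfd000                  # from "... free"
    block = 0x1000                      # 4 KiB allocation unit
    print(capacity)                     # 21470642176 bytes, the bdev
                                        # "open size" reported below
    print(round(capacity / 2**30, 3))   # 19.996 -> logged as "20 GiB"
    print((capacity - free) // block)   # 3 blocks (12288 bytes) in use,
                                        # consistent with a near-empty OSD
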
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs umount
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) close
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs mount
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluefs mount shared_bdev_used = 4718592
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
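
The db_paths budget of 20397110067 bytes advertised for both db and db.slow works out to 95% of the 21470642176-byte shared block device opened above (both "paths" sit on the same bluefs shared bdev). Reading that as a deliberate 5% headroom reserve is my inference from the numbers, not something the log states:

    capacity = 21470642176              # bdev open size, from the lines above
    db_budget = 20397110067             # value set for both db and db.slow
    print(round(db_budget / capacity, 6))   # 0.95
    print(int(capacity * 0.95))             # 20397110067
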
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: RocksDB version: 7.9.2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Git sha 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB SUMMARY
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB Session ID:  XCPSDI8P01OE9YLX3G6J
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: CURRENT file:  CURRENT
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: IDENTITY file:  IDENTITY
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.error_if_exists: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.create_if_missing: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.paranoid_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                     Options.env: 0x55c43671c460
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                Options.info_log: 0x55c4357771c0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_file_opening_threads: 16
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.statistics: (nil)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.use_fsync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.max_log_file_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.allow_fallocate: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_reads: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.create_missing_column_families: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.db_log_dir: 
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                 Options.wal_dir: db.wal
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.advise_random_on_open: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.write_buffer_manager: 0x55c4366686e0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                            Options.rate_limiter: (nil)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.unordered_write: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.row_cache: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.wal_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_ingest_behind: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.two_write_queues: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manual_wal_flush: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.wal_compression: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.atomic_flush: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.log_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_data_in_errors: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.db_host_id: __hostname__
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_jobs: 4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_compactions: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_subcompactions: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.max_open_files: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_background_flushes: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compression algorithms supported:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZSTD supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kXpressCompression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kBZip2Compression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kLZ4Compression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kZlibCompression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kLZ4HCCompression supported: 1
Dec 05 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.494653377 +0000 UTC m=+0.090477750 container create 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         kSnappyCompression supported: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575edd0
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 483183820
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
                                              cache_index_and_filter_blocks: 1
                                              cache_index_and_filter_blocks_with_high_priority: 0
                                              pin_l0_filter_and_index_blocks_in_cache: 0
                                              pin_top_level_index_and_filter: 1
                                              index_type: 0
                                              data_block_index_type: 0
                                              index_shortening: 1
                                              data_block_hash_table_util_ratio: 0.750000
                                              checksum: 4
                                              no_block_cache: 0
                                              block_cache: 0x55c43575e430
                                              block_cache_name: BinnedLRUCache
                                              block_cache_options:
                                                capacity : 536870912
                                                num_shard_bits : 4
                                                strict_capacity_limit : 0
                                                high_pri_pool_ratio: 0.000
                                              block_cache_compressed: (nil)
                                              persistent_cache: (nil)
                                              block_size: 4096
                                              block_size_deviation: 10
                                              block_restart_interval: 16
                                              index_block_restart_interval: 1
                                              metadata_block_size: 4096
                                              partition_filters: 0
                                              use_delta_encoding: 1
                                              filter_policy: bloomfilter
                                              whole_key_filtering: 1
                                              verify_compression: 0
                                              read_amp_bytes_per_bit: 0
                                              format_version: 5
                                              enable_index_compression: 1
                                              block_align: 0
                                              max_auto_readahead_size: 262144
                                              prepopulate_block_cache: 0
                                              initial_auto_readahead_size: 8192
                                              num_file_reads_for_auto_readahead: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
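The per-column-family dumps above all share the same level sizing: Options.max_bytes_for_level_base = 1073741824 (1 GiB), Options.max_bytes_for_level_multiplier = 8, Options.num_levels = 7, level_compaction_dynamic_level_bytes = 0, and every max_bytes_for_level_multiplier_addtl factor at 1. Under static sizing that implies each level's target is simply the previous level's target times 8; a minimal Python sketch of the arithmetic (values copied from the dump, nothing else assumed):

base = 1073741824      # Options.max_bytes_for_level_base (1 GiB)
multiplier = 8         # Options.max_bytes_for_level_multiplier
num_levels = 7         # Options.num_levels

# Static sizing (level_compaction_dynamic_level_bytes = 0): the target for
# L(n+1) is the target for L(n) times the multiplier; all the
# max_bytes_for_level_multiplier_addtl factors are 1 here, so they drop out.
target = base
for level in range(1, num_levels):
    print(f"L{level}: {target / 2**30:.0f} GiB")
    target *= multiplier

which prints L1 through L6 as 1, 8, 64, 512, 4096 and 32768 GiB.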
Dec 05 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.453035648 +0000 UTC m=+0.048860101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:35 compute-0 systemd[1]: Started libpod-conmon-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope.
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275556537, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275563837, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275569875, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275576071, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275581702, "job": 1, "event": "recovery_finished"}
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
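The EVENT_LOG_v1 records above carry their payload as a single JSON object after a fixed marker, so they can be extracted from journal text mechanically. A minimal sketch, assuming only the field names visible in the events logged above (the journalctl pipeline in the comment is one hypothetical way to feed it):

import json
import sys

# Read journal text on stdin, e.g. (hypothetical invocation):
#   journalctl --no-pager | python3 extract_events.py
MARKER = "EVENT_LOG_v1 "

for line in sys.stdin:
    idx = line.find(MARKER)
    if idx == -1:
        continue
    event = json.loads(line[idx + len(MARKER):])
    # table_file_creation events (as logged above) carry cf_name,
    # file_number, file_size and a table_properties sub-object.
    if event.get("event") == "table_file_creation":
        props = event.get("table_properties", {})
        print(event["cf_name"], event["file_number"],
              event["file_size"], props.get("num_entries"))

Run over the three table_file_creation events above, this would report files 35, 36 and 37 for column families default, p-0 and O-2.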
Dec 05 01:14:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.619329468 +0000 UTC m=+0.215153861 container init 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c43675fc00
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB pointer 0x55c435799a00
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 05 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
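The _open_db line above logs the effective RocksDB options as a comma-separated key=value string, with one human-readable size (2MB) mixed in among raw byte counts. A minimal parsing sketch, handling only the value forms present in that exact string (a full parser would need RocksDB's own option grammar):

# Option string copied verbatim from the _open_db line above.
opts_str = (
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
    "write_buffer_size=16777216,max_background_jobs=4,"
    "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
    "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
    "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
)

def parse_value(value):
    # Only two shapes occur above: bare integers and an integer with "MB".
    if value.endswith("MB") and value[:-2].isdigit():
        return int(value[:-2]) * 1024 * 1024
    return int(value) if value.isdigit() else value

options = {k: parse_value(v)
           for k, v in (pair.split("=", 1) for pair in opts_str.split(","))}
print(options["compaction_readahead_size"])  # 2097152

The logged values match the per-column-family dumps earlier (write_buffer_size 16777216, level0_file_num_compaction_trigger 8, max_bytes_for_level_base 1073741824, multiplier 8).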
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                            Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                            Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
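A quick key to the [default] table above, hedged to what the numbers themselves confirm: for L0 the Score column is file count relative to level0_file_num_compaction_trigger rather than a size ratio, which is why two L0 files against a trigger of 8 shows as 0.2; deeper levels score as current size over target size, and W-Amp is the ratio of bytes written to bytes ingested at that level. A one-line check of the L0 figure:

# L0 score as printed above: files at L0 / level0_file_num_compaction_trigger.
l0_files, trigger = 2, 8
print(f"L0 score: {l0_files / trigger:.1f}")  # 0.2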
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 512.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,3.8743e-05%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 0.1 total, 0.1 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 05 01:14:35 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:35 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 05 01:14:35 compute-0 ceph-osd[208828]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 05 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.633200764 +0000 UTC m=+0.229025127 container start 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 05 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load lua
Dec 05 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.63773302 +0000 UTC m=+0.233557383 container attach 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load sdk
Dec 05 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load test_remote_reads
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 load_pgs
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 load_pgs opened 0 pgs
Dec 05 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 log_to_monitors true
Dec 05 01:14:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:35.639+0000 7f35ea256740 -1 osd.2 0 log_to_monitors true
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635] boot
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec 05 01:14:35 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec 05 01:14:35 compute-0 ceph-osd[207795]: osd.1 12 state: booting -> active
Dec 05 01:14:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 05 01:14:36 compute-0 ceph-mon[192914]: osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635] boot
Dec 05 01:14:36 compute-0 ceph-mon[192914]: osdmap e12: 3 total, 2 up, 3 in
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 05 01:14:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]: {
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_id": 0,
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "type": "bluestore"
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     },
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_id": 1,
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "type": "bluestore"
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     },
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_id": 2,
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:         "type": "bluestore"
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]:     }
Dec 05 01:14:36 compute-0 wonderful_tharp[209421]: }
Dec 05 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 05 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec 05 01:14:36 compute-0 systemd[1]: libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Deactivated successfully.
Dec 05 01:14:36 compute-0 conmon[209421]: conmon 5c3dca1ac9784c5ee558 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope/container/memory.events
Dec 05 01:14:36 compute-0 systemd[1]: libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Consumed 1.108s CPU time.
Dec 05 01:14:36 compute-0 podman[209224]: 2025-12-05 01:14:36.746222131 +0000 UTC m=+1.342046534 container died 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 done with init, starting boot process
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 start_boot
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 05 01:14:36 compute-0 ceph-osd[208828]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 05 01:14:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec 05 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:36 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec 05 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:36 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739-merged.mount: Deactivated successfully.
Dec 05 01:14:36 compute-0 podman[209224]: 2025-12-05 01:14:36.921236193 +0000 UTC m=+1.517060556 container remove 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:14:36 compute-0 systemd[1]: libpod-conmon-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Deactivated successfully.
Dec 05 01:14:36 compute-0 sudo[208919]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:36 compute-0 ceph-mgr[193209]: [devicehealth INFO root] creating main.db for devicehealth
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:37 compute-0 sudo[209501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:37 compute-0 sudo[209501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 sudo[209501]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 ceph-mon[192914]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 05 01:14:37 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 05 01:14:37 compute-0 ceph-mon[192914]: osdmap e13: 3 total, 2 up, 3 in
Dec 05 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:37 compute-0 sudo[209535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:14:37 compute-0 sudo[209535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 sudo[209535]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 01:14:37 compute-0 ceph-mgr[193209]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 05 01:14:37 compute-0 sudo[209567]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 05 01:14:37 compute-0 sudo[209567]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:14:37 compute-0 sudo[209567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 05 01:14:37 compute-0 sudo[209567]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:14:37 compute-0 sudo[209561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:37 compute-0 sudo[209561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 sudo[209561]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 sudo[209590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:37 compute-0 sudo[209590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 sudo[209590]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 sudo[209615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:37 compute-0 sudo[209615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 sudo[209615]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:37 compute-0 sudo[209640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:14:37 compute-0 sudo[209640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:37 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:37 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec 05 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:37 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 05 01:14:38 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 05 01:14:38 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 05 01:14:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 05 01:14:38 compute-0 ceph-mon[192914]: osdmap e14: 3 total, 2 up, 3 in
Dec 05 01:14:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:38 compute-0 podman[209728]: 2025-12-05 01:14:38.38270821 +0000 UTC m=+0.134097224 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:14:38 compute-0 podman[209728]: 2025-12-05 01:14:38.504250974 +0000 UTC m=+0.255639988 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:38 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec 05 01:14:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:38 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.afshmv(active, since 82s)
Dec 05 01:14:39 compute-0 ceph-mon[192914]: purged_snaps scrub starts
Dec 05 01:14:39 compute-0 ceph-mon[192914]: purged_snaps scrub ok
Dec 05 01:14:39 compute-0 ceph-mon[192914]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 05 01:14:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:39 compute-0 ceph-mon[192914]: mgrmap e9: compute-0.afshmv(active, since 82s)
Dec 05 01:14:39 compute-0 sudo[209640]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:39 compute-0 sudo[209849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:39 compute-0 sudo[209849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:39 compute-0 sudo[209849]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:39 compute-0 sudo[209874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:39 compute-0 sudo[209874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:39 compute-0 sudo[209874]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:39 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec 05 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:39 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:39 compute-0 sudo[209899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:39 compute-0 sudo[209899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:39 compute-0 sudo[209899]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:39 compute-0 sudo[209924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:14:39 compute-0 sudo[209924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
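The sudo triplets around this call are cephadm's remote-execution pattern: connect as ceph-admin, run `/bin/true` to confirm passwordless sudo works, `which python3` to locate an interpreter, then execute the staged cephadm binary (here `gather-facts`, which collects host facts as JSON). A condensed sketch of that probe-then-run sequence, run locally for brevity (the orchestrator actually drives these commands over SSH):

    import subprocess

    CEPHADM = ("/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def sudo(*argv, timeout=895):
        # cephadm also passes --timeout 895 through to its own subcommands
        return subprocess.run(["sudo", *argv], capture_output=True,
                              text=True, timeout=timeout)

    assert sudo("/bin/true").returncode == 0           # passwordless sudo OK
    python3 = sudo("/bin/which", "python3").stdout.strip()
    facts = sudo(python3, CEPHADM, "--timeout", "895", "gather-facts").stdout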
Dec 05 01:14:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 05 01:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:40 compute-0 ceph-mon[192914]: pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 05 01:14:40 compute-0 sudo[209924]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:40 compute-0 sudo[209978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:40 compute-0 sudo[209978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:40 compute-0 sudo[209978]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:40 compute-0 sudo[210003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:40 compute-0 sudo[210003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:40 compute-0 sudo[210003]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:40 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec 05 01:14:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:40 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 05 01:14:40 compute-0 sudo[210028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:40 compute-0 sudo[210028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:40 compute-0 sudo[210028]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:40 compute-0 sudo[210053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- inventory --format=json-pretty --filter-for-batch
Dec 05 01:14:40 compute-0 sudo[210053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
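This invocation wraps `ceph-volume inventory --format=json-pretty --filter-for-batch` in a one-shot container (the podman create/start/died/remove sequence that follows); `--filter-for-batch` hides devices that `ceph-volume lvm batch` could not consume anyway. A sketch of reading that inventory, assuming the usual per-device `path`/`available`/`rejected_reasons` fields in the JSON output (run as root, like the logged command):

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format=json-pretty",
         "--filter-for-batch"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        state = "usable" if dev["available"] else "rejected"
        print(dev["path"], state, dev.get("rejected_reasons", []))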
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.678 iops: 4781.521 elapsed_sec: 0.627
Dec 05 01:14:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [WRN] : OSD bench result of 4781.521020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
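At startup the OSD runs a short self-benchmark to size its mclock scheduler; here the measured 4781.5 IOPS falls outside the [50, 500] IOPS plausibility window, so the result is discarded and the default capacity of 315 IOPS is kept. Acting on the warning's own advice would look roughly like this, with the IOPS figure a placeholder standing in for a value measured externally (e.g. with fio), not taken from `osd bench`:

    import subprocess

    measured_iops = 3200.0  # placeholder: substitute a fio-measured value
    subprocess.run(
        ["ceph", "config", "set", "osd.2",
         "osd_mclock_max_capacity_iops_ssd", str(measured_iops)],
        check=True)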
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 0 waiting for initial osdmap
Dec 05 01:14:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:40.918+0000 7f35e69ed640 -1 osd.2 0 waiting for initial osdmap
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:40.947+0000 7f35e17fe640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 set_numa_affinity not setting numa affinity
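set_numa_affinity fails here because no public interface is configured (the empty '' in the message), so there is no NIC whose NUMA node the OSD could pin itself near, and it skips affinity. What it consults is approximately the sysfs attribute below, which on virtual NICs is typically absent or -1 (a rough sketch of the lookup, not the OSD's actual code path):

    from pathlib import Path

    def iface_numa_node(iface: str):
        # PCI NUMA locality of a NIC as exported by sysfs;
        # missing or -1 on most VMs and non-PCI devices.
        p = Path(f"/sys/class/net/{iface}/device/numa_node")
        try:
            return int(p.read_text())
        except (FileNotFoundError, ValueError):
            return None

    print(iface_numa_node("eth0"))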
Dec 05 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.37838767 +0000 UTC m=+0.063203051 container create 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:14:41 compute-0 systemd[1]: Started libpod-conmon-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope.
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.354754872 +0000 UTC m=+0.039570263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 05 01:14:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec 05 01:14:41 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722] boot
Dec 05 01:14:41 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec 05 01:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 05 01:14:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:41 compute-0 ceph-osd[208828]: osd.2 15 state: booting -> active
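With osdmap e15 the mon marks osd.2 up (the `boot` line above) and the daemon flips booting -> active; the mgr's earlier "not ready for session (expect reconnect)" messages stop once the OSD appears in the map. The cluster-wide view the log summarizes ("3 total, 3 up, 3 in") can be fetched directly; this assumes `ceph osd stat --format json` exposes the num_* counters, which recent releases do:

    import json, subprocess

    stat = json.loads(subprocess.run(
        ["ceph", "osd", "stat", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(stat["num_osds"], "total,", stat["num_up_osds"], "up,",
          stat["num_in_osds"], "in")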
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.508825331 +0000 UTC m=+0.193640742 container init 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.523050367 +0000 UTC m=+0.207865738 container start 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.529849867 +0000 UTC m=+0.214665438 container attach 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:41 compute-0 admiring_blackburn[210135]: 167 167
Dec 05 01:14:41 compute-0 systemd[1]: libpod-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope: Deactivated successfully.
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.534496886 +0000 UTC m=+0.219312277 container died 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a4342c6a2c6116266627aa057e800f20100bbbe198174045079e9ca4bcb7ee-merged.mount: Deactivated successfully.
Dec 05 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.591505963 +0000 UTC m=+0.276321334 container remove 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:41 compute-0 systemd[1]: libpod-conmon-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope: Deactivated successfully.
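Each containerized cephadm shell-out appears in the journal as a complete podman lifecycle: image pull (served from cache, same digest), create, init, start, attach, one line of payload on stdout, died, remove. The `167 167` printed by admiring_blackburn matches the ceph uid/gid baked into the image; the exact command cephadm ran is not shown in the log, but a probe of roughly this shape would produce that output (an assumption, for illustration only):

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm yields the same create/start/died/remove sequence seen above
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # "167 167": uid/gid of the ceph user in the image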
Dec 05 01:14:41 compute-0 podman[210158]: 2025-12-05 01:14:41.862030284 +0000 UTC m=+0.096487057 container create 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:14:41 compute-0 podman[210158]: 2025-12-05 01:14:41.812098104 +0000 UTC m=+0.046554937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:41 compute-0 systemd[1]: Started libpod-conmon-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope.
Dec 05 01:14:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
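The xfs "timestamps until 2038" notices mean these overlay-backed mounts lack the xfs bigtime feature, so inode timestamps saturate at 0x7fffffff seconds after the epoch, exactly the limit the kernel quotes. That cap is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted in the kernel messages above
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00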
Dec 05 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.028583791 +0000 UTC m=+0.263040594 container init 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.040102372 +0000 UTC m=+0.274559125 container start 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.045100541 +0000 UTC m=+0.279557374 container attach 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 05 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec 05 01:14:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec 05 01:14:42 compute-0 ceph-mon[192914]: OSD bench result of 4781.521020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 05 01:14:42 compute-0 ceph-mon[192914]: osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722] boot
Dec 05 01:14:42 compute-0 ceph-mon[192914]: osdmap e15: 3 total, 3 up, 3 in
Dec 05 01:14:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 05 01:14:42 compute-0 ceph-mon[192914]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.542 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.543 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
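The polling manager has more pollsters in the [pollsters] source than worker threads, so it warns and then works through them on a single-thread executor: each pollster is registered with shared cache/history/discovery-cache dicts, the `local_instances` discovery runs per pollster class, and any pollster whose discovery returns nothing is skipped (every "Skip pollster ..., no resources found this cycle" below reflects a compute node with no VMs yet). A toy model of that register-and-execute loop, with invented names standing in for ceilometer's internals:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no VM instances on this compute node this cycle

    def make_pollster(name):
        def poll():
            resources = discover_local_instances()
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
            else:
                print(f"polling {name} for {len(resources)} resources")
        return poll

    pollsters = [make_pollster(n) for n in
                 ("cpu", "memory.usage", "disk.device.read.bytes")]
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        for fut in [pool.submit(p) for p in pollsters]:
            fut.result()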
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:14:43 compute-0 ceph-mon[192914]: osdmap e16: 3 total, 3 up, 3 in
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:44 compute-0 sad_chaum[210175]: [
Dec 05 01:14:44 compute-0 sad_chaum[210175]:     {
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "available": false,
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "ceph_device": false,
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "lsm_data": {},
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "lvs": [],
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "path": "/dev/sr0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "rejected_reasons": [
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "Insufficient space (<5GB)",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "Has a FileSystem"
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         ],
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         "sys_api": {
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "actuators": null,
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "device_nodes": "sr0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "devname": "sr0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "human_readable_size": "482.00 KB",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "id_bus": "ata",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "model": "QEMU DVD-ROM",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "nr_requests": "2",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "parent": "/dev/sr0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "partitions": {},
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "path": "/dev/sr0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "removable": "1",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "rev": "2.5+",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "ro": "0",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "rotational": "1",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "sas_address": "",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "sas_device_handle": "",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "scheduler_mode": "mq-deadline",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "sectors": 0,
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "sectorsize": "2048",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "size": 493568.0,
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "support_discard": "2048",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "type": "disk",
Dec 05 01:14:44 compute-0 sad_chaum[210175]:             "vendor": "QEMU"
Dec 05 01:14:44 compute-0 sad_chaum[210175]:         }
Dec 05 01:14:44 compute-0 sad_chaum[210175]:     }
Dec 05 01:14:44 compute-0 sad_chaum[210175]: ]
Dec 05 01:14:44 compute-0 systemd[1]: libpod-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Deactivated successfully.
Dec 05 01:14:44 compute-0 systemd[1]: libpod-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Consumed 2.205s CPU time.
Dec 05 01:14:44 compute-0 podman[210158]: 2025-12-05 01:14:44.151324469 +0000 UTC m=+2.385781232 container died 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158-merged.mount: Deactivated successfully.
Dec 05 01:14:44 compute-0 podman[210158]: 2025-12-05 01:14:44.23002979 +0000 UTC m=+2.464486553 container remove 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:44 compute-0 systemd[1]: libpod-conmon-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Deactivated successfully.
Dec 05 01:14:44 compute-0 sudo[210053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f90cb7b8-f51b-435b-bcdd-6502b5985af0 does not exist
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8e4bfb61-48c9-43d9-b97f-273bd6a2475b does not exist
Dec 05 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 22e20da0-917a-41c9-bd88-e06517a486b6 does not exist
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:44 compute-0 sudo[212237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:44 compute-0 sudo[212237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:44 compute-0 sudo[212237]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:44 compute-0 sudo[212262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:44 compute-0 sudo[212262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:44 compute-0 sudo[212262]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:44 compute-0 sudo[212287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:44 compute-0 sudo[212287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:44 compute-0 sudo[212287]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:44 compute-0 sudo[212312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:14:44 compute-0 sudo[212312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.139757626 +0000 UTC m=+0.075203815 container create 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.111495529 +0000 UTC m=+0.046941798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:45 compute-0 systemd[1]: Started libpod-conmon-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope.
Dec 05 01:14:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.262763811 +0000 UTC m=+0.198210030 container init 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.274819876 +0000 UTC m=+0.210266085 container start 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.280724961 +0000 UTC m=+0.216171190 container attach 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:14:45 compute-0 youthful_herschel[212390]: 167 167
Dec 05 01:14:45 compute-0 systemd[1]: libpod-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope: Deactivated successfully.
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.285880964 +0000 UTC m=+0.221327143 container died 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:45 compute-0 ceph-mon[192914]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: Adjusting osd_memory_target on compute-0 to 43690k
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3b7578590ffdf54a468406e97c4d49c78e05c54640adc4fd3ff5bb853041a1-merged.mount: Deactivated successfully.
Dec 05 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.334529049 +0000 UTC m=+0.269975228 container remove 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 05 01:14:45 compute-0 systemd[1]: libpod-conmon-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope: Deactivated successfully.
Dec 05 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.524034515 +0000 UTC m=+0.054332854 container create 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.502240428 +0000 UTC m=+0.032538747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:45 compute-0 systemd[1]: Started libpod-conmon-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope.
Dec 05 01:14:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.670942505 +0000 UTC m=+0.201240824 container init 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.68551441 +0000 UTC m=+0.215812749 container start 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.69376918 +0000 UTC m=+0.224067479 container attach 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:14:46 compute-0 ceph-mon[192914]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec 05 01:14:46 compute-0 trusting_hugle[212428]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:14:46 compute-0 trusting_hugle[212428]: --> relative data size: 1.0
Dec 05 01:14:46 compute-0 trusting_hugle[212428]: --> All data devices are unavailable
Dec 05 01:14:46 compute-0 systemd[1]: libpod-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Deactivated successfully.
Dec 05 01:14:46 compute-0 podman[212413]: 2025-12-05 01:14:46.800749709 +0000 UTC m=+1.331048008 container died 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:14:46 compute-0 systemd[1]: libpod-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Consumed 1.056s CPU time.
Dec 05 01:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de-merged.mount: Deactivated successfully.
Dec 05 01:14:46 compute-0 podman[212413]: 2025-12-05 01:14:46.86474056 +0000 UTC m=+1.395038859 container remove 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:46 compute-0 systemd[1]: libpod-conmon-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Deactivated successfully.
Dec 05 01:14:46 compute-0 sudo[212312]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:46 compute-0 sudo[212468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:46 compute-0 sudo[212468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:46 compute-0 sudo[212468]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:47 compute-0 sudo[212493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:47 compute-0 sudo[212493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:47 compute-0 sudo[212493]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:47 compute-0 sudo[212518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:47 compute-0 sudo[212518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:47 compute-0 sudo[212518]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:47 compute-0 sudo[212543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:14:47 compute-0 sudo[212543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:47 compute-0 ceph-mon[192914]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.728268161 +0000 UTC m=+0.086656673 container create a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:14:47 compute-0 systemd[1]: Started libpod-conmon-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope.
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.695362205 +0000 UTC m=+0.053750757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:47 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:14:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.843831778 +0000 UTC m=+0.202220290 container init a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.855780281 +0000 UTC m=+0.214168753 container start a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.859853264 +0000 UTC m=+0.218241776 container attach a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:14:47 compute-0 tender_goldwasser[212624]: 167 167
Dec 05 01:14:47 compute-0 systemd[1]: libpod-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope: Deactivated successfully.
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.863026993 +0000 UTC m=+0.221415465 container died a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba757d730ff10c334650be52e677c2bd835f95f1588311c19b6aed4848505fd-merged.mount: Deactivated successfully.
Dec 05 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.942180876 +0000 UTC m=+0.300569349 container remove a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:14:47 compute-0 systemd[1]: libpod-conmon-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope: Deactivated successfully.
Dec 05 01:14:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.198710427 +0000 UTC m=+0.089874473 container create 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.170223434 +0000 UTC m=+0.061387510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:48 compute-0 systemd[1]: Started libpod-conmon-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope.
Dec 05 01:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.380960361 +0000 UTC m=+0.272124427 container init 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:14:48 compute-0 ceph-mon[192914]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.413055205 +0000 UTC m=+0.304219241 container start 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.419626968 +0000 UTC m=+0.310791094 container attach 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]: {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     "0": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "devices": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "/dev/loop3"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             ],
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_name": "ceph_lv0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_size": "21470642176",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "name": "ceph_lv0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "tags": {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.crush_device_class": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.encrypted": "0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_id": "0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.vdo": "0"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             },
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "vg_name": "ceph_vg0"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         }
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     ],
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     "1": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "devices": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "/dev/loop4"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             ],
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_name": "ceph_lv1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_size": "21470642176",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "name": "ceph_lv1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "tags": {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.crush_device_class": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.encrypted": "0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_id": "1",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.vdo": "0"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             },
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "vg_name": "ceph_vg1"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         }
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     ],
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     "2": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "devices": [
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "/dev/loop5"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             ],
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_name": "ceph_lv2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_size": "21470642176",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "name": "ceph_lv2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "tags": {
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.cluster_name": "ceph",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.crush_device_class": "",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.encrypted": "0",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osd_id": "2",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:                 "ceph.vdo": "0"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             },
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "type": "block",
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:             "vg_name": "ceph_vg2"
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:         }
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]:     ]
Dec 05 01:14:49 compute-0 awesome_lichterman[212664]: }
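
The JSON that closes above is the tail of a per-OSD report in the shape emitted by ceph-volume lvm list --format json: top-level keys are OSD ids, each mapping to a list of LV records whose "tags" object mirrors the comma-joined "lv_tags" string. A minimal, hedged sketch of reducing that report to an osd_id-to-device summary — "lvm_list.json" is a hypothetical capture of the block above, not a file this host actually writes:

    import json

    # Parse the per-OSD report from `ceph-volume lvm list --format json`
    # (the JSON block logged above). Top-level keys are OSD ids ("1", "2", ...);
    # each maps to a list of LV records carrying the ceph.* tags.
    # NOTE: "lvm_list.json" is a hypothetical capture of that output.
    with open("lvm_list.json") as f:
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: lv_path={lv['lv_path']} "
                f"devices={','.join(lv.get('devices', []))} "
                f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                f"encrypted={tags.get('ceph.encrypted', '0')}"
            )
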
Dec 05 01:14:49 compute-0 systemd[1]: libpod-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope: Deactivated successfully.
Dec 05 01:14:49 compute-0 podman[212648]: 2025-12-05 01:14:49.252102194 +0000 UTC m=+1.143266280 container died 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd-merged.mount: Deactivated successfully.
Dec 05 01:14:49 compute-0 podman[212648]: 2025-12-05 01:14:49.363398062 +0000 UTC m=+1.254562128 container remove 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:49 compute-0 systemd[1]: libpod-conmon-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope: Deactivated successfully.
Dec 05 01:14:49 compute-0 sudo[212543]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:49 compute-0 podman[212674]: 2025-12-05 01:14:49.425028468 +0000 UTC m=+0.136209233 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:14:49 compute-0 sudo[212706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:49 compute-0 sudo[212706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:49 compute-0 sudo[212706]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:49 compute-0 sudo[212732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:49 compute-0 sudo[212732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:49 compute-0 sudo[212732]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:49 compute-0 sudo[212757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:49 compute-0 sudo[212757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:49 compute-0 sudo[212757]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:49 compute-0 podman[212781]: 2025-12-05 01:14:49.91251152 +0000 UTC m=+0.108078860 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:14:49 compute-0 sudo[212795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:14:49 compute-0 sudo[212795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:49 compute-0 podman[212782]: 2025-12-05 01:14:49.957614546 +0000 UTC m=+0.154101052 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:14:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:50 compute-0 sshd-session[212888]: Connection closed by 23.94.28.167 port 35646
Dec 05 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.452359939 +0000 UTC m=+0.093296668 container create 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.411118611 +0000 UTC m=+0.052055390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:50 compute-0 systemd[1]: Started libpod-conmon-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope.
Dec 05 01:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.608219319 +0000 UTC m=+0.249156048 container init 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.619615626 +0000 UTC m=+0.260552325 container start 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.625587222 +0000 UTC m=+0.266523971 container attach 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:50 compute-0 strange_ellis[212910]: 167 167
Dec 05 01:14:50 compute-0 systemd[1]: libpod-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope: Deactivated successfully.
Dec 05 01:14:50 compute-0 podman[212915]: 2025-12-05 01:14:50.717194202 +0000 UTC m=+0.050652821 container died 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe52a3a2a907b67780964303a6702aa48d084501cc1d644f56e3b6f198f4f133-merged.mount: Deactivated successfully.
Dec 05 01:14:50 compute-0 podman[212915]: 2025-12-05 01:14:50.801489359 +0000 UTC m=+0.134947918 container remove 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:50 compute-0 systemd[1]: libpod-conmon-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope: Deactivated successfully.
Dec 05 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.07520544 +0000 UTC m=+0.098754471 container create ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.035537485 +0000 UTC m=+0.059086556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:51 compute-0 ceph-mon[192914]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:51 compute-0 systemd[1]: Started libpod-conmon-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope.
Dec 05 01:14:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.211650278 +0000 UTC m=+0.235199359 container init ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.225057432 +0000 UTC m=+0.248606433 container start ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.230797111 +0000 UTC m=+0.254346142 container attach ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:14:51 compute-0 podman[212959]: 2025-12-05 01:14:51.711486823 +0000 UTC m=+0.107870943 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:52 compute-0 elated_jennings[212954]: {
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_id": 0,
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "type": "bluestore"
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     },
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_id": 1,
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "type": "bluestore"
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     },
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_id": 2,
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:14:52 compute-0 elated_jennings[212954]:         "type": "bluestore"
Dec 05 01:14:52 compute-0 elated_jennings[212954]:     }
Dec 05 01:14:52 compute-0 elated_jennings[212954]: }
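
The block just closed matches the cephadm-wrapped invocation logged at 01:14:49 (sudo ... ceph-volume --fsid ... -- raw list --format json): a map keyed by osd_uuid, each entry naming the bluestore device-mapper path and its osd_id. A small consistency check between the two reports, under the same assumption of hypothetical captures "lvm_list.json" and "raw_list.json" of the two JSON blocks in this journal:

    import json

    # Cross-check the `lvm list` tags against the `raw list` inventory:
    # every ceph.osd_fsid tagged on an LV should appear as an osd_uuid key
    # in the raw report, with a matching osd_id.
    # NOTE: "lvm_list.json" / "raw_list.json" are hypothetical captures.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            entry = raw.get(fsid)
            assert entry is not None, f"osd.{osd_id}: {fsid} missing from raw list"
            assert str(entry["osd_id"]) == osd_id, f"osd_id mismatch for {fsid}"
            print(f"osd.{osd_id} ok: {entry['device']} ({entry['type']})")
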
Dec 05 01:14:52 compute-0 systemd[1]: libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Deactivated successfully.
Dec 05 01:14:52 compute-0 systemd[1]: libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Consumed 1.165s CPU time.
Dec 05 01:14:52 compute-0 conmon[212954]: conmon ebac158785ad08b61be2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope/container/memory.events
Dec 05 01:14:52 compute-0 podman[212937]: 2025-12-05 01:14:52.403644603 +0000 UTC m=+1.427193634 container died ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4-merged.mount: Deactivated successfully.
Dec 05 01:14:52 compute-0 podman[212937]: 2025-12-05 01:14:52.491463068 +0000 UTC m=+1.515012049 container remove ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 05 01:14:52 compute-0 systemd[1]: libpod-conmon-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Deactivated successfully.
Dec 05 01:14:52 compute-0 sudo[212795]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 sudo[213019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:52 compute-0 sudo[213019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:52 compute-0 sudo[213019]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:52 compute-0 sudo[213044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:14:52 compute-0 sudo[213044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:52 compute-0 sudo[213044]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:52 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:52 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 01:14:52 compute-0 sudo[213069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:52 compute-0 sudo[213069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:52 compute-0 sudo[213069]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:53 compute-0 sudo[213094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:53 compute-0 sudo[213094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:53 compute-0 sudo[213094]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:53 compute-0 ceph-mon[192914]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 05 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:53 compute-0 sudo[213119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:53 compute-0 sudo[213119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:53 compute-0 sudo[213119]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:53 compute-0 sudo[213144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:14:53 compute-0 sudo[213144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.604245837 +0000 UTC m=+0.049282523 container create aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 01:14:53 compute-0 systemd[1]: Started libpod-conmon-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope.
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.585005061 +0000 UTC m=+0.030041767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.719768113 +0000 UTC m=+0.164804819 container init aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.729878215 +0000 UTC m=+0.174914921 container start aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.734720779 +0000 UTC m=+0.179757465 container attach aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:14:53 compute-0 clever_darwin[213201]: 167 167
Dec 05 01:14:53 compute-0 systemd[1]: libpod-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope: Deactivated successfully.
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.737412344 +0000 UTC m=+0.182449030 container died aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4df6a19e4ced6c34fffb0ebc080ab24e338ba19d2f75e1043d9fa22cce95348-merged.mount: Deactivated successfully.
Dec 05 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.806856098 +0000 UTC m=+0.251892784 container remove aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:53 compute-0 systemd[1]: libpod-conmon-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope: Deactivated successfully.
Dec 05 01:14:53 compute-0 sudo[213144]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:53 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec 05 01:14:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec 05 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 05 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:53 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec 05 01:14:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec 05 01:14:54 compute-0 sudo[213221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:54 compute-0 sudo[213221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 sudo[213221]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:54 compute-0 sudo[213246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:54 compute-0 sudo[213246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 sudo[213246]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:54 compute-0 ceph-mon[192914]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 05 01:14:54 compute-0 ceph-mon[192914]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 05 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 05 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:54 compute-0 sudo[213271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:54 compute-0 sudo[213271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 sudo[213271]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:54 compute-0 sudo[213302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:14:54 compute-0 sudo[213302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 podman[213295]: 2025-12-05 01:14:54.352216981 +0000 UTC m=+0.112748530 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.573762369 +0000 UTC m=+0.047085272 container create 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:54 compute-0 systemd[1]: Started libpod-conmon-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope.
Dec 05 01:14:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.551825368 +0000 UTC m=+0.025148251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.670753919 +0000 UTC m=+0.144076832 container init 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.679542284 +0000 UTC m=+0.152865177 container start 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.683455513 +0000 UTC m=+0.156778416 container attach 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:14:54 compute-0 serene_lalande[213372]: 167 167
Dec 05 01:14:54 compute-0 systemd[1]: libpod-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope: Deactivated successfully.
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.692656519 +0000 UTC m=+0.165979412 container died 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-23f6851471a0ef9a74aefeea38cbf0323fe725c6dbc84110ca0ddbeeec77ab95-merged.mount: Deactivated successfully.
Dec 05 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.738935688 +0000 UTC m=+0.212258591 container remove 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:14:54 compute-0 systemd[1]: libpod-conmon-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope: Deactivated successfully.
Dec 05 01:14:54 compute-0 sudo[213302]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:54 compute-0 sudo[213389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:54 compute-0 sudo[213389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 sudo[213389]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:54 compute-0 sudo[213414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:54 compute-0 sudo[213414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:54 compute-0 sudo[213414]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:55 compute-0 sudo[213439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:55 compute-0 sudo[213439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:55 compute-0 sudo[213439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:55 compute-0 sudo[213464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:14:55 compute-0 sudo[213464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:55 compute-0 ceph-mon[192914]: Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec 05 01:14:55 compute-0 ceph-mon[192914]: Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec 05 01:14:55 compute-0 ceph-mon[192914]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:55 compute-0 podman[213559]: 2025-12-05 01:14:55.715760911 +0000 UTC m=+0.070145514 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:14:55 compute-0 podman[213559]: 2025-12-05 01:14:55.832741078 +0000 UTC m=+0.187125691 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:56 compute-0 sudo[213464]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 25a0b490-005e-441b-a51c-b0cba394fca0 does not exist
Dec 05 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b92f9777-0a54-4544-bebf-fa911afaed42 does not exist
Dec 05 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8160cbf4-62f7-4181-ae9e-cf19cba0726b does not exist
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:56 compute-0 sudo[213675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:56 compute-0 sudo[213675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:56 compute-0 sudo[213675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:56 compute-0 sudo[213700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:56 compute-0 sudo[213700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:56 compute-0 sudo[213700]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:56 compute-0 sudo[213737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:56 compute-0 sudo[213737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:56 compute-0 sudo[213737]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:56 compute-0 sudo[213766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwsmbmugmfpevywnfpdojtzvzqcwanin ; /usr/bin/python3'
Dec 05 01:14:56 compute-0 sudo[213766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:14:56 compute-0 sudo[213776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:14:56 compute-0 sudo[213776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:56 compute-0 python3[213775]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:14:56 compute-0 podman[213802]: 2025-12-05 01:14:56.953608522 +0000 UTC m=+0.052135522 container create 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:57 compute-0 systemd[1]: Started libpod-conmon-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope.
Dec 05 01:14:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:56.928989677 +0000 UTC m=+0.027516707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.090691139 +0000 UTC m=+0.189218129 container init 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.11732207 +0000 UTC m=+0.215849050 container start 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.122853485 +0000 UTC m=+0.221380465 container attach 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:14:57 compute-0 podman[213840]: 2025-12-05 01:14:57.161010297 +0000 UTC m=+0.123526411 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 05 01:14:57 compute-0 ceph-mon[192914]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.323555822 +0000 UTC m=+0.061750800 container create 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:57 compute-0 systemd[1]: Started libpod-conmon-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope.
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.293724421 +0000 UTC m=+0.031919409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.423047212 +0000 UTC m=+0.161242170 container init 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.434076669 +0000 UTC m=+0.172271627 container start 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.439114349 +0000 UTC m=+0.177309297 container attach 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:14:57 compute-0 zealous_leavitt[213895]: 167 167
Dec 05 01:14:57 compute-0 systemd[1]: libpod-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope: Deactivated successfully.
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.447731269 +0000 UTC m=+0.185926227 container died 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db5914ce12bef39b78ba8067b9e73c11a95a0bb0cde50ef055d03ad3b12f8f9-merged.mount: Deactivated successfully.
Dec 05 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.498500762 +0000 UTC m=+0.236695710 container remove 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:14:57 compute-0 systemd[1]: libpod-conmon-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope: Deactivated successfully.
Dec 05 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.752505463 +0000 UTC m=+0.079966444 container create 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.717991588 +0000 UTC m=+0.045452629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:14:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 05 01:14:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439009551' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:14:57 compute-0 practical_poincare[213842]: 
Dec 05 01:14:57 compute-0 practical_poincare[213842]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502738944,"bytes_avail":63909187584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T01:14:18.046599+0000","services":{}},"progress_events":{}}
Dec 05 01:14:57 compute-0 systemd[1]: Started libpod-conmon-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope.
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.877354257 +0000 UTC m=+0.975881297 container died 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:57 compute-0 systemd[1]: libpod-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope: Deactivated successfully.
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.938609578 +0000 UTC m=+0.266070569 container init 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c-merged.mount: Deactivated successfully.
Dec 05 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.959915909 +0000 UTC m=+0.287376860 container start 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.96742586 +0000 UTC m=+0.294886851 container attach 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.983271894 +0000 UTC m=+1.081798864 container remove 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 01:14:58 compute-0 systemd[1]: libpod-conmon-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope: Deactivated successfully.
Dec 05 01:14:58 compute-0 sudo[213766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3439009551' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:14:58 compute-0 sudo[213994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eomjbraxwaprliizfqfqglbxenvjdnvv ; /usr/bin/python3'
Dec 05 01:14:58 compute-0 sudo[213994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:14:58 compute-0 python3[213996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.649422728 +0000 UTC m=+0.090983728 container create ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.610744212 +0000 UTC m=+0.052305292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:14:58 compute-0 systemd[1]: Started libpod-conmon-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope.
Dec 05 01:14:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.816340809 +0000 UTC m=+0.257901879 container init ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.833256652 +0000 UTC m=+0.274817642 container start ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.840377622 +0000 UTC m=+0.281938642 container attach ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:14:59 compute-0 ceph-mon[192914]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:14:59 compute-0 wonderful_archimedes[213955]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:14:59 compute-0 wonderful_archimedes[213955]: --> relative data size: 1.0
Dec 05 01:14:59 compute-0 wonderful_archimedes[213955]: --> All data devices are unavailable
Dec 05 01:14:59 compute-0 systemd[1]: libpod-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Deactivated successfully.
Dec 05 01:14:59 compute-0 podman[213937]: 2025-12-05 01:14:59.258820471 +0000 UTC m=+1.586281422 container died 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:14:59 compute-0 systemd[1]: libpod-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Consumed 1.221s CPU time.
Dec 05 01:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71-merged.mount: Deactivated successfully.
Dec 05 01:14:59 compute-0 podman[213937]: 2025-12-05 01:14:59.34835406 +0000 UTC m=+1.675815001 container remove 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:14:59 compute-0 systemd[1]: libpod-conmon-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Deactivated successfully.
Dec 05 01:14:59 compute-0 sudo[213776]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:14:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:14:59 compute-0 sudo[214071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:59 compute-0 sudo[214071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:59 compute-0 sudo[214071]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:59 compute-0 sudo[214099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:14:59 compute-0 sudo[214099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:59 compute-0 sudo[214099]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:59 compute-0 sudo[214124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:14:59 compute-0 sudo[214124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:14:59 compute-0 sudo[214124]: pam_unix(sudo:session): session closed for user root
Dec 05 01:14:59 compute-0 podman[158197]: time="2025-12-05T01:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30683 "" "Go-http-client/1.1"
Dec 05 01:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6252 "" "Go-http-client/1.1"
Dec 05 01:14:59 compute-0 sudo[214149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:14:59 compute-0 sudo[214149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 05 01:15:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 05 01:15:00 compute-0 cranky_margulis[214016]: pool 'vms' created
Dec 05 01:15:00 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 05 01:15:00 compute-0 systemd[1]: libpod-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope: Deactivated successfully.
Dec 05 01:15:00 compute-0 podman[213997]: 2025-12-05 01:15:00.265148708 +0000 UTC m=+1.706709708 container died ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219-merged.mount: Deactivated successfully.
Dec 05 01:15:00 compute-0 podman[213997]: 2025-12-05 01:15:00.337860005 +0000 UTC m=+1.779420995 container remove ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.360506562 +0000 UTC m=+0.079019228 container create 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:00 compute-0 systemd[1]: libpod-conmon-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope: Deactivated successfully.
Dec 05 01:15:00 compute-0 sudo[213994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:00 compute-0 systemd[1]: Started libpod-conmon-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope.
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.341932124 +0000 UTC m=+0.060444810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.465431343 +0000 UTC m=+0.183944029 container init 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.475191814 +0000 UTC m=+0.193704480 container start 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.479349486 +0000 UTC m=+0.197862252 container attach 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:00 compute-0 interesting_leavitt[214240]: 167 167
Dec 05 01:15:00 compute-0 systemd[1]: libpod-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope: Deactivated successfully.
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.487698009 +0000 UTC m=+0.206210675 container died 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 05 01:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-69cafeafffae32b2f7de309446f8a65fde52389a7d3c5200d128262fae904f69-merged.mount: Deactivated successfully.
Dec 05 01:15:00 compute-0 sudo[214275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzwdtowirlbburpwooraqoxivnzluxey ; /usr/bin/python3'
Dec 05 01:15:00 compute-0 sudo[214275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.538226183 +0000 UTC m=+0.256738849 container remove 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:00 compute-0 systemd[1]: libpod-conmon-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope: Deactivated successfully.
Dec 05 01:15:00 compute-0 python3[214280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:00 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.794707893 +0000 UTC m=+0.071864066 container create ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.814595426 +0000 UTC m=+0.058157389 container create 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.77107263 +0000 UTC m=+0.048228783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:00 compute-0 systemd[1]: Started libpod-conmon-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope.
Dec 05 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.781348145 +0000 UTC m=+0.024910128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:00 compute-0 systemd[1]: Started libpod-conmon-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope.
Dec 05 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.940838488 +0000 UTC m=+0.217994731 container init ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.952451209 +0000 UTC m=+0.196013162 container init 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.956804195 +0000 UTC m=+0.233960348 container start ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.959469817 +0000 UTC m=+0.203031770 container start 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.96333335 +0000 UTC m=+0.240489503 container attach ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.974272483 +0000 UTC m=+0.217834456 container attach 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 05 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 05 01:15:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 05 01:15:01 compute-0 ceph-mon[192914]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:01 compute-0 ceph-mon[192914]: osdmap e17: 3 total, 3 up, 3 in
Dec 05 01:15:01 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:15:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]: {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     "0": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "devices": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "/dev/loop3"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             ],
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_name": "ceph_lv0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_size": "21470642176",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "name": "ceph_lv0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "tags": {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.crush_device_class": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.encrypted": "0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_id": "0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.vdo": "0"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             },
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "vg_name": "ceph_vg0"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         }
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     ],
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     "1": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "devices": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "/dev/loop4"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             ],
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_name": "ceph_lv1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_size": "21470642176",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "name": "ceph_lv1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "tags": {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.crush_device_class": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.encrypted": "0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_id": "1",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.vdo": "0"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             },
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "vg_name": "ceph_vg1"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         }
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     ],
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     "2": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "devices": [
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "/dev/loop5"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             ],
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_name": "ceph_lv2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_size": "21470642176",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "name": "ceph_lv2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "tags": {
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.crush_device_class": "",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.encrypted": "0",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osd_id": "2",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:                 "ceph.vdo": "0"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             },
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "type": "block",
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:             "vg_name": "ceph_vg2"
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:         }
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]:     ]
Dec 05 01:15:01 compute-0 stoic_lovelace[214318]: }
Dec 05 01:15:01 compute-0 systemd[1]: libpod-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope: Deactivated successfully.
Dec 05 01:15:01 compute-0 podman[214289]: 2025-12-05 01:15:01.786539021 +0000 UTC m=+1.063695204 container died ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca-merged.mount: Deactivated successfully.
Dec 05 01:15:01 compute-0 podman[214289]: 2025-12-05 01:15:01.891257046 +0000 UTC m=+1.168413209 container remove ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:01 compute-0 systemd[1]: libpod-conmon-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope: Deactivated successfully.
Dec 05 01:15:01 compute-0 sudo[214149]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:01 compute-0 podman[214352]: 2025-12-05 01:15:01.951919851 +0000 UTC m=+0.116087370 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:02 compute-0 anacron[91608]: Job `cron.daily' started
Dec 05 01:15:02 compute-0 anacron[91608]: Job `cron.daily' terminated
Dec 05 01:15:02 compute-0 sudo[214386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:02 compute-0 sudo[214386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:02 compute-0 sudo[214386]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:02 compute-0 sudo[214413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:02 compute-0 sudo[214413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:02 compute-0 sudo[214413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 05 01:15:02 compute-0 ceph-mon[192914]: osdmap e18: 3 total, 3 up, 3 in
Dec 05 01:15:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 05 01:15:02 compute-0 exciting_mendel[214317]: pool 'volumes' created
Dec 05 01:15:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 05 01:15:02 compute-0 sudo[214438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:02 compute-0 sudo[214438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:02 compute-0 sudo[214438]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:02 compute-0 systemd[1]: libpod-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope: Deactivated successfully.
Dec 05 01:15:02 compute-0 podman[214299]: 2025-12-05 01:15:02.338338842 +0000 UTC m=+1.581900805 container died 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369-merged.mount: Deactivated successfully.
Dec 05 01:15:02 compute-0 podman[214299]: 2025-12-05 01:15:02.411549892 +0000 UTC m=+1.655111855 container remove 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:15:02 compute-0 systemd[1]: libpod-conmon-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope: Deactivated successfully.
Dec 05 01:15:02 compute-0 sudo[214275]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:02 compute-0 sudo[214469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:15:02 compute-0 sudo[214469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:02 compute-0 sudo[214522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpznnwhvssjjmvdfrqooiebibjtzdcca ; /usr/bin/python3'
Dec 05 01:15:02 compute-0 sudo[214522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:02 compute-0 python3[214526]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:02 compute-0 podman[214550]: 2025-12-05 01:15:02.889202887 +0000 UTC m=+0.078101763 container create f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:02 compute-0 systemd[1]: Started libpod-conmon-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope.
Dec 05 01:15:02 compute-0 podman[214550]: 2025-12-05 01:15:02.857677353 +0000 UTC m=+0.046576269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:02 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.021172732 +0000 UTC m=+0.210071628 container init f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.03898635 +0000 UTC m=+0.227885226 container start f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.042474113 +0000 UTC m=+0.093349592 container create a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.047664942 +0000 UTC m=+0.236563828 container attach f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:03 compute-0 systemd[1]: Started libpod-conmon-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope.
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:02.999948414 +0000 UTC m=+0.050823943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.138246798 +0000 UTC m=+0.189122337 container init a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.149205432 +0000 UTC m=+0.200080921 container start a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.155533372 +0000 UTC m=+0.206408901 container attach a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:03 compute-0 upbeat_blackwell[214599]: 167 167
Dec 05 01:15:03 compute-0 systemd[1]: libpod-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope: Deactivated successfully.
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.168975902 +0000 UTC m=+0.219851421 container died a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccb8ce5e0d4a0b1ba42974a0786132ff1d0ad8a1463ae9af456a734714d9852-merged.mount: Deactivated successfully.
Dec 05 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.256768893 +0000 UTC m=+0.307644382 container remove a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:15:03 compute-0 ceph-mon[192914]: pgmap v62: 2 pgs: 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:03 compute-0 ceph-mon[192914]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:03 compute-0 ceph-mon[192914]: osdmap e19: 3 total, 3 up, 3 in
Dec 05 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 05 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 05 01:15:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 05 01:15:03 compute-0 systemd[1]: libpod-conmon-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope: Deactivated successfully.
Dec 05 01:15:03 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.514213329 +0000 UTC m=+0.067529789 container create 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.488743637 +0000 UTC m=+0.042060137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:03 compute-0 systemd[1]: Started libpod-conmon-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope.
Dec 05 01:15:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:15:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.687130511 +0000 UTC m=+0.240447041 container init 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.69528776 +0000 UTC m=+0.248604220 container start 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.700912271 +0000 UTC m=+0.254228721 container attach 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:15:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 05 01:15:04 compute-0 ceph-mon[192914]: osdmap e20: 3 total, 3 up, 3 in
Dec 05 01:15:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 05 01:15:04 compute-0 clever_jones[214581]: pool 'backups' created
Dec 05 01:15:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 05 01:15:04 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:04 compute-0 systemd[1]: libpod-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope: Deactivated successfully.
Dec 05 01:15:04 compute-0 podman[214550]: 2025-12-05 01:15:04.376421585 +0000 UTC m=+1.565320491 container died f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8-merged.mount: Deactivated successfully.
Dec 05 01:15:04 compute-0 podman[214550]: 2025-12-05 01:15:04.476559598 +0000 UTC m=+1.665458474 container remove f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:15:04 compute-0 systemd[1]: libpod-conmon-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope: Deactivated successfully.
Dec 05 01:15:04 compute-0 sudo[214522]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:04 compute-0 sudo[214713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytivlsqnwjkacdjaxjcuazkqzcmapmdz ; /usr/bin/python3'
Dec 05 01:15:04 compute-0 sudo[214713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:04 compute-0 python3[214720]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
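
The Ansible task above drives the Ceph CLI through a short-lived podman container rather than a host-installed client, which is why every admin command in this log is bracketed by container create/start/attach and died/remove events. A minimal standalone sketch of that pattern, with the image, fsid, and pool name taken verbatim from the task above:

  # Run one ceph admin command from a throwaway container; --rm deletes
  # the container on exit, producing the "died"/"remove" podman events.
  # /etc/ceph supplies ceph.conf and the client.admin keyring.
  podman run --rm --net=host --ipc=host \
    --volume /etc/ceph:/etc/ceph:z \
    --entrypoint ceph quay.io/ceph/ceph:v18 \
    --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    osd pool create images replicated_rule --autoscale-mode on
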
Dec 05 01:15:04 compute-0 awesome_yalow[214659]: {
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_id": 0,
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "type": "bluestore"
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     },
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_id": 1,
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "type": "bluestore"
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     },
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_id": 2,
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:         "type": "bluestore"
Dec 05 01:15:04 compute-0 awesome_yalow[214659]:     }
Dec 05 01:15:04 compute-0 awesome_yalow[214659]: }
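
The JSON block printed by the awesome_yalow container maps each OSD UUID to its backing logical volume and object-store type; it has the shape of `ceph-volume lvm list --format json` output (an inference, since the invoking command is not shown in this excerpt). A hedged post-processing sketch, assuming the block is saved to /tmp/osd_inventory.json (a hypothetical path):

  # Flatten the inventory to one line per OSD with jq.
  jq -r 'to_entries[] | "osd.\(.value.osd_id) \(.value.device) \(.value.type)"' \
    /tmp/osd_inventory.json
  # Expected output, given the block above:
  # osd.0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
  # osd.1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore
  # osd.2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore
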
Dec 05 01:15:04 compute-0 systemd[1]: libpod-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Deactivated successfully.
Dec 05 01:15:04 compute-0 systemd[1]: libpod-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Consumed 1.202s CPU time.
Dec 05 01:15:04 compute-0 podman[214642]: 2025-12-05 01:15:04.906740671 +0000 UTC m=+1.460057141 container died 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9-merged.mount: Deactivated successfully.
Dec 05 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:04.999138146 +0000 UTC m=+0.105814095 container create 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:05 compute-0 podman[214642]: 2025-12-05 01:15:05.007817979 +0000 UTC m=+1.561134429 container remove 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:15:05 compute-0 systemd[1]: libpod-conmon-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Deactivated successfully.
Dec 05 01:15:05 compute-0 sudo[214469]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:05 compute-0 systemd[1]: Started libpod-conmon-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope.
Dec 05 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:04.977470636 +0000 UTC m=+0.084146605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.136099305 +0000 UTC m=+0.242775274 container init 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.153523192 +0000 UTC m=+0.260199141 container start 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.161466904 +0000 UTC m=+0.268142873 container attach 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:05 compute-0 sudo[214764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:05 compute-0 sudo[214764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:05 compute-0 sudo[214764]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:05 compute-0 sudo[214791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 05 01:15:05 compute-0 sudo[214791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:05 compute-0 ceph-mon[192914]: pgmap v65: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:05 compute-0 ceph-mon[192914]: osdmap e21: 3 total, 3 up, 3 in
Dec 05 01:15:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:05 compute-0 sudo[214791]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 05 01:15:05 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 05 01:15:05 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
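
The two ceph-osd lines around this pool creation trace a placement group's startup: pg 4.0 of the new 'backups' pool first reports state<Start>: transitioning to Primary (epoch 21), then AllReplicasActivated once peering finishes (epoch 22), after which the pgmap counts it as active+clean instead of unknown. A quick way to watch the same transition from the CLI, using stock ceph subcommands and the pool name from this log:

  # Summarize PG states, then list the PGs of the new pool.
  ceph pg stat
  ceph pg ls-by-pool backups
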
Dec 05 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 1 creating+peering, 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 05 01:15:06 compute-0 ceph-mon[192914]: osdmap e22: 3 total, 3 up, 3 in
Dec 05 01:15:06 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:06 compute-0 ceph-mon[192914]: pgmap v68: 4 pgs: 1 creating+peering, 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 05 01:15:06 compute-0 compassionate_brattain[214762]: pool 'images' created
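
Each pool creation follows the same monitor-side sequence visible here: the audit channel logs the command once at dispatch and again at finished, and the change is committed as a new osdmap epoch (e20 to e21 for 'backups', e22 to e23 for 'images'). A sketch for verifying the result afterwards, using standard ceph subcommands:

  # List the pools just created and the current osdmap epoch.
  ceph osd pool ls detail | grep -E "'(backups|images)'"
  ceph osd dump | grep ^epoch
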
Dec 05 01:15:06 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 05 01:15:06 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:06 compute-0 systemd[1]: libpod-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope: Deactivated successfully.
Dec 05 01:15:06 compute-0 podman[214731]: 2025-12-05 01:15:06.439821227 +0000 UTC m=+1.546497246 container died 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc-merged.mount: Deactivated successfully.
Dec 05 01:15:06 compute-0 podman[214731]: 2025-12-05 01:15:06.527058174 +0000 UTC m=+1.633734163 container remove 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:15:06 compute-0 systemd[1]: libpod-conmon-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope: Deactivated successfully.
Dec 05 01:15:06 compute-0 sudo[214713]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:06 compute-0 sudo[214872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klqjkfuighqkslnnimwarxnfjiftgguk ; /usr/bin/python3'
Dec 05 01:15:06 compute-0 sudo[214872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:06 compute-0 python3[214874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.095780188 +0000 UTC m=+0.111147688 container create 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.047130815 +0000 UTC m=+0.062498415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:07 compute-0 systemd[1]: Started libpod-conmon-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope.
Dec 05 01:15:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.23773113 +0000 UTC m=+0.253098710 container init 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.255978509 +0000 UTC m=+0.271346019 container start 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.260839549 +0000 UTC m=+0.276207149 container attach 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 05 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 05 01:15:07 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 05 01:15:07 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:07 compute-0 ceph-mon[192914]: osdmap e23: 3 total, 3 up, 3 in
Dec 05 01:15:07 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:15:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 2 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 05 01:15:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 05 01:15:08 compute-0 modest_meninsky[214890]: pool 'cephfs.cephfs.meta' created
Dec 05 01:15:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 05 01:15:08 compute-0 ceph-mon[192914]: osdmap e24: 3 total, 3 up, 3 in
Dec 05 01:15:08 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:08 compute-0 ceph-mon[192914]: pgmap v71: 5 pgs: 2 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:08 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:08 compute-0 systemd[1]: libpod-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope: Deactivated successfully.
Dec 05 01:15:08 compute-0 podman[214875]: 2025-12-05 01:15:08.456136558 +0000 UTC m=+1.471504108 container died 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a-merged.mount: Deactivated successfully.
Dec 05 01:15:08 compute-0 podman[214875]: 2025-12-05 01:15:08.552943671 +0000 UTC m=+1.568311191 container remove 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:08 compute-0 systemd[1]: libpod-conmon-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope: Deactivated successfully.
Dec 05 01:15:08 compute-0 sudo[214872]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:08 compute-0 sudo[214951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nghpiqsyjqjdrphsgvfuqmquehyutfnj ; /usr/bin/python3'
Dec 05 01:15:08 compute-0 sudo[214951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:08 compute-0 python3[214953]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.093189502 +0000 UTC m=+0.110723876 container create aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.052755169 +0000 UTC m=+0.070289613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:09 compute-0 systemd[1]: Started libpod-conmon-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope.
Dec 05 01:15:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.234737304 +0000 UTC m=+0.252271678 container init aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.247084485 +0000 UTC m=+0.264618839 container start aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.252426648 +0000 UTC m=+0.269961002 container attach aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 05 01:15:09 compute-0 ceph-mon[192914]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:09 compute-0 ceph-mon[192914]: osdmap e25: 3 total, 3 up, 3 in
Dec 05 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 05 01:15:09 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 05 01:15:09 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 05 01:15:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 05 01:15:10 compute-0 ceph-mon[192914]: osdmap e26: 3 total, 3 up, 3 in
Dec 05 01:15:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 05 01:15:10 compute-0 ceph-mon[192914]: pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 05 01:15:10 compute-0 sharp_galileo[214969]: pool 'cephfs.cephfs.data' created
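
The cephfs.cephfs.meta and cephfs.cephfs.data pools follow the usual cephfs.<fsname>.<role> naming: a CephFS filesystem needs a dedicated metadata pool plus at least one data pool. This excerpt stops before any filesystem is created from them; if that step were run, the conventional form would be (hypothetical here, not taken from this log):

  # Create a filesystem named 'cephfs' from the two pools above.
  ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
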
Dec 05 01:15:10 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 05 01:15:10 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:10 compute-0 systemd[1]: libpod-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope: Deactivated successfully.
Dec 05 01:15:10 compute-0 podman[214954]: 2025-12-05 01:15:10.526687661 +0000 UTC m=+1.544222035 container died aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e-merged.mount: Deactivated successfully.
Dec 05 01:15:10 compute-0 podman[214954]: 2025-12-05 01:15:10.620201726 +0000 UTC m=+1.637736080 container remove aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:15:10 compute-0 systemd[1]: libpod-conmon-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope: Deactivated successfully.
Dec 05 01:15:10 compute-0 sudo[214951]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:10 compute-0 sudo[215031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iforpxgpusbhrzlfepkvefmedzjwoopj ; /usr/bin/python3'
Dec 05 01:15:10 compute-0 sudo[215031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:11 compute-0 python3[215033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.179388435 +0000 UTC m=+0.086440997 container create 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:11 compute-0 systemd[1]: Started libpod-conmon-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope.
Dec 05 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.147784108 +0000 UTC m=+0.054836670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.326628249 +0000 UTC m=+0.233680821 container init 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.337572122 +0000 UTC m=+0.244624714 container start 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.343022898 +0000 UTC m=+0.250075470 container attach 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 05 01:15:11 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 05 01:15:11 compute-0 ceph-mon[192914]: osdmap e27: 3 total, 3 up, 3 in
Dec 05 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 05 01:15:11 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 05 01:15:11 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec 05 01:15:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 05 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 05 01:15:12 compute-0 ceph-mon[192914]: osdmap e28: 3 total, 3 up, 3 in
Dec 05 01:15:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 05 01:15:12 compute-0 ceph-mon[192914]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 05 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 05 01:15:12 compute-0 kind_chaum[215049]: enabled application 'rbd' on pool 'vms'
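
The POOL_APP_NOT_ENABLED health warning is raised for every pool until it is tagged with an application, which is what these tasks now do one pool at a time; the count reported at 01:15:08 was 4 and rises to 5 just below, because pool creation ran ahead of the enable calls. A condensed sketch of the same cleanup, using the pool names from this log ('cephfs' is the standard application tag for filesystem pools):

  # Tag the remaining pools so the health warning can clear.
  ceph osd pool application enable backups rbd
  ceph osd pool application enable images rbd
  ceph osd pool application enable cephfs.cephfs.meta cephfs
  ceph osd pool application enable cephfs.cephfs.data cephfs
  ceph health detail
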
Dec 05 01:15:12 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 05 01:15:12 compute-0 systemd[1]: libpod-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope: Deactivated successfully.
Dec 05 01:15:12 compute-0 podman[215034]: 2025-12-05 01:15:12.586550928 +0000 UTC m=+1.493603530 container died 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94-merged.mount: Deactivated successfully.
Dec 05 01:15:12 compute-0 podman[215034]: 2025-12-05 01:15:12.668560635 +0000 UTC m=+1.575613227 container remove 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:12 compute-0 systemd[1]: libpod-conmon-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope: Deactivated successfully.
Dec 05 01:15:12 compute-0 sudo[215031]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:12 compute-0 sudo[215107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfwjhsprwppvlnlwfowbwugsgvvfeksu ; /usr/bin/python3'
Dec 05 01:15:12 compute-0 sudo[215107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:13 compute-0 python3[215109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.219642406 +0000 UTC m=+0.097290477 container create cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.183160709 +0000 UTC m=+0.060808850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:13 compute-0 systemd[1]: Started libpod-conmon-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope.
Dec 05 01:15:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.408223157 +0000 UTC m=+0.285871208 container init cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.425076679 +0000 UTC m=+0.302724760 container start cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.432845687 +0000 UTC m=+0.310493818 container attach cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:15:13 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 05 01:15:13 compute-0 ceph-mon[192914]: osdmap e29: 3 total, 3 up, 3 in
Dec 05 01:15:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec 05 01:15:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 05 01:15:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 05 01:15:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 05 01:15:14 compute-0 ceph-mon[192914]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 05 01:15:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 05 01:15:14 compute-0 magical_murdock[215125]: enabled application 'rbd' on pool 'volumes'
Dec 05 01:15:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 05 01:15:14 compute-0 systemd[1]: libpod-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope: Deactivated successfully.
Dec 05 01:15:14 compute-0 podman[215110]: 2025-12-05 01:15:14.588125343 +0000 UTC m=+1.465773384 container died cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd-merged.mount: Deactivated successfully.
Dec 05 01:15:14 compute-0 podman[215110]: 2025-12-05 01:15:14.648968803 +0000 UTC m=+1.526616844 container remove cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:15:14 compute-0 sudo[215107]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:14 compute-0 systemd[1]: libpod-conmon-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope: Deactivated successfully.
Dec 05 01:15:14 compute-0 sudo[215186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvsgoimiavpjrfhttbiknmemyoexbzfv ; /usr/bin/python3'
Dec 05 01:15:14 compute-0 sudo[215186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:15 compute-0 python3[215188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.178717483 +0000 UTC m=+0.083470506 container create 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.145502984 +0000 UTC m=+0.050256087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:15 compute-0 systemd[1]: Started libpod-conmon-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope.
Dec 05 01:15:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.323538023 +0000 UTC m=+0.228291136 container init 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.339636454 +0000 UTC m=+0.244389517 container start 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.347231517 +0000 UTC m=+0.251984570 container attach 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:15:15 compute-0 ceph-mon[192914]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 05 01:15:15 compute-0 ceph-mon[192914]: osdmap e30: 3 total, 3 up, 3 in
Dec 05 01:15:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec 05 01:15:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:15:16
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 05 01:15:16 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 05 01:15:16 compute-0 ceph-mon[192914]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 05 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 05 01:15:16 compute-0 distracted_matsumoto[215204]: enabled application 'rbd' on pool 'backups'
Dec 05 01:15:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 05 01:15:16 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 05 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:16 compute-0 systemd[1]: libpod-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope: Deactivated successfully.
Dec 05 01:15:16 compute-0 podman[215189]: 2025-12-05 01:15:16.63526964 +0000 UTC m=+1.540022703 container died 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b-merged.mount: Deactivated successfully.
Dec 05 01:15:16 compute-0 podman[215189]: 2025-12-05 01:15:16.742765088 +0000 UTC m=+1.647518111 container remove 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:16 compute-0 systemd[1]: libpod-conmon-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope: Deactivated successfully.
Dec 05 01:15:16 compute-0 sudo[215186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:16 compute-0 sudo[215262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxdrootmplmpzerytxcttmoikibqicuo ; /usr/bin/python3'
Dec 05 01:15:16 compute-0 sudo[215262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:17 compute-0 python3[215264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.182321712 +0000 UTC m=+0.091817780 container create bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.142160697 +0000 UTC m=+0.051656835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:17 compute-0 systemd[1]: Started libpod-conmon-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope.
Dec 05 01:15:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.362309714 +0000 UTC m=+0.271805832 container init bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.381018805 +0000 UTC m=+0.290514863 container start bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.390999872 +0000 UTC m=+0.300495980 container attach bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 05 01:15:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 05 01:15:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 05 01:15:17 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 05 01:15:17 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 05 01:15:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:17 compute-0 ceph-mon[192914]: osdmap e31: 3 total, 3 up, 3 in
Dec 05 01:15:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 05 01:15:18 compute-0 elated_cori[215281]: enabled application 'rbd' on pool 'images'
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 05 01:15:18 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.372560501s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 57.631557465s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:18 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.372560501s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 57.631557465s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:18 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 05 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:18 compute-0 ceph-mon[192914]: osdmap e32: 3 total, 3 up, 3 in
Dec 05 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:18 compute-0 podman[215265]: 2025-12-05 01:15:18.928264031 +0000 UTC m=+1.837760099 container died bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:18 compute-0 systemd[1]: libpod-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope: Deactivated successfully.
Dec 05 01:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a-merged.mount: Deactivated successfully.
Dec 05 01:15:19 compute-0 podman[215265]: 2025-12-05 01:15:19.039582943 +0000 UTC m=+1.949078991 container remove bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:19 compute-0 systemd[1]: libpod-conmon-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope: Deactivated successfully.
Dec 05 01:15:19 compute-0 sudo[215262]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:19 compute-0 sudo[215341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvljjpmqpmrhreippjrxgkjnzojttlqi ; /usr/bin/python3'
Dec 05 01:15:19 compute-0 sudo[215341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:19 compute-0 python3[215343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.599496291 +0000 UTC m=+0.100652857 container create 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.555083092 +0000 UTC m=+0.056239718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:19 compute-0 systemd[1]: Started libpod-conmon-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope.
Dec 05 01:15:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:19 compute-0 podman[215357]: 2025-12-05 01:15:19.77339208 +0000 UTC m=+0.180834406 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.791298169 +0000 UTC m=+0.292454765 container init 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.809856866 +0000 UTC m=+0.311013412 container start 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.818797316 +0000 UTC m=+0.319953952 container attach 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 05 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 05 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:19 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 05 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:19 compute-0 ceph-mon[192914]: osdmap e33: 3 total, 3 up, 3 in
Dec 05 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 05 01:15:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 05 01:15:19 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 05 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.931506157s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active pruub 66.275405884s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.931506157s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown pruub 66.275405884s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 05 01:15:20 compute-0 podman[215402]: 2025-12-05 01:15:20.754254333 +0000 UTC m=+0.156137894 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:15:20 compute-0 podman[215403]: 2025-12-05 01:15:20.795651362 +0000 UTC m=+0.194285536 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 05 01:15:20 compute-0 recursing_faraday[215377]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 05 01:15:20 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 05 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:20 compute-0 ceph-mon[192914]: osdmap e34: 3 total, 3 up, 3 in
Dec 05 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-mon[192914]: pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.460465431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 55.776924133s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:20 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.460465431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown pruub 55.776924133s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=33/35 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:20 compute-0 systemd[1]: libpod-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope: Deactivated successfully.
Dec 05 01:15:20 compute-0 podman[215344]: 2025-12-05 01:15:20.977133093 +0000 UTC m=+1.478289669 container died 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06-merged.mount: Deactivated successfully.
Dec 05 01:15:21 compute-0 podman[215344]: 2025-12-05 01:15:21.062632633 +0000 UTC m=+1.563789179 container remove 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:15:21 compute-0 systemd[1]: libpod-conmon-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope: Deactivated successfully.
Dec 05 01:15:21 compute-0 sudo[215341]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec 05 01:15:21 compute-0 sudo[215485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfgxbvqylfcqpjcqcugjhtficudgimgd ; /usr/bin/python3'
Dec 05 01:15:21 compute-0 sudo[215485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:21 compute-0 python3[215487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.565048712 +0000 UTC m=+0.057031059 container create 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.542655072 +0000 UTC m=+0.034637439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:21 compute-0 systemd[1]: Started libpod-conmon-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope.
Dec 05 01:15:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.743683217 +0000 UTC m=+0.235665664 container init 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.756956632 +0000 UTC m=+0.248938979 container start 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.76210626 +0000 UTC m=+0.254088707 container attach 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 05 01:15:21 compute-0 ceph-mon[192914]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 05 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:21 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 05 01:15:21 compute-0 ceph-mon[192914]: osdmap e35: 3 total, 3 up, 3 in
Dec 05 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:15:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 05 01:15:21 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 05 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.096145630s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 74.387512207s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.096145630s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown pruub 74.387512207s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 05 01:15:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 05 01:15:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 05 01:15:22 compute-0 podman[215527]: 2025-12-05 01:15:22.742838371 +0000 UTC m=+0.150289487 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 05 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 05 01:15:22 compute-0 sleepy_spence[215502]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 05 01:15:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 05 01:15:23 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37 pruub=12.528459549s) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active pruub 66.495002747s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:15:23 compute-0 ceph-mon[192914]: osdmap e36: 3 total, 3 up, 3 in
Dec 05 01:15:23 compute-0 ceph-mon[192914]: pgmap v90: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:23 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 05 01:15:23 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37 pruub=12.528459549s) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown pruub 66.495002747s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37 pruub=10.464612007s) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active pruub 70.495025635s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37 pruub=10.464612007s) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown pruub 70.495025635s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=35/37 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:23 compute-0 systemd[1]: libpod-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope: Deactivated successfully.
Dec 05 01:15:23 compute-0 podman[215488]: 2025-12-05 01:15:23.044389729 +0000 UTC m=+1.536372156 container died 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155-merged.mount: Deactivated successfully.
Dec 05 01:15:23 compute-0 podman[215488]: 2025-12-05 01:15:23.141540991 +0000 UTC m=+1.633523378 container remove 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:15:23 compute-0 systemd[1]: libpod-conmon-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope: Deactivated successfully.
Dec 05 01:15:23 compute-0 sudo[215485]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:23 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 05 01:15:23 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 05 01:15:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 05 01:15:24 compute-0 ceph-mon[192914]: 3.1 scrub starts
Dec 05 01:15:24 compute-0 ceph-mon[192914]: 3.1 scrub ok
Dec 05 01:15:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:15:24 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 05 01:15:24 compute-0 ceph-mon[192914]: osdmap e37: 3 total, 3 up, 3 in
Dec 05 01:15:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 05 01:15:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=37/38 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
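The thirty state<Started/Primary/Active> lines above are the PG state machine reporting that every placement group in pools 6 and 7 finished activating after the epoch-38 osdmap change. A stdlib-only Python filter is enough to summarize a burst like this from a saved journal dump; the regex keys off the pg[<pool>.<id>( token exactly as it appears in these messages, and reading the excerpt from stdin is an assumption about how it is fed in.

    import collections
    import re
    import sys

    # Capture the pool id from tokens such as "pg[6.11(" or "pg[7.6(".
    PG = re.compile(r"pg\[(\d+)\.[0-9a-f]+\(")

    counts = collections.Counter()
    for line in sys.stdin:
        if "Activating complete" not in line:
            continue
        m = PG.search(line)
        if m:
            counts[int(m.group(1))] += 1

    for pool in sorted(counts):
        print(f"pool {pool}: {counts[pool]} PGs activated")

Fed only the lines above, it would report 1 PG activated for pool 7 and 29 for pool 6.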
Dec 05 01:15:24 compute-0 python3[215633]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:15:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 05 01:15:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 05 01:15:24 compute-0 podman[215702]: 2025-12-05 01:15:24.705391161 +0000 UTC m=+0.111937299 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Dec 05 01:15:24 compute-0 python3[215705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897323.8143196-37123-56126538119197/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:15:24 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 05 01:15:24 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 05 01:15:25 compute-0 ceph-mon[192914]: 4.1 scrub starts
Dec 05 01:15:25 compute-0 ceph-mon[192914]: 4.1 scrub ok
Dec 05 01:15:25 compute-0 ceph-mon[192914]: osdmap e38: 3 total, 3 up, 3 in
Dec 05 01:15:25 compute-0 ceph-mon[192914]: pgmap v93: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 01:15:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
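POOL_APP_NOT_ENABLED is raised while any pool lacks an application tag (rbd, cephfs, or rgw) and clears, as here, once the last pool is tagged, at which point the mon logs "Cluster is now healthy". To confirm the structured health report agrees, the sketch below reuses the containerized ceph CLI the same way the podman invocations logged further down do; podman, the quay.io/ceph/ceph:v18 image, and admin credentials under /etc/ceph on the host are assumptions.

    import json
    import subprocess

    # Containerized ceph CLI, mirroring the deployment's own podman usage.
    CEPH = ["podman", "run", "--rm", "--net=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18"]

    report = json.loads(subprocess.check_output(CEPH + ["health", "detail", "-f", "json"]))
    print(report["status"])  # HEALTH_OK once the warning has cleared
    for name, check in report.get("checks", {}).items():
        print("active check:", name, check["summary"]["message"])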
Dec 05 01:15:25 compute-0 sudo[215824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcubpglfsjxfqtfliqckqpygetvwtqiw ; /usr/bin/python3'
Dec 05 01:15:25 compute-0 sudo[215824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:25 compute-0 python3[215826]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:15:25 compute-0 sudo[215824]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Dec 05 01:15:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Dec 05 01:15:25 compute-0 sudo[215899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrigfyluxbviufdcuyvudukcoptnopdm ; /usr/bin/python3'
Dec 05 01:15:25 compute-0 sudo[215899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:26 compute-0 ceph-mon[192914]: 3.2 scrub starts
Dec 05 01:15:26 compute-0 ceph-mon[192914]: 3.2 scrub ok
Dec 05 01:15:26 compute-0 ceph-mon[192914]: 2.1 scrub starts
Dec 05 01:15:26 compute-0 ceph-mon[192914]: 2.1 scrub ok
Dec 05 01:15:26 compute-0 ceph-mon[192914]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 05 01:15:26 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec 05 01:15:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:26 compute-0 python3[215901]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897325.1030889-37137-125420570080905/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a07363e650807b3400a8fed2d84c9a8d6bf803ad backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:15:26 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 9 completed events
Dec 05 01:15:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:15:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:26 compute-0 sudo[215899]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:26 compute-0 sudo[215949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltkppzmrnhslaqrqyokduomzcseysgx ; /usr/bin/python3'
Dec 05 01:15:26 compute-0 sudo[215949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 05 01:15:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 05 01:15:26 compute-0 python3[215951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.870156169 +0000 UTC m=+0.077322362 container create 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:26 compute-0 systemd[194721]: Starting Mark boot as successful...
Dec 05 01:15:26 compute-0 systemd[1]: Started libpod-conmon-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope.
Dec 05 01:15:26 compute-0 systemd[194721]: Finished Mark boot as successful.
Dec 05 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.841963204 +0000 UTC m=+0.049129417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.987708208 +0000 UTC m=+0.194874431 container init 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.003329356 +0000 UTC m=+0.210495559 container start 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.008654849 +0000 UTC m=+0.215821062 container attach 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:15:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:27 compute-0 ceph-mon[192914]: 2.2 deep-scrub starts
Dec 05 01:15:27 compute-0 ceph-mon[192914]: 2.2 deep-scrub ok
Dec 05 01:15:27 compute-0 ceph-mon[192914]: pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:27 compute-0 ceph-mon[192914]: 3.3 scrub starts
Dec 05 01:15:27 compute-0 ceph-mon[192914]: 3.3 scrub ok
Dec 05 01:15:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 05 01:15:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 05 01:15:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 05 01:15:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 01:15:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 01:15:27 compute-0 festive_mendeleev[215968]: 
Dec 05 01:15:27 compute-0 festive_mendeleev[215968]: [global]
Dec 05 01:15:27 compute-0 festive_mendeleev[215968]:         fsid = cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:15:27 compute-0 festive_mendeleev[215968]:         mon_host = 192.168.122.100
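ceph config assimilate-conf -i <file> ingests an ini-style configuration and stores every option it can in the monitors' central config database; options that must stay in a local ceph.conf are echoed back, which is why the container prints only a [global] stanza with fsid and mon_host. The actual /home/ceph-admin/assimilate_ceph.conf is not logged, so the sample below is a hypothetical stand-in that only illustrates the ini shape being consumed (the rgw option is invented for the example; fsid and mon_host are copied from the output above).

    import configparser

    # Hypothetical stand-in for assimilate_ceph.conf, not the real file.
    SAMPLE = "\n".join([
        "[global]",
        "fsid = cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "mon_host = 192.168.122.100",
        "",
        "[client.rgw]",
        "; illustrative option, not taken from the log",
        "rgw_keystone_url = http://keystone.example:5000",
    ])

    cfg = configparser.ConfigParser()
    cfg.read_string(SAMPLE)
    for section in cfg.sections():
        for key, value in cfg.items(section):
            print(f"{section}: {key} = {value}")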
Dec 05 01:15:27 compute-0 systemd[1]: libpod-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope: Deactivated successfully.
Dec 05 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.651772245 +0000 UTC m=+0.858938478 container died 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad-merged.mount: Deactivated successfully.
Dec 05 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.746472872 +0000 UTC m=+0.953639085 container remove 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:15:27 compute-0 systemd[1]: libpod-conmon-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope: Deactivated successfully.
Dec 05 01:15:27 compute-0 sudo[215949]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:27 compute-0 sudo[216008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:27 compute-0 podman[215991]: 2025-12-05 01:15:27.786792522 +0000 UTC m=+0.182597712 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:15:27 compute-0 sudo[216008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:27 compute-0 sudo[216008]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:27 compute-0 sudo[216049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:27 compute-0 sudo[216049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:27 compute-0 sudo[216049]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:27 compute-0 sudo[216102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdcrnxvhgrqrgkiwpvbdxkbfdccefpjs ; /usr/bin/python3'
Dec 05 01:15:27 compute-0 sudo[216102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:28 compute-0 sudo[216095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:28 compute-0 sudo[216095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:28 compute-0 sudo[216095]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
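The six osd pool set ... pgp_num_actual commands are the mgr ramping placement: after a pool's pg_num is raised, the mgr steps pgp_num (the PG count actually used when mapping objects to OSDs) up until it matches pg_num, and pgp_num_actual is the variable it sets on each step. Reading both values back per pool shows the ramp converging; as above, the containerized CLI is reused, so podman and host /etc/ceph are assumptions.

    import json
    import subprocess

    CEPH = ["podman", "run", "--rm", "--net=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18"]

    # Pool names taken from the mon audit messages above.
    for pool in ("backups", "images", "vms", "volumes"):
        for var in ("pg_num", "pgp_num"):
            out = subprocess.check_output(CEPH + ["osd", "pool", "get", pool, var, "-f", "json"])
            print(json.loads(out))  # e.g. {"pool": "volumes", "pg_num": 32}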
Dec 05 01:15:28 compute-0 sudo[216125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:15:28 compute-0 python3[216111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:28 compute-0 sudo[216125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.233450397 +0000 UTC m=+0.062697541 container create ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:28 compute-0 ceph-mon[192914]: 4.2 scrub starts
Dec 05 01:15:28 compute-0 ceph-mon[192914]: 4.2 scrub ok
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:15:28 compute-0 systemd[1]: Started libpod-conmon-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope.
Dec 05 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.210158903 +0000 UTC m=+0.039406067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.365482193 +0000 UTC m=+0.194729357 container init ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.385272903 +0000 UTC m=+0.214520047 container start ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.389383864 +0000 UTC m=+0.218631058 container attach ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:15:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 05 01:15:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304944992s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322792053s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304885864s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322792053s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357736588s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375885010s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357615471s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375858307s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357591629s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375858307s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357636452s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375885010s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304480553s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322868347s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304277420s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322738647s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304409981s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322868347s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304246902s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322738647s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304057121s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322738647s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304033279s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322738647s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304913521s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.323722839s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304894447s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.323722839s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303823471s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322731018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357022285s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375934601s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356994629s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375934601s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303780556s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322731018s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356890678s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375877380s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356790543s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375877380s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356760025s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375900269s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303393364s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322540283s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303343773s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322540283s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356688499s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375900269s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356653214s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375949860s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356674194s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375972748s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356656075s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375972748s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356337547s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375995636s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.302814484s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322509766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356300354s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375995636s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.302789688s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322509766s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356621742s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375949860s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301838875s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321701050s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301820755s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321701050s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355987549s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375980377s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355951309s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375980377s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355987549s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376041412s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355964661s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376041412s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303320885s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.323463440s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301454544s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321678162s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301431656s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321678162s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303228378s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.323463440s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301286697s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321655273s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301271439s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321655273s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355638504s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376068115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355610847s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376068115s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355605125s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376102448s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300979614s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321487427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355587959s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376102448s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300941467s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321495056s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301899910s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322502136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300894737s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321495056s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301883698s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322502136s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300880432s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321487427s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355349541s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376091003s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300849915s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321617126s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355325699s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376091003s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300819397s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321617126s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355272293s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376148224s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355251312s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376132965s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355217934s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376132965s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355228424s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376148224s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300493240s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321487427s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300465584s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321487427s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355172157s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376213074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355142593s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376213074s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300222397s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321434021s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300314903s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321472168s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300189018s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321434021s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300136566s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321418762s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300115585s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321418762s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300202370s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321472168s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354790688s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376213074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299877167s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321311951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354733467s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376213074s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301344872s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322875977s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301317215s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322875977s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354547501s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376239777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355680466s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.377380371s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354521751s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376239777s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355651855s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.377380371s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299545288s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321334839s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356031418s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.377906799s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299497604s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321334839s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356008530s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.377906799s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299395561s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321311951s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.364226341s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.055831909s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.364199638s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.055831909s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392975807s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.084739685s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392960548s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.084739685s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403947830s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095840454s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403932571s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095840454s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392865181s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.084884644s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392849922s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.084884644s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363949776s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056091309s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363932610s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056091309s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363529205s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.055809021s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363513947s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.055809021s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403358459s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095748901s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403343201s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095748901s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363568306s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056037903s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363554955s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056037903s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363934517s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056488037s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363919258s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056488037s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403327942s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095993042s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403310776s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095993042s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363390923s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056144714s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363377571s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056144714s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363306999s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056182861s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363291740s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056182861s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403141022s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096138000s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403121948s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096138000s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363162041s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056259155s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363145828s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056259155s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403162003s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096397400s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403143883s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096397400s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363057137s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056381226s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363042831s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056381226s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402201653s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096275330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402175903s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096275330s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402244568s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096427917s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402199745s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096427917s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362100601s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056411743s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362083435s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056411743s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401975632s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096343994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362030029s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056427002s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362015724s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056427002s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401942253s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096343994s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401949883s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096481323s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401930809s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096481323s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401866913s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096496582s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401853561s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096496582s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361931801s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056587219s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361819267s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056533813s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361874580s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056587219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361803055s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056533813s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401807785s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096626282s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401793480s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096626282s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361680984s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056594849s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361666679s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056594849s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361672401s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056632996s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361646652s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056632996s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361621857s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056655884s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361604691s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056655884s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401548386s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096679688s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361520767s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056678772s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361498833s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056678772s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401518822s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096679688s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401325226s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096733093s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401301384s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096733093s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361418724s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057006836s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361398697s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401041031s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096755981s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401024818s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096755981s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401050568s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096855164s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361319542s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057121277s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401032448s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096855164s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361282349s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057121277s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.399742126s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.097023010s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.399720192s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.097023010s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.359388351s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057029724s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.359352112s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057029724s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.398802757s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096832275s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.398769379s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096832275s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.381144524s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028083801s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.381120682s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028083801s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287405968s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934448242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287388802s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934448242s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287194252s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934394836s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287174225s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934394836s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380805016s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028121948s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380765915s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028121948s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286743164s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934387207s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286722183s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934387207s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286543846s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934341431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286526680s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934341431s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380309105s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028228760s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380291939s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028228760s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285936356s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934265137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379876137s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028305054s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285881996s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934265137s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379821777s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028305054s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285408020s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934082031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285367966s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934082031s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285311699s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934066772s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285275459s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934066772s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285237312s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934127808s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285213470s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934127808s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379448891s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028373718s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379390717s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028381348s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379405975s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028373718s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379357338s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028381348s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379235268s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028465271s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379196167s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028465271s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284764290s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934135437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284743309s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934135437s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378978729s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028442383s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378941536s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028442383s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378835678s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028564453s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284173012s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933906555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378814697s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028564453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284139633s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933906555s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378671646s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028564453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378643990s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028564453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.283966064s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933914185s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.283926964s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933914185s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284580231s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934646606s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284555435s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934646606s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378441811s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028648376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378406525s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028648376s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378314972s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028694153s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378288269s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028694153s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.388133049s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037796021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.387015343s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037796021s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.282447815s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933822632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.282421112s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933822632s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376772881s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028694153s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281116486s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933090210s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376710892s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028694153s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280876160s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933074951s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280844688s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933074951s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376416206s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028823853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376382828s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028823853s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280618668s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933090210s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280435562s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933052063s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280406952s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933052063s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280331612s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933029175s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280296326s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933029175s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376008034s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028816223s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375980377s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028816223s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375961304s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028892517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375928879s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028892517s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281331062s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934333801s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281299591s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934333801s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.384404182s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037620544s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.384373665s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037620544s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.279585838s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933006287s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.279357910s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933006287s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383955002s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037704468s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383920670s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037704468s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278944969s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.932914734s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278918266s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.932914734s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383519173s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037635803s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383499146s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037635803s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278614998s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.932945251s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278591156s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.932945251s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:28 compute-0 podman[216253]: 2025-12-05 01:15:28.907878443 +0000 UTC m=+0.088239055 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:15:29 compute-0 podman[216253]: 2025-12-05 01:15:29.004573253 +0000 UTC m=+0.184933855 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1578651117' entity='client.admin' 
Dec 05 01:15:29 compute-0 unruffled_carver[216167]: set ssl_option
Dec 05 01:15:29 compute-0 systemd[1]: libpod-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope: Deactivated successfully.
Dec 05 01:15:29 compute-0 podman[216149]: 2025-12-05 01:15:29.099777893 +0000 UTC m=+0.929025037 container died ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9-merged.mount: Deactivated successfully.
Dec 05 01:15:29 compute-0 podman[216149]: 2025-12-05 01:15:29.15940204 +0000 UTC m=+0.988649184 container remove ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:15:29 compute-0 systemd[1]: libpod-conmon-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope: Deactivated successfully.
Dec 05 01:15:29 compute-0 sudo[216102]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:29 compute-0 ceph-mon[192914]: pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:29 compute-0 ceph-mon[192914]: 4.3 scrub starts
Dec 05 01:15:29 compute-0 ceph-mon[192914]: 4.3 scrub ok
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:15:29 compute-0 ceph-mon[192914]: osdmap e39: 3 total, 3 up, 3 in
Dec 05 01:15:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1578651117' entity='client.admin' 
Dec 05 01:15:29 compute-0 sudo[216374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uogwnmogrctxmjqmbanlqfirkmljnrga ; /usr/bin/python3'
Dec 05 01:15:29 compute-0 sudo[216374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:29 compute-0 python3[216380]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:29 compute-0 sudo[216125]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4f1d045a-418e-4ab9-b318-bbb001c4ef3a does not exist
Dec 05 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3866c7e1-196f-4fb3-b294-f73adfaa9aa4 does not exist
Dec 05 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6dec5ec6-d67c-4585-b1bd-0ffdf2fd0d5e does not exist
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.605660904 +0000 UTC m=+0.054167662 container create 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:29 compute-0 systemd[1]: Started libpod-conmon-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope.
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 05 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 05 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.584776674 +0000 UTC m=+0.033283442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 05 01:15:29 compute-0 sudo[216425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 sudo[216425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:29 compute-0 sudo[216425]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.729018818 +0000 UTC m=+0.177525566 container init 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 05 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.743614049 +0000 UTC m=+0.192120817 container start 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:15:29 compute-0 podman[158197]: time="2025-12-05T01:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.761310943 +0000 UTC m=+0.209817721 container attach 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30687 "" "Go-http-client/1.1"
Dec 05 01:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6250 "" "Go-http-client/1.1"
Dec 05 01:15:29 compute-0 sudo[216455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:29 compute-0 sudo[216455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:29 compute-0 sudo[216455]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:29 compute-0 sudo[216481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:29 compute-0 sudo[216481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:29 compute-0 sudo[216481]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:29 compute-0 sudo[216506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:15:29 compute-0 sudo[216506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:30 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 05 01:15:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:30 compute-0 unruffled_sammet[216450]: Scheduled rgw.rgw update...
Dec 05 01:15:30 compute-0 systemd[1]: libpod-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope: Deactivated successfully.
Dec 05 01:15:30 compute-0 podman[216412]: 2025-12-05 01:15:30.370122751 +0000 UTC m=+0.818629539 container died 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea-merged.mount: Deactivated successfully.
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.444162394 +0000 UTC m=+0.065470875 container create e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:30 compute-0 podman[216412]: 2025-12-05 01:15:30.449397454 +0000 UTC m=+0.897904202 container remove 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:15:30 compute-0 systemd[1]: libpod-conmon-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope: Deactivated successfully.
Dec 05 01:15:30 compute-0 sudo[216374]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:30 compute-0 systemd[1]: Started libpod-conmon-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope.
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.403439083 +0000 UTC m=+0.024747594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.554381586 +0000 UTC m=+0.175690087 container init e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.566286075 +0000 UTC m=+0.187594566 container start e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:30 compute-0 romantic_kare[216616]: 167 167
Dec 05 01:15:30 compute-0 systemd[1]: libpod-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope: Deactivated successfully.
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.577220808 +0000 UTC m=+0.198529319 container attach e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: osdmap e40: 3 total, 3 up, 3 in
Dec 05 01:15:30 compute-0 ceph-mon[192914]: 3.4 scrub starts
Dec 05 01:15:30 compute-0 ceph-mon[192914]: 3.4 scrub ok
Dec 05 01:15:30 compute-0 ceph-mon[192914]: pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:30 compute-0 ceph-mon[192914]: Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.585851069 +0000 UTC m=+0.207159600 container died e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7c56e79f87e082195a02808e68476d5e50026a5ffaa0b558b96713fccffe29-merged.mount: Deactivated successfully.
Dec 05 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.637097612 +0000 UTC m=+0.258406093 container remove e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:30 compute-0 systemd[1]: libpod-conmon-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope: Deactivated successfully.
Dec 05 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.84756659 +0000 UTC m=+0.088984135 container create b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:15:30 compute-0 systemd[1]: Started libpod-conmon-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope.
Dec 05 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.812536571 +0000 UTC m=+0.053954076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.968103038 +0000 UTC m=+0.209520593 container init b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.98012765 +0000 UTC m=+0.221545165 container start b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.984383075 +0000 UTC m=+0.225800610 container attach b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:15:31 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event a81ac3fe-503f-4ecd-a35d-c802451fb572 (Global Recovery Event) in 10 seconds
Dec 05 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
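The openstack_network_exporter errors above come from the collector probing OVN and OVS daemons through their unix control sockets; on a compute node that runs neither ovn-northd nor a local ovsdb-server, those socket files do not exist, so every scrape fails the same way. A minimal sketch of the kind of probe involved (daemon names as in the errors; exact socket locations vary by deployment and are an assumption here):

    # ovs-appctl/ovn-appctl resolve a daemon's punix control socket under the
    # run directory; if the socket file is missing, the call fails as logged.
    ovs-appctl -t ovs-vswitchd version                  # works where vswitchd runs
    ovn-appctl -t ovn-northd status                     # fails here: no northd socket
    ovs-appctl -t ovsdb-server ovsdb-server/list-dbs    # fails here: no local ovs db server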
Dec 05 01:15:31 compute-0 python3[216733]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:15:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 05 01:15:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 05 01:15:31 compute-0 python3[216805]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897331.1680703-37178-57175415099049/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:15:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:32 compute-0 podman[216819]: 2025-12-05 01:15:32.167591178 +0000 UTC m=+0.137365961 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
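The health_status=healthy event above is podman's healthcheck machinery running the test command from config_data ('/openstack/healthcheck node_exporter'). The same probe can be triggered by hand; a sketch, not part of the job itself:

    # Run the container's configured healthcheck once; exit 0 means healthy.
    podman healthcheck run node_exporter && echo healthy
    # List containers currently reporting a healthy status.
    podman ps --filter health=healthy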
Dec 05 01:15:32 compute-0 modest_feistel[216653]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:15:32 compute-0 modest_feistel[216653]: --> relative data size: 1.0
Dec 05 01:15:32 compute-0 modest_feistel[216653]: --> All data devices are unavailable
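modest_feistel is a cephadm-launched ceph-volume batch run against the drive group: it sees 3 LVM data devices and reports "All data devices are unavailable" because each LV already carries OSD tags from the earlier deployment, so there is nothing new to create. One way to confirm that from the host (a sketch; the invocation mirrors the cephadm wrapper used elsewhere in this log):

    # LVs already prepared as OSDs show as unavailable in the inventory.
    sudo cephadm ceph-volume -- inventory --format json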
Dec 05 01:15:32 compute-0 systemd[1]: libpod-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Deactivated successfully.
Dec 05 01:15:32 compute-0 systemd[1]: libpod-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Consumed 1.236s CPU time.
Dec 05 01:15:32 compute-0 podman[216874]: 2025-12-05 01:15:32.369041974 +0000 UTC m=+0.051908361 container died b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817-merged.mount: Deactivated successfully.
Dec 05 01:15:32 compute-0 sudo[216910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgxqvpimnzwevsiqgffsuttsabzqlef ; /usr/bin/python3'
Dec 05 01:15:32 compute-0 sudo[216910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:32 compute-0 podman[216874]: 2025-12-05 01:15:32.468460497 +0000 UTC m=+0.151326844 container remove b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:15:32 compute-0 systemd[1]: libpod-conmon-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Deactivated successfully.
Dec 05 01:15:32 compute-0 sudo[216506]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:32 compute-0 sudo[216913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:32 compute-0 sudo[216913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:32 compute-0 sudo[216913]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:32 compute-0 python3[216912]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                            _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
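This task drives 'ceph fs volume create cephfs' through a one-shot ceph container. Judging from the mon audit entries logged below, the volume create expands into pool creation, 'fs new', and saving an MDS service spec; roughly the following manual sequence (a sketch of the equivalent CLI steps, not taken from the job itself):

    # What 'fs volume create cephfs --placement=compute-0' dispatches,
    # per the mon_command audit entries that follow:
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data
    ceph orch apply mds cephfs --placement="compute-0"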
Dec 05 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.806259746 +0000 UTC m=+0.083595500 container create 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:32 compute-0 sudo[216939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:32 compute-0 sudo[216939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:32 compute-0 sudo[216939]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.772557783 +0000 UTC m=+0.049893617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:32 compute-0 systemd[1]: Started libpod-conmon-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope.
Dec 05 01:15:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.948582538 +0000 UTC m=+0.225918382 container init 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:32 compute-0 sudo[216978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:32 compute-0 sudo[216978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:32 compute-0 sudo[216978]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.966329924 +0000 UTC m=+0.243665708 container start 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 05 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.97439966 +0000 UTC m=+0.251735554 container attach 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:15:33 compute-0 sudo[217007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:15:33 compute-0 sudo[217007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:33 compute-0 ceph-mon[192914]: 2.c scrub starts
Dec 05 01:15:33 compute-0 ceph-mon[192914]: 2.c scrub ok
Dec 05 01:15:33 compute-0 ceph-mon[192914]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 05 01:15:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 05 01:15:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:33 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 05 01:15:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192910]: 2025-12-05T01:15:33.601+0000 7f6c12c58640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 new map
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 print_map
                                            e2
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        2
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-05T01:15:33.603075+0000
                                            modified        2025-12-05T01:15:33.603118+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        
                                            up        {}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                             
                                             
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 05 01:15:33 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:33 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 05 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:33 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
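The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks are expected at this instant: 'fs new' has just created the filesystem (fsmap cephfs:0) and the mds.cephfs spec has only been saved, so no MDS daemon is running yet. Both should clear once cephadm deploys the daemon; a quick way to watch for that (sketch):

    # fsmap shows 0 up MDS daemons until the scheduled mds.cephfs starts.
    ceph fs status cephfs
    ceph health detail | grep -E 'MDS_ALL_DOWN|MDS_UP_LESS_THAN_MAX'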
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.638610072 +0000 UTC m=+0.079516721 container create 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:15:33 compute-0 systemd[1]: libpod-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope: Deactivated successfully.
Dec 05 01:15:33 compute-0 podman[216938]: 2025-12-05 01:15:33.672618283 +0000 UTC m=+0.949954067 container died 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.597717497 +0000 UTC m=+0.038624156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:33 compute-0 systemd[1]: Started libpod-conmon-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope.
Dec 05 01:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7-merged.mount: Deactivated successfully.
Dec 05 01:15:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:33 compute-0 podman[216938]: 2025-12-05 01:15:33.756774117 +0000 UTC m=+1.034109851 container remove 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:15:33 compute-0 systemd[1]: libpod-conmon-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope: Deactivated successfully.
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.774377059 +0000 UTC m=+0.215283688 container init 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.784775917 +0000 UTC m=+0.225682526 container start 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.790555562 +0000 UTC m=+0.231462211 container attach 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:33 compute-0 systemd[1]: libpod-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope: Deactivated successfully.
Dec 05 01:15:33 compute-0 awesome_buck[217115]: 167 167
Dec 05 01:15:33 compute-0 conmon[217115]: conmon 27c9858eae97c91ce3e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope/container/memory.events
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.795874805 +0000 UTC m=+0.236781414 container died 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:15:33 compute-0 sudo[216910]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8d2580085c10a2aa179d5e2e433f036500b39e879518d773625da04e353c4c-merged.mount: Deactivated successfully.
Dec 05 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.845535495 +0000 UTC m=+0.286442104 container remove 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:15:33 compute-0 systemd[1]: libpod-conmon-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope: Deactivated successfully.
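The short-lived awesome_buck container whose only output was "167 167" (above) matches the probe cephadm makes to learn the ceph user's uid and gid inside the image before laying down files; 167:167 is the ceph user and group in these images. A hedged reconstruction of that probe, not copied from the log:

    # Assumed reconstruction: read the uid/gid of /var/lib/ceph in the image.
    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      -c '%u %g' /var/lib/ceph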
Dec 05 01:15:34 compute-0 sudo[217160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfeoftjitwumwzzvszlfzegybhicnaar ; /usr/bin/python3'
Dec 05 01:15:34 compute-0 sudo[217160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.149334123 +0000 UTC m=+0.097247496 container create ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:15:34 compute-0 ceph-mon[192914]: 4.6 scrub starts
Dec 05 01:15:34 compute-0 ceph-mon[192914]: 4.6 scrub ok
Dec 05 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 05 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 05 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 05 01:15:34 compute-0 ceph-mon[192914]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 05 01:15:34 compute-0 ceph-mon[192914]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 05 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 05 01:15:34 compute-0 ceph-mon[192914]: osdmap e41: 3 total, 3 up, 3 in
Dec 05 01:15:34 compute-0 ceph-mon[192914]: fsmap cephfs:0
Dec 05 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:34 compute-0 python3[217167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
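This second one-shot container feeds the rendered spec to 'ceph orch apply --in-file /home/ceph_spec.yaml'; the "Scheduled mds.cephfs update..." line below is its output. The actual content of /tmp/ceph_mds.yml is not captured in this log; from the "Saving service mds.cephfs spec with placement compute-0" audit lines, a spec of roughly this shape is what such a command consumes (hypothetical reconstruction):

    # Sketch of the mds service spec; the real file's contents are not shown
    # in this log, only its placement is echoed by the mgr audit entries.
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF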
Dec 05 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.116326679 +0000 UTC m=+0.064240092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:34 compute-0 systemd[1]: Started libpod-conmon-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope.
Dec 05 01:15:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.289461936 +0000 UTC m=+0.237375349 container init ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.31015036 +0000 UTC m=+0.258063733 container start ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.315651708 +0000 UTC m=+0.263565081 container attach ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.322526472 +0000 UTC m=+0.105069495 container create a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.275565784 +0000 UTC m=+0.058108807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:34 compute-0 systemd[1]: Started libpod-conmon-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope.
Dec 05 01:15:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.471343718 +0000 UTC m=+0.253886711 container init a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.478852629 +0000 UTC m=+0.261395632 container start a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.484421139 +0000 UTC m=+0.266964132 container attach a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 05 01:15:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 05 01:15:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:35 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:35 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 05 01:15:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:35 compute-0 hopeful_newton[217186]: {
Dec 05 01:15:35 compute-0 hungry_saha[217204]: Scheduled mds.cephfs update...
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     "0": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "devices": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "/dev/loop3"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             ],
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_name": "ceph_lv0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_size": "21470642176",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "name": "ceph_lv0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "tags": {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.crush_device_class": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.encrypted": "0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_id": "0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.type": "block",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.vdo": "0"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             },
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "type": "block",
Dec 05 01:15:35 compute-0 ceph-mon[192914]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:35 compute-0 ceph-mon[192914]: Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:35 compute-0 ceph-mon[192914]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:35 compute-0 ceph-mon[192914]: 4.b scrub starts
Dec 05 01:15:35 compute-0 ceph-mon[192914]: 4.b scrub ok
Dec 05 01:15:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "vg_name": "ceph_vg0"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         }
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     ],
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     "1": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "devices": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "/dev/loop4"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             ],
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_name": "ceph_lv1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_size": "21470642176",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "name": "ceph_lv1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "tags": {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.crush_device_class": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.encrypted": "0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_id": "1",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.type": "block",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.vdo": "0"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             },
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "type": "block",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "vg_name": "ceph_vg1"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         }
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     ],
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     "2": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "devices": [
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "/dev/loop5"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             ],
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_name": "ceph_lv2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_size": "21470642176",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "name": "ceph_lv2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "tags": {
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.crush_device_class": "",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.encrypted": "0",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osd_id": "2",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.type": "block",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:                 "ceph.vdo": "0"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             },
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "type": "block",
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:             "vg_name": "ceph_vg2"
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:         }
Dec 05 01:15:35 compute-0 hopeful_newton[217186]:     ]
Dec 05 01:15:35 compute-0 hopeful_newton[217186]: }
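The JSON block emitted by hopeful_newton is the 'ceph-volume lvm list --format json' launched via cephadm at 01:15:33: one entry per OSD id, each a single block-type LV on a loop-backed VG. A sketch for reducing it to an osd-id to device map (the jq expression is illustrative):

    # Map each OSD id to its backing device, e.g. "0 /dev/loop3".
    sudo cephadm ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- \
      lvm list --format json | jq -r 'to_entries[] | "\(.key) \(.value[0].devices[0])"'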
Dec 05 01:15:35 compute-0 systemd[1]: libpod-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope: Deactivated successfully.
Dec 05 01:15:35 compute-0 conmon[217204]: conmon a6319edc16ec898c696e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope/container/memory.events
Dec 05 01:15:35 compute-0 podman[217183]: 2025-12-05 01:15:35.197329564 +0000 UTC m=+0.979872607 container died a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:35 compute-0 systemd[1]: libpod-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope: Deactivated successfully.
Dec 05 01:15:35 compute-0 podman[217168]: 2025-12-05 01:15:35.220475114 +0000 UTC m=+1.168388517 container died ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356-merged.mount: Deactivated successfully.
Dec 05 01:15:35 compute-0 podman[217183]: 2025-12-05 01:15:35.27408379 +0000 UTC m=+1.056626783 container remove a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:15:35 compute-0 systemd[1]: libpod-conmon-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope: Deactivated successfully.
Dec 05 01:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3-merged.mount: Deactivated successfully.
Dec 05 01:15:35 compute-0 sudo[217160]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:35 compute-0 podman[217168]: 2025-12-05 01:15:35.32634018 +0000 UTC m=+1.274253543 container remove ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:35 compute-0 systemd[1]: libpod-conmon-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope: Deactivated successfully.
Dec 05 01:15:35 compute-0 sudo[217007]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:35 compute-0 sudo[217255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:35 compute-0 sudo[217255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:35 compute-0 sudo[217255]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 05 01:15:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 05 01:15:35 compute-0 sudo[217280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:35 compute-0 sudo[217280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:35 compute-0 sudo[217280]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:35 compute-0 sudo[217309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:35 compute-0 sudo[217309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:35 compute-0 sudo[217309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:35 compute-0 sudo[217361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:15:35 compute-0 sudo[217361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:35 compute-0 sudo[217430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iommyaoldkxkizkewmnrdttbpfnprxbq ; /usr/bin/python3'
Dec 05 01:15:35 compute-0 sudo[217430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:36 compute-0 python3[217432]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 05 01:15:36 compute-0 sudo[217430]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:36 compute-0 ceph-mon[192914]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 01:15:36 compute-0 ceph-mon[192914]: Saving service mds.cephfs spec with placement compute-0
Dec 05 01:15:36 compute-0 ceph-mon[192914]: 4.c scrub starts
Dec 05 01:15:36 compute-0 ceph-mon[192914]: 4.c scrub ok
Dec 05 01:15:36 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 10 completed events
Dec 05 01:15:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:15:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.484110853 +0000 UTC m=+0.078712729 container create 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:15:36 compute-0 sudo[217552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaeuxohexxlhpgitcyyezrovyrcxnymc ; /usr/bin/python3'
Dec 05 01:15:36 compute-0 sudo[217552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.45150171 +0000 UTC m=+0.046103596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:36 compute-0 systemd[1]: Started libpod-conmon-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope.
Dec 05 01:15:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.638062807 +0000 UTC m=+0.232664713 container init 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.658184696 +0000 UTC m=+0.252786572 container start 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.665577984 +0000 UTC m=+0.260180040 container attach 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:36 compute-0 elated_ritchie[217557]: 167 167
Dec 05 01:15:36 compute-0 systemd[1]: libpod-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope: Deactivated successfully.
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.670622559 +0000 UTC m=+0.265224465 container died 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:15:36 compute-0 python3[217556]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897335.7043757-37208-109404068672281/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=1ccf2af1c4d9cd0d8c5f12e3a57b95f6f703bc49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:15:36 compute-0 sudo[217552]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2d3413a1c0ee88046defb6be889886c5e0f7eb31bfb1b3cefc3a4f03d480bf-merged.mount: Deactivated successfully.
Dec 05 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.745965087 +0000 UTC m=+0.340566933 container remove 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:36 compute-0 systemd[1]: libpod-conmon-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope: Deactivated successfully.
Dec 05 01:15:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 05 01:15:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 05 01:15:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.037942929 +0000 UTC m=+0.087540856 container create 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.003189508 +0000 UTC m=+0.052787505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:37 compute-0 systemd[1]: Started libpod-conmon-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope.
Dec 05 01:15:37 compute-0 sudo[217642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spaqqoayhyroizqozpzkqaehpeignils ; /usr/bin/python3'
Dec 05 01:15:37 compute-0 sudo[217642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.18992901 +0000 UTC m=+0.239526957 container init 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.205086866 +0000 UTC m=+0.254684793 container start 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 05 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.210509721 +0000 UTC m=+0.260107658 container attach 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:37 compute-0 ceph-mon[192914]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:37 compute-0 python3[217647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.424220246 +0000 UTC m=+0.113763649 container create 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.389263389 +0000 UTC m=+0.078806842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:37 compute-0 systemd[1]: Started libpod-conmon-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope.
Dec 05 01:15:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.604146835 +0000 UTC m=+0.293690308 container init 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.624381507 +0000 UTC m=+0.313924900 container start 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.631311793 +0000 UTC m=+0.320855236 container attach 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:38 compute-0 ceph-mon[192914]: 2.e scrub starts
Dec 05 01:15:38 compute-0 ceph-mon[192914]: 2.e scrub ok
Dec 05 01:15:38 compute-0 stupefied_buck[217644]: {
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_id": 0,
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "type": "bluestore"
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     },
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_id": 1,
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "type": "bluestore"
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     },
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_id": 2,
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:         "type": "bluestore"
Dec 05 01:15:38 compute-0 stupefied_buck[217644]:     }
Dec 05 01:15:38 compute-0 stupefied_buck[217644]: }
Dec 05 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec 05 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 05 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 05 01:15:38 compute-0 systemd[1]: libpod-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Deactivated successfully.
Dec 05 01:15:38 compute-0 systemd[1]: libpod-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Consumed 1.161s CPU time.
Dec 05 01:15:38 compute-0 systemd[1]: libpod-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope: Deactivated successfully.
Dec 05 01:15:38 compute-0 podman[217650]: 2025-12-05 01:15:38.413615458 +0000 UTC m=+1.103158811 container died 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:38 compute-0 podman[217718]: 2025-12-05 01:15:38.452022127 +0000 UTC m=+0.053063673 container died 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89-merged.mount: Deactivated successfully.
Dec 05 01:15:38 compute-0 podman[217650]: 2025-12-05 01:15:38.508998933 +0000 UTC m=+1.198542296 container remove 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050-merged.mount: Deactivated successfully.
Dec 05 01:15:38 compute-0 systemd[1]: libpod-conmon-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope: Deactivated successfully.
Dec 05 01:15:38 compute-0 sudo[217642]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:38 compute-0 podman[217718]: 2025-12-05 01:15:38.561480319 +0000 UTC m=+0.162521835 container remove 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:15:38 compute-0 systemd[1]: libpod-conmon-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Deactivated successfully.
Dec 05 01:15:38 compute-0 sudo[217361]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:38 compute-0 sudo[217744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:38 compute-0 sudo[217744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:38 compute-0 sudo[217744]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:38 compute-0 sudo[217769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:15:38 compute-0 sudo[217769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:38 compute-0 sudo[217769]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:39 compute-0 sudo[217794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:39 compute-0 sudo[217794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:39 compute-0 sudo[217794]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:39 compute-0 sudo[217819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:39 compute-0 sudo[217819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:39 compute-0 sudo[217819]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:39 compute-0 ceph-mon[192914]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 05 01:15:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 05 01:15:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:39 compute-0 sudo[217844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:39 compute-0 sudo[217844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:39 compute-0 sudo[217844]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:39 compute-0 sudo[217869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:15:39 compute-0 sudo[217869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:39 compute-0 sudo[217917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awrzqttwzpkdzqzpaekvchwfwthooeex ; /usr/bin/python3'
Dec 05 01:15:39 compute-0 sudo[217917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:39 compute-0 python3[217919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:39 compute-0 podman[217935]: 2025-12-05 01:15:39.910057613 +0000 UTC m=+0.083407555 container create 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:39 compute-0 podman[217935]: 2025-12-05 01:15:39.87857452 +0000 UTC m=+0.051924472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:39 compute-0 systemd[1]: Started libpod-conmon-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope.
Dec 05 01:15:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.055747946 +0000 UTC m=+0.229097958 container init 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.072961787 +0000 UTC m=+0.246311699 container start 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:15:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.081429304 +0000 UTC m=+0.254779266 container attach 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:15:40 compute-0 podman[218009]: 2025-12-05 01:15:40.524705867 +0000 UTC m=+0.131704188 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:15:40 compute-0 podman[218009]: 2025-12-05 01:15:40.647171558 +0000 UTC m=+0.254169869 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:15:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 05 01:15:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077586626' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:15:40 compute-0 pensive_zhukovsky[217973]: 
Dec 05 01:15:40 compute-0 pensive_zhukovsky[217973]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":193,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84148224,"bytes_avail":64327778304,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-12-05T01:15:38.079813+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 05 01:15:40 compute-0 systemd[1]: libpod-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope: Deactivated successfully.
Dec 05 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.799294213 +0000 UTC m=+0.972644125 container died 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c-merged.mount: Deactivated successfully.
Dec 05 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.855292483 +0000 UTC m=+1.028642395 container remove 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:15:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Dec 05 01:15:40 compute-0 systemd[1]: libpod-conmon-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope: Deactivated successfully.
Dec 05 01:15:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Dec 05 01:15:40 compute-0 sudo[217917]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 05 01:15:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 05 01:15:41 compute-0 sudo[218132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxobwrknbianzcohwyqylyiraxnpqyds ; /usr/bin/python3'
Dec 05 01:15:41 compute-0 sudo[218132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:41 compute-0 ceph-mon[192914]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4077586626' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:15:41 compute-0 ceph-mon[192914]: 3.b scrub starts
Dec 05 01:15:41 compute-0 ceph-mon[192914]: 3.b scrub ok
Dec 05 01:15:41 compute-0 python3[218144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.515778845 +0000 UTC m=+0.128623646 container create 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.469061954 +0000 UTC m=+0.081906805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:41 compute-0 systemd[1]: Started libpod-conmon-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope.
Dec 05 01:15:41 compute-0 sudo[217869]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 05 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.686225121 +0000 UTC m=+0.299069892 container init 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:15:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 05 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.708363784 +0000 UTC m=+0.321208545 container start 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.712807173 +0000 UTC m=+0.325651974 container attach 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:41 compute-0 sudo[218197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:41 compute-0 sudo[218197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:41 compute-0 sudo[218197]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:41 compute-0 sudo[218223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:41 compute-0 sudo[218223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:41 compute-0 sudo[218223]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Dec 05 01:15:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:42 compute-0 sudo[218248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:42 compute-0 sudo[218248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:42 compute-0 sudo[218248]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:42 compute-0 sudo[218292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:15:42 compute-0 sudo[218292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849309770' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:15:42 compute-0 busy_goldwasser[218194]: 
Dec 05 01:15:42 compute-0 busy_goldwasser[218194]: {"epoch":1,"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","modified":"2025-12-05T01:12:19.563284Z","created":"2025-12-05T01:12:19.563284Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 05 01:15:42 compute-0 busy_goldwasser[218194]: dumped monmap epoch 1
Dec 05 01:15:42 compute-0 systemd[1]: libpod-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope: Deactivated successfully.
Dec 05 01:15:42 compute-0 podman[218162]: 2025-12-05 01:15:42.449450555 +0000 UTC m=+1.062295346 container died 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435-merged.mount: Deactivated successfully.
Dec 05 01:15:42 compute-0 podman[218162]: 2025-12-05 01:15:42.522553723 +0000 UTC m=+1.135398484 container remove 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:15:42 compute-0 systemd[1]: libpod-conmon-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope: Deactivated successfully.
Dec 05 01:15:42 compute-0 sudo[218132]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 2.10 deep-scrub starts
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 2.10 deep-scrub ok
Dec 05 01:15:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 4.15 scrub starts
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 4.15 scrub ok
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 3.d deep-scrub starts
Dec 05 01:15:42 compute-0 ceph-mon[192914]: 3.d deep-scrub ok
Dec 05 01:15:42 compute-0 ceph-mon[192914]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1849309770' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:15:42 compute-0 sudo[218292]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1ee6ef4c-f652-40db-ae28-fefd31679028 does not exist
Dec 05 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7339d669-5b71-4f3d-a29e-0485ced1233e does not exist
Dec 05 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d80e170-cb23-4dd5-9b82-ffe46d5fed91 does not exist
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:42 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 05 01:15:42 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 05 01:15:43 compute-0 sudo[218360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:43 compute-0 sudo[218360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:43 compute-0 sudo[218405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aubxxsljqsgfoeccnwacmqjtixetupso ; /usr/bin/python3'
Dec 05 01:15:43 compute-0 sudo[218360]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:43 compute-0 sudo[218405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:43 compute-0 sudo[218410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:43 compute-0 sudo[218410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:43 compute-0 sudo[218410]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:43 compute-0 python3[218409]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:43 compute-0 sudo[218435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:43 compute-0 sudo[218435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:43 compute-0 sudo[218435]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.371693619 +0000 UTC m=+0.103349739 container create d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.338822938 +0000 UTC m=+0.070479128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:43 compute-0 systemd[1]: Started libpod-conmon-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope.
Dec 05 01:15:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:43 compute-0 sudo[218471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:15:43 compute-0 sudo[218471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.525632102 +0000 UTC m=+0.257288252 container init d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.536298278 +0000 UTC m=+0.267954388 container start d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.541499848 +0000 UTC m=+0.273156008 container attach d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:43 compute-0 ceph-mon[192914]: 3.10 scrub starts
Dec 05 01:15:43 compute-0 ceph-mon[192914]: 3.10 scrub ok
Dec 05 01:15:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 05 01:15:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 05 01:15:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.115188825 +0000 UTC m=+0.074363153 container create 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.087642847 +0000 UTC m=+0.046817195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:44 compute-0 systemd[1]: Started libpod-conmon-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope.
Dec 05 01:15:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec 05 01:15:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2977895372' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 05 01:15:44 compute-0 competent_bhaskara[218491]: [client.openstack]
Dec 05 01:15:44 compute-0 competent_bhaskara[218491]:         key = AQBBMTJpAAAAABAAQWv2lkQhfZ74+C7m+rCDZA==
Dec 05 01:15:44 compute-0 competent_bhaskara[218491]:         caps mgr = "allow *"
Dec 05 01:15:44 compute-0 competent_bhaskara[218491]:         caps mon = "profile rbd"
Dec 05 01:15:44 compute-0 competent_bhaskara[218491]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.282408844 +0000 UTC m=+0.241583202 container init 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:44 compute-0 systemd[1]: libpod-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope: Deactivated successfully.
Dec 05 01:15:44 compute-0 podman[218443]: 2025-12-05 01:15:44.288946319 +0000 UTC m=+1.020602429 container died d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.301355082 +0000 UTC m=+0.260529410 container start 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.308519654 +0000 UTC m=+0.267694022 container attach 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:44 compute-0 quizzical_haibt[218576]: 167 167
Dec 05 01:15:44 compute-0 systemd[1]: libpod-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope: Deactivated successfully.
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.313849107 +0000 UTC m=+0.273023445 container died 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc-merged.mount: Deactivated successfully.
Dec 05 01:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf6429769974d326af0ab38a28e6f5ed29e07f51aed2f8141f42326a9a99c76-merged.mount: Deactivated successfully.
Dec 05 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.403130858 +0000 UTC m=+0.362305156 container remove 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:15:44 compute-0 podman[218443]: 2025-12-05 01:15:44.412485689 +0000 UTC m=+1.144141809 container remove d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:44 compute-0 systemd[1]: libpod-conmon-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope: Deactivated successfully.
Dec 05 01:15:44 compute-0 systemd[1]: libpod-conmon-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope: Deactivated successfully.
Dec 05 01:15:44 compute-0 sudo[218405]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.614680015 +0000 UTC m=+0.068221459 container create 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:44 compute-0 ceph-mon[192914]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2977895372' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 05 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.590725163 +0000 UTC m=+0.044266647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:44 compute-0 systemd[1]: Started libpod-conmon-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope.
Dec 05 01:15:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.781542875 +0000 UTC m=+0.235084369 container init 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.794853881 +0000 UTC m=+0.248395355 container start 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.801663194 +0000 UTC m=+0.255204658 container attach 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:15:44 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 05 01:15:44 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 05 01:15:45 compute-0 ceph-mon[192914]: 2.12 scrub starts
Dec 05 01:15:45 compute-0 ceph-mon[192914]: 2.12 scrub ok
Dec 05 01:15:45 compute-0 ceph-mon[192914]: 3.13 scrub starts
Dec 05 01:15:45 compute-0 ceph-mon[192914]: 3.13 scrub ok
Dec 05 01:15:46 compute-0 sudo[218804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qelewssspjnktimylckxjgptsknfybpf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897345.4624417-37280-117956692638760/async_wrapper.py j19528271487 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897345.4624417-37280-117956692638760/AnsiballZ_command.py _'
Dec 05 01:15:46 compute-0 sudo[218804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:46 compute-0 awesome_blackwell[218630]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:15:46 compute-0 awesome_blackwell[218630]: --> relative data size: 1.0
Dec 05 01:15:46 compute-0 awesome_blackwell[218630]: --> All data devices are unavailable
Dec 05 01:15:46 compute-0 systemd[1]: libpod-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Deactivated successfully.
Dec 05 01:15:46 compute-0 systemd[1]: libpod-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Consumed 1.195s CPU time.
Dec 05 01:15:46 compute-0 podman[218614]: 2025-12-05 01:15:46.07643716 +0000 UTC m=+1.529978634 container died 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a-merged.mount: Deactivated successfully.
Dec 05 01:15:46 compute-0 podman[218614]: 2025-12-05 01:15:46.15519602 +0000 UTC m=+1.608737474 container remove 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:15:46 compute-0 systemd[1]: libpod-conmon-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Deactivated successfully.
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:46 compute-0 sudo[218471]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:46 compute-0 ansible-async_wrapper.py[218808]: Invoked with j19528271487 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897345.4624417-37280-117956692638760/AnsiballZ_command.py _
Dec 05 01:15:46 compute-0 ansible-async_wrapper.py[218826]: Starting module and watcher
Dec 05 01:15:46 compute-0 ansible-async_wrapper.py[218826]: Start watching 218829 (30)
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:46 compute-0 ansible-async_wrapper.py[218829]: Start module (218829)
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:15:46 compute-0 ansible-async_wrapper.py[218808]: Return async_wrapper task started.
Dec 05 01:15:46 compute-0 sudo[218804]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:46 compute-0 sudo[218824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:46 compute-0 sudo[218824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:46 compute-0 sudo[218824]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:46 compute-0 python3[218832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:46 compute-0 sudo[218853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:46 compute-0 sudo[218853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:46 compute-0 sudo[218853]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.452314099 +0000 UTC m=+0.064861039 container create e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:46 compute-0 systemd[1]: Started libpod-conmon-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope.
Dec 05 01:15:46 compute-0 sudo[218888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:46 compute-0 sudo[218888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:46 compute-0 sudo[218888]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.422753087 +0000 UTC m=+0.035300057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.556805278 +0000 UTC m=+0.169352248 container init e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.567260468 +0000 UTC m=+0.179807408 container start e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.572135498 +0000 UTC m=+0.184682438 container attach e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:15:46 compute-0 sudo[218921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:15:46 compute-0 sudo[218921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:46 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 05 01:15:46 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 05 01:15:46 compute-0 ceph-mon[192914]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 05 01:15:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 05 01:15:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.080391283 +0000 UTC m=+0.064280143 container create 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:15:47 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:47 compute-0 unruffled_satoshi[218917]: 
Dec 05 01:15:47 compute-0 unruffled_satoshi[218917]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 01:15:47 compute-0 systemd[1]: Started libpod-conmon-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope.
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.059473123 +0000 UTC m=+0.043361993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:47 compute-0 systemd[1]: libpod-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope: Deactivated successfully.
Dec 05 01:15:47 compute-0 podman[218876]: 2025-12-05 01:15:47.162314687 +0000 UTC m=+0.774861627 container died e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304-merged.mount: Deactivated successfully.
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.226285161 +0000 UTC m=+0.210174051 container init 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.233395251 +0000 UTC m=+0.217284101 container start 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:47 compute-0 podman[218876]: 2025-12-05 01:15:47.238112308 +0000 UTC m=+0.850659248 container remove e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:15:47 compute-0 cranky_dhawan[219021]: 167 167
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.246241786 +0000 UTC m=+0.230130666 container attach 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:47 compute-0 systemd[1]: libpod-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope: Deactivated successfully.
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.248407034 +0000 UTC m=+0.232295924 container died 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:15:47 compute-0 systemd[1]: libpod-conmon-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope: Deactivated successfully.
Dec 05 01:15:47 compute-0 ansible-async_wrapper.py[218829]: Module complete (218829)
Dec 05 01:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7bdb37e729878f1f7be674a19b8c55c91938140e1feaccfdaee56011b75dd56-merged.mount: Deactivated successfully.
Dec 05 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.331427767 +0000 UTC m=+0.315316617 container remove 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:47 compute-0 systemd[1]: libpod-conmon-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope: Deactivated successfully.
Dec 05 01:15:47 compute-0 sudo[219098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjmnkhuqgolwkoztlsifaihdvbwlmgwf ; /usr/bin/python3'
Dec 05 01:15:47 compute-0 sudo[219098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.533608993 +0000 UTC m=+0.072089652 container create 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:15:47 compute-0 python3[219103]: ansible-ansible.legacy.async_status Invoked with jid=j19528271487.218808 mode=status _async_dir=/root/.ansible_async
Dec 05 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.507604547 +0000 UTC m=+0.046085236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:47 compute-0 sudo[219098]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:47 compute-0 systemd[1]: Started libpod-conmon-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope.
Dec 05 01:15:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.709870885 +0000 UTC m=+0.248351544 container init 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:15:47 compute-0 ceph-mon[192914]: 4.16 scrub starts
Dec 05 01:15:47 compute-0 ceph-mon[192914]: 4.16 scrub ok
Dec 05 01:15:47 compute-0 ceph-mon[192914]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.736985771 +0000 UTC m=+0.275466410 container start 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.743139436 +0000 UTC m=+0.281620125 container attach 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:15:47 compute-0 sudo[219174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhxtxkikpxxjqmislqzljcciexglnuxb ; /usr/bin/python3'
Dec 05 01:15:47 compute-0 sudo[219174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:47 compute-0 python3[219176]: ansible-ansible.legacy.async_status Invoked with jid=j19528271487.218808 mode=cleanup _async_dir=/root/.ansible_async
Dec 05 01:15:47 compute-0 sudo[219174]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 05 01:15:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 05 01:15:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:48 compute-0 sudo[219202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvoulvfbvlaordhabucjfmfnoawbbbmo ; /usr/bin/python3'
Dec 05 01:15:48 compute-0 sudo[219202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:48 compute-0 eloquent_villani[219123]: {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     "0": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "devices": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "/dev/loop3"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             ],
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_name": "ceph_lv0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_size": "21470642176",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "name": "ceph_lv0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "tags": {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.crush_device_class": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.encrypted": "0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_id": "0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.vdo": "0"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             },
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "vg_name": "ceph_vg0"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         }
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     ],
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     "1": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "devices": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "/dev/loop4"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             ],
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_name": "ceph_lv1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_size": "21470642176",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "name": "ceph_lv1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "tags": {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.crush_device_class": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.encrypted": "0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_id": "1",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.vdo": "0"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             },
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "vg_name": "ceph_vg1"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         }
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     ],
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     "2": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "devices": [
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "/dev/loop5"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             ],
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_name": "ceph_lv2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_size": "21470642176",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "name": "ceph_lv2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "tags": {
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.cluster_name": "ceph",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.crush_device_class": "",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.encrypted": "0",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osd_id": "2",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:                 "ceph.vdo": "0"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             },
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "type": "block",
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:             "vg_name": "ceph_vg2"
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:         }
Dec 05 01:15:48 compute-0 eloquent_villani[219123]:     ]
Dec 05 01:15:48 compute-0 eloquent_villani[219123]: }
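The JSON block above is the `ceph-volume lvm list --format json` output from the one-off cephadm container: a map of OSD id ("0".."2") to logical-volume records whose `tags` carry the cluster fsid, osd_fsid and encryption state. A minimal parsing sketch, assuming the payload were saved to a hypothetical file lvm_list.json:

    import json

    # Parse the `ceph-volume lvm list --format json` payload shown above.
    with open("lvm_list.json") as f:   # hypothetical capture of the container output
        lvm = json.load(f)

    for osd_id, volumes in sorted(lvm.items()):
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

For the inventory above this prints one line per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.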
Dec 05 01:15:48 compute-0 systemd[1]: libpod-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope: Deactivated successfully.
Dec 05 01:15:48 compute-0 podman[219106]: 2025-12-05 01:15:48.584014741 +0000 UTC m=+1.122495390 container died 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad-merged.mount: Deactivated successfully.
Dec 05 01:15:48 compute-0 python3[219206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:48 compute-0 podman[219106]: 2025-12-05 01:15:48.668219716 +0000 UTC m=+1.206700355 container remove 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:15:48 compute-0 systemd[1]: libpod-conmon-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope: Deactivated successfully.
Dec 05 01:15:48 compute-0 sudo[218921]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:48 compute-0 ceph-mon[192914]: 2.14 scrub starts
Dec 05 01:15:48 compute-0 ceph-mon[192914]: 2.14 scrub ok
Dec 05 01:15:48 compute-0 ceph-mon[192914]: 3.14 scrub starts
Dec 05 01:15:48 compute-0 ceph-mon[192914]: 3.14 scrub ok
Dec 05 01:15:48 compute-0 ceph-mon[192914]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.777633197 +0000 UTC m=+0.095815548 container create bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.741132549 +0000 UTC m=+0.059314930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:48 compute-0 systemd[1]: Started libpod-conmon-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope.
Dec 05 01:15:48 compute-0 sudo[219233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:48 compute-0 sudo[219233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:48 compute-0 sudo[219233]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.939142582 +0000 UTC m=+0.257324973 container init bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.95774185 +0000 UTC m=+0.275924211 container start bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:15:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 05 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.965111518 +0000 UTC m=+0.283293889 container attach bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:48 compute-0 sudo[219264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 05 01:15:48 compute-0 sudo[219264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:48 compute-0 sudo[219264]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:49 compute-0 sudo[219290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:49 compute-0 sudo[219290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:49 compute-0 sudo[219290]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:49 compute-0 sudo[219315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:15:49 compute-0 sudo[219315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:49 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:49 compute-0 interesting_diffie[219260]: 
Dec 05 01:15:49 compute-0 interesting_diffie[219260]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 05 01:15:49 compute-0 systemd[1]: libpod-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope: Deactivated successfully.
Dec 05 01:15:49 compute-0 podman[219388]: 2025-12-05 01:15:49.64051856 +0000 UTC m=+0.041890923 container died bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef-merged.mount: Deactivated successfully.
Dec 05 01:15:49 compute-0 podman[219388]: 2025-12-05 01:15:49.711106661 +0000 UTC m=+0.112479014 container remove bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:49 compute-0 systemd[1]: libpod-conmon-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope: Deactivated successfully.
Dec 05 01:15:49 compute-0 ceph-mon[192914]: 2.1a scrub starts
Dec 05 01:15:49 compute-0 ceph-mon[192914]: 2.1a scrub ok
Dec 05 01:15:49 compute-0 sudo[219202]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.831788634 +0000 UTC m=+0.067571281 container create a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.80028868 +0000 UTC m=+0.036071387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:49 compute-0 systemd[1]: Started libpod-conmon-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope.
Dec 05 01:15:49 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 05 01:15:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:49 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 05 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.98321542 +0000 UTC m=+0.218998087 container init a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.009956676 +0000 UTC m=+0.245739293 container start a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.014420876 +0000 UTC m=+0.250203543 container attach a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:15:50 compute-0 sweet_franklin[219436]: 167 167
Dec 05 01:15:50 compute-0 systemd[1]: libpod-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope: Deactivated successfully.
Dec 05 01:15:50 compute-0 conmon[219436]: conmon a8f12a6bcf8a124aa994 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope/container/memory.events
Dec 05 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.023000995 +0000 UTC m=+0.258783732 container died a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:50 compute-0 podman[219427]: 2025-12-05 01:15:50.037829803 +0000 UTC m=+0.150358539 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca6d39e744ead1c31b2e048923f13404b3ba97a6fa011a37801093368b1437c-merged.mount: Deactivated successfully.
Dec 05 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.082861639 +0000 UTC m=+0.318644256 container remove a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:15:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:50 compute-0 systemd[1]: libpod-conmon-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope: Deactivated successfully.
Dec 05 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.277870683 +0000 UTC m=+0.060273376 container create 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:50 compute-0 systemd[1]: Started libpod-conmon-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope.
Dec 05 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.253008027 +0000 UTC m=+0.035410770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.426099073 +0000 UTC m=+0.208501826 container init 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.458665106 +0000 UTC m=+0.241067809 container start 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.464643826 +0000 UTC m=+0.247046559 container attach 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Dec 05 01:15:50 compute-0 sudo[219517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jihldgioxgdsjbzabucuxrzztuxivdcf ; /usr/bin/python3'
Dec 05 01:15:50 compute-0 sudo[219517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:50 compute-0 python3[219520]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.72799671 +0000 UTC m=+0.077535788 container create cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:15:50 compute-0 ceph-mon[192914]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:50 compute-0 ceph-mon[192914]: 2.1e scrub starts
Dec 05 01:15:50 compute-0 ceph-mon[192914]: 2.1e scrub ok
Dec 05 01:15:50 compute-0 ceph-mon[192914]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.688537953 +0000 UTC m=+0.038077111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:50 compute-0 systemd[1]: Started libpod-conmon-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope.
Dec 05 01:15:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.892112516 +0000 UTC m=+0.241651624 container init cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.905265149 +0000 UTC m=+0.254804217 container start cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.911056464 +0000 UTC m=+0.260595572 container attach cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:15:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 05 01:15:50 compute-0 podman[219538]: 2025-12-05 01:15:50.933586447 +0000 UTC m=+0.105572189 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:15:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 05 01:15:51 compute-0 podman[219545]: 2025-12-05 01:15:51.013689803 +0000 UTC m=+0.146726461 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
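The two health_status events above (ceilometer_agent_compute and podman_exporter/ovn_controller) embed their config_data label as a Python literal with single quotes, not JSON, so json.loads would reject it. A small recovery sketch, assuming `label` holds the config_data value cut out of the journal line (the string below is a trimmed, hypothetical excerpt of the logged value):

    import ast

    # config_data in the podman health_status events is a Python literal,
    # so ast.literal_eval (not json.loads) parses it safely.
    label = ("{'image': 'quay.io/podified-antelope-centos9/"
             "openstack-ovn-controller:current-podified', 'net': 'host'}")
    config = ast.literal_eval(label)
    print(config["image"], config["net"])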
Dec 05 01:15:51 compute-0 ansible-async_wrapper.py[218826]: Done in kid B.
Dec 05 01:15:51 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:51 compute-0 cool_yonath[219536]: 
Dec 05 01:15:51 compute-0 cool_yonath[219536]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
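That single line is the full `orch ls --export -f json` answer: specs for crash, mds.cephfs, mgr, mon and osd.default_drive_group, plus an rgw spec bound to 192.168.122.0/24 on port 8082. A summarizing sketch, again assuming the payload were captured to a hypothetical orch_ls.json:

    import json

    # `specs` is the `ceph orch ls --export -f json` list printed above.
    with open("orch_ls.json") as f:        # hypothetical capture
        specs = json.load(f)

    for spec in specs:
        placement = (spec["placement"].get("hosts")
                     or spec["placement"].get("host_pattern"))
        print(f"{spec['service_name']}: placement={placement}")
        if spec["service_type"] == "osd":
            print("  data devices:",
                  ", ".join(spec["spec"]["data_devices"]["paths"]))
        elif spec["service_type"] == "rgw":
            print("  frontend port:", spec["spec"]["rgw_frontend_port"])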
Dec 05 01:15:51 compute-0 systemd[1]: libpod-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope: Deactivated successfully.
Dec 05 01:15:51 compute-0 podman[219521]: 2025-12-05 01:15:51.51380705 +0000 UTC m=+0.863346128 container died cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:51 compute-0 peaceful_benz[219490]: {
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_id": 0,
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "type": "bluestore"
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     },
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_id": 1,
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "type": "bluestore"
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     },
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_id": 2,
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:         "type": "bluestore"
Dec 05 01:15:51 compute-0 peaceful_benz[219490]:     }
Dec 05 01:15:51 compute-0 peaceful_benz[219490]: }
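This second inventory, from the `ceph-volume raw list` call logged at 01:15:49, is keyed by osd_uuid; the same UUIDs appear as the ceph.osd_fsid tags in the lvm listing earlier in the log. A consistency-check sketch over the two payloads, with the same hypothetical capture filenames as above:

    import json

    # `raw list` is keyed by osd_uuid; `lvm list` stores the same value
    # in the ceph.osd_fsid tag, so the two inventories must agree.
    raw = json.load(open("raw_list.json"))   # hypothetical captures of the
    lvm = json.load(open("lvm_list.json"))   # container JSON shown in this log

    fsid_to_osd = {vol["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, vols in lvm.items() for vol in vols}
    for osd_uuid, entry in raw.items():
        assert fsid_to_osd[osd_uuid] == str(entry["osd_id"])
        print(f"osd.{entry['osd_id']} ({entry['type']}) on {entry['device']}")

For this run the check passes for all three OSDs: 8c4de221... maps to osd.0, 944e6457... to osd.1 and adfceb0a... to osd.2, each bluestore on its ceph_vgN-ceph_lvN mapper device.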
Dec 05 01:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9-merged.mount: Deactivated successfully.
Dec 05 01:15:51 compute-0 systemd[1]: libpod-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Deactivated successfully.
Dec 05 01:15:51 compute-0 systemd[1]: libpod-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Consumed 1.114s CPU time.
Dec 05 01:15:51 compute-0 podman[219473]: 2025-12-05 01:15:51.592624801 +0000 UTC m=+1.375027484 container died 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:51 compute-0 podman[219521]: 2025-12-05 01:15:51.609214685 +0000 UTC m=+0.958753743 container remove cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b-merged.mount: Deactivated successfully.
Dec 05 01:15:51 compute-0 systemd[1]: libpod-conmon-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope: Deactivated successfully.
Dec 05 01:15:51 compute-0 sudo[219517]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:51 compute-0 podman[219473]: 2025-12-05 01:15:51.675608214 +0000 UTC m=+1.458010907 container remove 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:15:51 compute-0 systemd[1]: libpod-conmon-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Deactivated successfully.
Dec 05 01:15:51 compute-0 sudo[219315]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:51 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1))
Dec 05 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 01:15:51 compute-0 ceph-mon[192914]: 5.6 scrub starts
Dec 05 01:15:51 compute-0 ceph-mon[192914]: 5.6 scrub ok
Dec 05 01:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:51 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec 05 01:15:51 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec 05 01:15:51 compute-0 sudo[219660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:51 compute-0 sudo[219660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:51 compute-0 sudo[219660]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 05 01:15:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 05 01:15:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 05 01:15:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 05 01:15:51 compute-0 sudo[219685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:51 compute-0 sudo[219685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:51 compute-0 sudo[219685]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:52 compute-0 sudo[219710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:52 compute-0 sudo[219710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:52 compute-0 sudo[219710]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:52 compute-0 sudo[219735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:15:52 compute-0 sudo[219735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:52 compute-0 sudo[219808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaykjeozmepllittnjdjrrujcmucwkon ; /usr/bin/python3'
Dec 05 01:15:52 compute-0 sudo[219808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.589805362 +0000 UTC m=+0.075388481 container create c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:15:52 compute-0 python3[219817]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.552101692 +0000 UTC m=+0.037684851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:52 compute-0 systemd[1]: Started libpod-conmon-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope.
Dec 05 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.686935463 +0000 UTC m=+0.068173567 container create ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:15:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.727643364 +0000 UTC m=+0.213226443 container init c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.74017297 +0000 UTC m=+0.225756089 container start c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:52 compute-0 systemd[1]: Started libpod-conmon-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope.
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.745970225 +0000 UTC m=+0.231553334 container attach c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:15:52 compute-0 objective_chandrasekhar[219849]: 167 167
Dec 05 01:15:52 compute-0 systemd[1]: libpod-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope: Deactivated successfully.
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.748923194 +0000 UTC m=+0.234506283 container died c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.661077531 +0000 UTC m=+0.042315645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:52 compute-0 ceph-mon[192914]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 05 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 05 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:52 compute-0 ceph-mon[192914]: Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec 05 01:15:52 compute-0 ceph-mon[192914]: 3.19 scrub starts
Dec 05 01:15:52 compute-0 ceph-mon[192914]: 5.8 scrub starts
Dec 05 01:15:52 compute-0 ceph-mon[192914]: 3.19 scrub ok
Dec 05 01:15:52 compute-0 ceph-mon[192914]: 5.8 scrub ok
Dec 05 01:15:52 compute-0 ceph-mon[192914]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.808381747 +0000 UTC m=+0.189619891 container init ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.819512715 +0000 UTC m=+0.200750819 container start ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7aaf0a234a89a073c504672e20a0a83314dacb3cdbc08eed491ae5429c17a5-merged.mount: Deactivated successfully.
Dec 05 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.82494517 +0000 UTC m=+0.206183324 container attach ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.849173029 +0000 UTC m=+0.334756118 container remove c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:15:52 compute-0 systemd[1]: libpod-conmon-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope: Deactivated successfully.
Dec 05 01:15:52 compute-0 podman[219873]: 2025-12-05 01:15:52.915417844 +0000 UTC m=+0.114289473 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:15:52 compute-0 systemd[1]: Reloading.
Dec 05 01:15:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 05 01:15:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 05 01:15:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 05 01:15:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 05 01:15:53 compute-0 systemd-rc-local-generator[219913]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:15:53 compute-0 systemd-sysv-generator[219919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:15:53 compute-0 systemd[1]: Reloading.
Dec 05 01:15:53 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:53 compute-0 jovial_rosalind[219860]: 
Dec 05 01:15:53 compute-0 jovial_rosalind[219860]: [{"container_id": "f9154648f016", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.41%", "created": "2025-12-05T01:13:50.760162Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-05T01:13:50.817961Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601116Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-12-05T01:13:50.651037Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@crash.compute-0", "version": "18.2.7"}, {"container_id": "08717604c330", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.75%", "created": "2025-12-05T01:12:30.242897Z", "daemon_id": "compute-0.afshmv", "daemon_name": "mgr.compute-0.afshmv", "daemon_type": "mgr", "events": ["2025-12-05T01:14:54.814152Z daemon:mgr.compute-0.afshmv [INFO] \"Reconfigured mgr.compute-0.afshmv on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.600971Z", "memory_usage": 549453824, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-05T01:12:30.052616Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.afshmv", "version": "18.2.7"}, {"container_id": "aab8d24497e0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.02%", "created": "2025-12-05T01:12:22.738694Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-05T01:14:53.891126Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.600092Z", "memory_request": 2147483648, "memory_usage": 39992688, "ports": [], "service_name": "mon", "started": "2025-12-05T01:12:26.716454Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0", "version": "18.2.7"}, {"container_id": "a1423cde747e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.06%", "created": "2025-12-05T01:14:21.724026Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-05T01:14:21.812000Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601255Z", "memory_request": 4294967296, "memory_usage": 66857205, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:21.508629Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.0", "version": "18.2.7"}, {"container_id": "4bb9d1516855", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.67%", "created": "2025-12-05T01:14:27.459185Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-05T01:14:27.573058Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601389Z", "memory_request": 4294967296, "memory_usage": 67454894, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:27.278124Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.1", "version": "18.2.7"}, {"container_id": "6e6a7cedb28b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.85%", "created": "2025-12-05T01:14:34.085370Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-05T01:14:34.191548Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601519Z", "memory_request": 4294967296, "memory_usage": 66280488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:33.869842Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.2", "version": "18.2.7"}]
Dec 05 01:15:53 compute-0 podman[219837]: 2025-12-05 01:15:53.434204801 +0000 UTC m=+0.815442925 container died ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:15:53 compute-0 systemd-sysv-generator[219983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:15:53 compute-0 systemd-rc-local-generator[219978]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:15:53 compute-0 systemd[1]: libpod-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope: Deactivated successfully.
Dec 05 01:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b-merged.mount: Deactivated successfully.
Dec 05 01:15:53 compute-0 podman[219837]: 2025-12-05 01:15:53.72351281 +0000 UTC m=+1.104750904 container remove ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:53 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.umynax for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:15:53 compute-0 systemd[1]: libpod-conmon-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope: Deactivated successfully.
Dec 05 01:15:53 compute-0 sudo[219808]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:53 compute-0 ceph-mon[192914]: 5.a scrub starts
Dec 05 01:15:53 compute-0 ceph-mon[192914]: 5.a scrub ok
Dec 05 01:15:53 compute-0 ceph-mon[192914]: 3.1a scrub starts
Dec 05 01:15:53 compute-0 ceph-mon[192914]: 3.1a scrub ok
Dec 05 01:15:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 05 01:15:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 05 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.010789306 +0000 UTC m=+0.052469017 container create 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.umynax supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:53.986481934 +0000 UTC m=+0.028161705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.085563579 +0000 UTC m=+0.127243370 container init 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.104564618 +0000 UTC m=+0.146244369 container start 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:54 compute-0 bash[220047]: 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f
Dec 05 01:15:54 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.umynax for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:15:54 compute-0 radosgw[220065]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:15:54 compute-0 radosgw[220065]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec 05 01:15:54 compute-0 radosgw[220065]: framework: beast
Dec 05 01:15:54 compute-0 radosgw[220065]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 05 01:15:54 compute-0 radosgw[220065]: init_numa not setting numa affinity
Dec 05 01:15:54 compute-0 sudo[219735]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1))
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1))
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec 05 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec 05 01:15:54 compute-0 sudo[220127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:54 compute-0 sudo[220127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:54 compute-0 sudo[220127]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:54 compute-0 sudo[220152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:54 compute-0 sudo[220152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:54 compute-0 sudo[220152]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:54 compute-0 sudo[220177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:54 compute-0 sudo[220177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:54 compute-0 sudo[220177]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:54 compute-0 sudo[220226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yihnmsxpgyckjruwuzorclxgusunltxk ; /usr/bin/python3'
Dec 05 01:15:54 compute-0 sudo[220226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:54 compute-0 sudo[220225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec 05 01:15:54 compute-0 sudo[220225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:54 compute-0 python3[220240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 05 01:15:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 05 01:15:54 compute-0 ceph-mon[192914]: 3.1c scrub starts
Dec 05 01:15:54 compute-0 ceph-mon[192914]: 3.1c scrub ok
Dec 05 01:15:54 compute-0 ceph-mon[192914]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: Saving service rgw.rgw spec with placement compute-0
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 05 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:54 compute-0 ceph-mon[192914]: Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 05 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec 05 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 05 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.83186563 +0000 UTC m=+0.060997225 container create 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:54 compute-0 systemd[1]: Started libpod-conmon-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope.
Dec 05 01:15:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.810324953 +0000 UTC m=+0.039456568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 05 01:15:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 42 pg[8.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 05 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.946631214 +0000 UTC m=+0.175762839 container init 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.960743262 +0000 UTC m=+0.189874867 container start 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.965922921 +0000 UTC m=+0.195054546 container attach 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:15:55 compute-0 podman[220277]: 2025-12-05 01:15:55.010621608 +0000 UTC m=+0.131091602 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.125434054 +0000 UTC m=+0.056490575 container create cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:55 compute-0 systemd[1]: Started libpod-conmon-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope.
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.104344399 +0000 UTC m=+0.035400950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.238657276 +0000 UTC m=+0.169713807 container init cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.248345246 +0000 UTC m=+0.179401777 container start cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.253256428 +0000 UTC m=+0.184312979 container attach cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:55 compute-0 musing_robinson[220339]: 167 167
Dec 05 01:15:55 compute-0 systemd[1]: libpod-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope: Deactivated successfully.
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.263238965 +0000 UTC m=+0.194295586 container died cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-022d498befe28f59a077d2cd99588156b644cde4140ee5bf5be7113f9798c047-merged.mount: Deactivated successfully.
Dec 05 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.333138137 +0000 UTC m=+0.264194668 container remove cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:55 compute-0 systemd[1]: libpod-conmon-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope: Deactivated successfully.
Dec 05 01:15:55 compute-0 systemd[1]: Reloading.
Dec 05 01:15:55 compute-0 systemd-sysv-generator[220401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:15:55 compute-0 systemd-rc-local-generator[220398]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 05 01:15:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381832146' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:15:55 compute-0 pedantic_robinson[220281]: 
Dec 05 01:15:55 compute-0 pedantic_robinson[220281]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":208,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84131840,"bytes_avail":64327794688,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-05T01:15:42.082279+0000","services":{}},"progress_events":{"90453ab0-db65-46d9-9577-6791a8ecefd3":{"message":"Updating rgw.rgw deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 05 01:15:55 compute-0 podman[220253]: 2025-12-05 01:15:55.672831787 +0000 UTC m=+0.901963382 container died 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Dec 05 01:15:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Dec 05 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 05 01:15:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 05 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 05 01:15:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 05 01:15:55 compute-0 ceph-mon[192914]: 4.17 scrub starts
Dec 05 01:15:55 compute-0 ceph-mon[192914]: 4.17 scrub ok
Dec 05 01:15:55 compute-0 ceph-mon[192914]: osdmap e42: 3 total, 3 up, 3 in
Dec 05 01:15:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 05 01:15:55 compute-0 ceph-mon[192914]: 7.7 scrub starts
Dec 05 01:15:55 compute-0 ceph-mon[192914]: 7.7 scrub ok
Dec 05 01:15:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3381832146' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 05 01:15:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:55 compute-0 systemd[1]: libpod-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope: Deactivated successfully.
Dec 05 01:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa-merged.mount: Deactivated successfully.
Dec 05 01:15:55 compute-0 podman[220253]: 2025-12-05 01:15:55.932599475 +0000 UTC m=+1.161731080 container remove 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:15:55 compute-0 systemd[1]: libpod-conmon-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope: Deactivated successfully.
Dec 05 01:15:55 compute-0 systemd[1]: Reloading.
Dec 05 01:15:55 compute-0 sudo[220226]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v114: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:56 compute-0 systemd-sysv-generator[220463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:15:56 compute-0 systemd-rc-local-generator[220459]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:15:56 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 11 completed events
Dec 05 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:15:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:56 compute-0 ceph-mgr[193209]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 05 01:15:56 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ksxtqc for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec 05 01:15:56 compute-0 sudo[220546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsappthchimynscufgbjjpqvqoepecp ; /usr/bin/python3'
Dec 05 01:15:56 compute-0 sudo[220546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.813710566 +0000 UTC m=+0.071075764 container create a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 05 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 05 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:56 compute-0 ceph-mon[192914]: 4.19 deep-scrub starts
Dec 05 01:15:56 compute-0 ceph-mon[192914]: 4.19 deep-scrub ok
Dec 05 01:15:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 05 01:15:56 compute-0 ceph-mon[192914]: osdmap e43: 3 total, 3 up, 3 in
Dec 05 01:15:56 compute-0 ceph-mon[192914]: pgmap v114: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:15:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.781484213 +0000 UTC m=+0.038849401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ksxtqc supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 05 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec 05 01:15:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.90942199 +0000 UTC m=+0.166787248 container init a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:15:56 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 05 01:15:56 compute-0 python3[220553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.935779486 +0000 UTC m=+0.193144694 container start a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:15:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 44 pg[9.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:56 compute-0 bash[220534]: a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e
Dec 05 01:15:56 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 05 01:15:56 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ksxtqc for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec 05 01:15:56 compute-0 ceph-mds[220561]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:15:56 compute-0 ceph-mds[220561]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec 05 01:15:56 compute-0 ceph-mds[220561]: main not setting numa affinity
Dec 05 01:15:56 compute-0 ceph-mds[220561]: pidfile_write: ignore empty --pid-file
Dec 05 01:15:57 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc[220557]: starting mds.cephfs.compute-0.ksxtqc at 
Dec 05 01:15:57 compute-0 sudo[220225]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 2 from mon.0
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] as mds.0
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ksxtqc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 new map
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 print_map
                                            e3
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        3
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-05T01:15:33.603075+0000
                                            modified        2025-12-05T01:15:57.040449+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14271}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.ksxtqc{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 3 from mon.0
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map i am now mds.0.3
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x1
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x100
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x600
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x601
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x602
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x603
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x604
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x605
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x606
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x607
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x608
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x609
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:boot
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:creating}
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.051392043 +0000 UTC m=+0.074438355 container create 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ksxtqc"} v 0) v1
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ksxtqc"}]: dispatch
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 all = 0
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1))
Dec 05 01:15:57 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 05 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 creating_done
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ksxtqc is now active in filesystem cephfs as rank 0
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.018467061 +0000 UTC m=+0.041513353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:57 compute-0 systemd[1]: Started libpod-conmon-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope.
Dec 05 01:15:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.219463765 +0000 UTC m=+0.242510057 container init 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:15:57 compute-0 sudo[220605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:57 compute-0 sudo[220605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 sudo[220605]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.234067997 +0000 UTC m=+0.257114289 container start 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.241426014 +0000 UTC m=+0.264472316 container attach 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:15:57 compute-0 sudo[220634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:15:57 compute-0 sudo[220634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 sudo[220634]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 sudo[220659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:57 compute-0 sudo[220659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 sudo[220659]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 sudo[220684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:57 compute-0 sudo[220684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 sudo[220684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 sudo[220710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:57 compute-0 sudo[220710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 sudo[220710]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:57 compute-0 sudo[220753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:15:57 compute-0 sudo[220753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775645715' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:15:57 compute-0 epic_albattani[220612]: 
Dec 05 01:15:57 compute-0 epic_albattani[220612]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.umynax","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 05 01:15:57 compute-0 systemd[1]: libpod-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope: Deactivated successfully.
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.84047769 +0000 UTC m=+0.863523992 container died 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 05 01:15:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 05 01:15:57 compute-0 ceph-mon[192914]: osdmap e44: 3 total, 3 up, 3 in
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 05 01:15:57 compute-0 ceph-mon[192914]: 7.b scrub starts
Dec 05 01:15:57 compute-0 ceph-mon[192914]: 7.b scrub ok
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: daemon mds.cephfs.compute-0.ksxtqc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 05 01:15:57 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:boot
Dec 05 01:15:57 compute-0 ceph-mon[192914]: fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:creating}
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ksxtqc"}]: dispatch
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:57 compute-0 ceph-mon[192914]: daemon mds.cephfs.compute-0.ksxtqc is now active in filesystem cephfs as rank 0
Dec 05 01:15:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2775645715' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 05 01:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2-merged.mount: Deactivated successfully.
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 05 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 05 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.935317811 +0000 UTC m=+0.958364063 container remove 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:15:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:15:57 compute-0 systemd[1]: libpod-conmon-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope: Deactivated successfully.
Dec 05 01:15:57 compute-0 sudo[220546]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:58 compute-0 podman[220781]: 2025-12-05 01:15:58.030215213 +0000 UTC m=+0.155610669 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:15:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v117: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e4 new map
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e4 print_map
                                            e4
                                            enable_multiple, ever_enabled_multiple: 1,1
                                            default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            legacy client fscid: 1
                                             
                                            Filesystem 'cephfs' (1)
                                            fs_name        cephfs
                                            epoch        4
                                            flags        12 joinable allow_snaps allow_multimds_snaps
                                            created        2025-12-05T01:15:33.603075+0000
                                            modified        2025-12-05T01:15:58.103370+0000
                                            tableserver        0
                                            root        0
                                            session_timeout        60
                                            session_autoclose        300
                                            max_file_size        1099511627776
                                            max_xattr_size        65536
                                            required_client_features        {}
                                            last_failure        0
                                            last_failure_osd_epoch        0
                                            compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                            max_mds        1
                                            in        0
                                            up        {0=14271}
                                            failed        
                                            damaged        
                                            stopped        
                                            data_pools        [7]
                                            metadata_pool        6
                                            inline_data        disabled
                                            balancer        
                                            bal_rank_mask        -1
                                            standby_count_wanted        0
                                            [mds.cephfs.compute-0.ksxtqc{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] compat {c=[1],r=[1],i=[7ff]}]
                                             
                                             
Dec 05 01:15:58 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 4 from mon.0
Dec 05 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map i am now mds.0.3
Dec 05 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map state change up:creating --> up:active
Dec 05 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 recovery_done -- successful recovery!
Dec 05 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 active_start
Dec 05 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:active
Dec 05 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:active}
Dec 05 01:15:58 compute-0 podman[220882]: 2025-12-05 01:15:58.487990695 +0000 UTC m=+0.104228343 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:15:58 compute-0 podman[220882]: 2025-12-05 01:15:58.622450357 +0000 UTC m=+0.238688005 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:15:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 05 01:15:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 05 01:15:58 compute-0 sudo[220966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfyckobwkskfwooasvffncjpjsbrbia ; /usr/bin/python3'
Dec 05 01:15:58 compute-0 sudo[220966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:15:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 05 01:15:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 05 01:15:58 compute-0 python3[220973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 05 01:15:58 compute-0 ceph-mon[192914]: 5.b scrub starts
Dec 05 01:15:58 compute-0 ceph-mon[192914]: 5.b scrub ok
Dec 05 01:15:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 05 01:15:58 compute-0 ceph-mon[192914]: osdmap e45: 3 total, 3 up, 3 in
Dec 05 01:15:58 compute-0 ceph-mon[192914]: pgmap v117: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:active
Dec 05 01:15:58 compute-0 ceph-mon[192914]: fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:active}
Dec 05 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 05 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 05 01:15:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.068547677 +0000 UTC m=+0.072037791 container create f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:15:59 compute-0 systemd[1]: Started libpod-conmon-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope.
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.040950758 +0000 UTC m=+0.044440912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:15:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.197251984 +0000 UTC m=+0.200742108 container init f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.207038866 +0000 UTC m=+0.210528970 container start f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.212114182 +0000 UTC m=+0.215604306 container attach f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:15:59 compute-0 sudo[220753]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ffb2308-ca2f-4984-94fd-edfcb3085c21 does not exist
Dec 05 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 952f3be6-a194-4a10-824b-5467be011e24 does not exist
Dec 05 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 681b0222-2a70-41bf-a041-12c7b9b44e8c does not exist
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:15:59 compute-0 sudo[221083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:59 compute-0 sudo[221083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:59 compute-0 sudo[221083]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:59 compute-0 podman[158197]: time="2025-12-05T01:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:15:59 compute-0 sudo[221125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:15:59 compute-0 sudo[221125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34299 "" "Go-http-client/1.1"
Dec 05 01:15:59 compute-0 sudo[221125]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7196 "" "Go-http-client/1.1"
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196859533' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 05 01:15:59 compute-0 condescending_wing[221026]: mimic
Dec 05 01:15:59 compute-0 systemd[1]: libpod-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope: Deactivated successfully.
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.829798897 +0000 UTC m=+0.833289001 container died f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:15:59 compute-0 sudo[221150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:15:59 compute-0 sudo[221150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:59 compute-0 sudo[221150]: pam_unix(sudo:session): session closed for user root
Dec 05 01:15:59 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988-merged.mount: Deactivated successfully.
Dec 05 01:15:59 compute-0 sudo[221187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 05 01:15:59 compute-0 sudo[221187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 05 01:15:59 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 05 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.995616269 +0000 UTC m=+0.999106373 container remove f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:00 compute-0 ceph-mon[192914]: 4.1d scrub starts
Dec 05 01:16:00 compute-0 ceph-mon[192914]: 4.1d scrub ok
Dec 05 01:16:00 compute-0 ceph-mon[192914]: 7.d scrub starts
Dec 05 01:16:00 compute-0 ceph-mon[192914]: 7.d scrub ok
Dec 05 01:16:00 compute-0 ceph-mon[192914]: osdmap e46: 3 total, 3 up, 3 in
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3196859533' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 05 01:16:00 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:00 compute-0 systemd[1]: libpod-conmon-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope: Deactivated successfully.
Dec 05 01:16:00 compute-0 sudo[220966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v120: 196 pgs: 2 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 5 op/s
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.511082097 +0000 UTC m=+0.078568536 container create f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.48431109 +0000 UTC m=+0.051797539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:00 compute-0 systemd[1]: Started libpod-conmon-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope.
Dec 05 01:16:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.660744606 +0000 UTC m=+0.228231035 container init f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.679610461 +0000 UTC m=+0.247096860 container start f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.684343868 +0000 UTC m=+0.251830337 container attach f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:00 compute-0 heuristic_mccarthy[221281]: 167 167
Dec 05 01:16:00 compute-0 systemd[1]: libpod-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope: Deactivated successfully.
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.69149055 +0000 UTC m=+0.258977009 container died f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4caa69792e8a755c786529ffff40b8821773cd01441bfd1bb74e3cff5829db51-merged.mount: Deactivated successfully.
Dec 05 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.780032131 +0000 UTC m=+0.347518560 container remove f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:16:00 compute-0 systemd[1]: libpod-conmon-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope: Deactivated successfully.
Dec 05 01:16:00 compute-0 sudo[221321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrubuivkhposzzwzzfjgupeqkpaavuq ; /usr/bin/python3'
Dec 05 01:16:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 05 01:16:00 compute-0 sudo[221321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 05 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 05 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 05 01:16:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 05 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 05 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 01:16:01 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 05 01:16:01 compute-0 ceph-mon[192914]: osdmap e47: 3 total, 3 up, 3 in
Dec 05 01:16:01 compute-0 ceph-mon[192914]: pgmap v120: 196 pgs: 2 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 5 op/s
Dec 05 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.041426573 +0000 UTC m=+0.079704056 container create 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:16:01 compute-0 python3[221323]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:16:01 compute-0 systemd[1]: Started libpod-conmon-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope.
Dec 05 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.012755085 +0000 UTC m=+0.051032588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.193373454 +0000 UTC m=+0.075289098 container create 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.203393142 +0000 UTC m=+0.241670665 container init 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.220817519 +0000 UTC m=+0.259094982 container start 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.224770835 +0000 UTC m=+0.263048298 container attach 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:16:01 compute-0 systemd[1]: Started libpod-conmon-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope.
Dec 05 01:16:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.167002017 +0000 UTC m=+0.048917701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.292782986 +0000 UTC m=+0.174698680 container init 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:16:01 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 12 completed events
Dec 05 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.310518232 +0000 UTC m=+0.192433876 container start 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:16:01 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7ef99225-ce26-48d9-bfcb-d0db672cf464 (Global Recovery Event) in 5 seconds
Dec 05 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.316195634 +0000 UTC m=+0.198111278 container attach 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:16:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 05 01:16:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 05 01:16:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 05 01:16:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 05 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec 05 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585021271' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 05 01:16:01 compute-0 festive_boyd[221362]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Dec 05 01:16:01 compute-0 systemd[1]: libpod-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope: Deactivated successfully.
Dec 05 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 05 01:16:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 05 01:16:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 05 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 05 01:16:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:02 compute-0 ceph-mon[192914]: 5.d scrub starts
Dec 05 01:16:02 compute-0 ceph-mon[192914]: 5.d scrub ok
Dec 05 01:16:02 compute-0 ceph-mon[192914]: osdmap e48: 3 total, 3 up, 3 in
Dec 05 01:16:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 05 01:16:02 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3585021271' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 05 01:16:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 05 01:16:02 compute-0 ceph-mon[192914]: osdmap e49: 3 total, 3 up, 3 in
Dec 05 01:16:02 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:02 compute-0 podman[221392]: 2025-12-05 01:16:02.069204569 +0000 UTC m=+0.055639116 container died 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Dec 05 01:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d-merged.mount: Deactivated successfully.
Dec 05 01:16:02 compute-0 podman[221392]: 2025-12-05 01:16:02.132971457 +0000 UTC m=+0.119405984 container remove 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:16:02 compute-0 systemd[1]: libpod-conmon-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope: Deactivated successfully.
Dec 05 01:16:02 compute-0 sudo[221321]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:02 compute-0 flamboyant_bhabha[221343]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:16:02 compute-0 flamboyant_bhabha[221343]: --> relative data size: 1.0
Dec 05 01:16:02 compute-0 flamboyant_bhabha[221343]: --> All data devices are unavailable
Dec 05 01:16:02 compute-0 systemd[1]: libpod-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Deactivated successfully.
Dec 05 01:16:02 compute-0 systemd[1]: libpod-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Consumed 1.182s CPU time.
Dec 05 01:16:02 compute-0 podman[221329]: 2025-12-05 01:16:02.497036209 +0000 UTC m=+1.535313772 container died 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40-merged.mount: Deactivated successfully.
Dec 05 01:16:02 compute-0 podman[221329]: 2025-12-05 01:16:02.842750682 +0000 UTC m=+1.881028185 container remove 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:02 compute-0 podman[221426]: 2025-12-05 01:16:02.855358252 +0000 UTC m=+0.315869157 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:16:02 compute-0 sudo[221187]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:02 compute-0 systemd[1]: libpod-conmon-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Deactivated successfully.
Dec 05 01:16:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 05 01:16:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 05 01:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 05 01:16:03 compute-0 sudo[221465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 01:16:03 compute-0 sudo[221465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 05 01:16:03 compute-0 sudo[221465]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 05 01:16:03 compute-0 ceph-mon[192914]: 4.1e scrub starts
Dec 05 01:16:03 compute-0 ceph-mon[192914]: 4.1e scrub ok
Dec 05 01:16:03 compute-0 ceph-mon[192914]: 5.e scrub starts
Dec 05 01:16:03 compute-0 ceph-mon[192914]: 5.e scrub ok
Dec 05 01:16:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 05 01:16:03 compute-0 ceph-mon[192914]: pgmap v123: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Dec 05 01:16:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 05 01:16:03 compute-0 ceph-mon[192914]: osdmap e50: 3 total, 3 up, 3 in
Dec 05 01:16:03 compute-0 sudo[221490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:03 compute-0 sudo[221490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:03 compute-0 sudo[221490]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:03 compute-0 sudo[221515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:03 compute-0 sudo[221515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:03 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax[220061]: 2025-12-05T01:16:03.258+0000 7fb96d85d940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 05 01:16:03 compute-0 radosgw[220065]: LDAP not started since no server URIs were provided in the configuration.
Dec 05 01:16:03 compute-0 sudo[221515]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:03 compute-0 radosgw[220065]: framework: beast
Dec 05 01:16:03 compute-0 radosgw[220065]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 05 01:16:03 compute-0 radosgw[220065]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 05 01:16:03 compute-0 radosgw[220065]: starting handler: beast
Dec 05 01:16:03 compute-0 radosgw[220065]: set uid:gid to 167:167 (ceph:ceph)
Dec 05 01:16:03 compute-0 radosgw[220065]: mgrc service_daemon_register rgw.14277 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.umynax,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=95d46763-35cf-41c2-8ad8-575beccf8981,zone_name=default,zonegroup_id=67b9f8de-8a42-4509-9f7a-8c2563510693,zonegroup_name=default}
Dec 05 01:16:03 compute-0 sudo[221567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:16:03 compute-0 sudo[221567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.80735656 +0000 UTC m=+0.080775850 container create e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:16:03 compute-0 systemd[1]: Started libpod-conmon-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope.
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.785391621 +0000 UTC m=+0.058810931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.935705998 +0000 UTC m=+0.209125378 container init e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.950961251 +0000 UTC m=+0.224380541 container start e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.954993343 +0000 UTC m=+0.228412633 container attach e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:16:03 compute-0 gallant_edison[222158]: 167 167
Dec 05 01:16:03 compute-0 systemd[1]: libpod-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope: Deactivated successfully.
Dec 05 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.965406192 +0000 UTC m=+0.238825552 container died e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:16:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 05 01:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a678252b80a9c7108da7e8b4cd8946381b58bf7fb7af713a673000abca827023-merged.mount: Deactivated successfully.
Dec 05 01:16:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 05 01:16:04 compute-0 podman[222142]: 2025-12-05 01:16:04.046100638 +0000 UTC m=+0.319519928 container remove e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:16:04 compute-0 systemd[1]: libpod-conmon-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope: Deactivated successfully.
Dec 05 01:16:04 compute-0 ceph-mon[192914]: 7.10 scrub starts
Dec 05 01:16:04 compute-0 ceph-mon[192914]: 7.10 scrub ok
Dec 05 01:16:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 3.9 KiB/s wr, 8 op/s
Dec 05 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.299553294 +0000 UTC m=+0.079253538 container create 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.268577555 +0000 UTC m=+0.048277839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:04 compute-0 systemd[1]: Started libpod-conmon-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope.
Dec 05 01:16:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
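[annotation] The four kernel warnings above fire as podman bind-mounts paths into the new container: these XFS filesystems were apparently created without the extended-timestamp (bigtime) feature, so their inode timestamps top out at 0x7fffffff seconds after the epoch. A quick check of what that limit means in calendar terms (plain Python, nothing taken from the log):

    import datetime

    # 0x7fffffff is the largest 32-bit signed epoch timestamp, the classic
    # "year 2038" boundary the kernel message refers to.
    limit = datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00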
Dec 05 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.456671829 +0000 UTC m=+0.236372123 container init 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.485699314 +0000 UTC m=+0.265399548 container start 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.490058415 +0000 UTC m=+0.269758649 container attach 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:16:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 05 01:16:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 05 01:16:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 05 01:16:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 05 01:16:05 compute-0 ceph-mon[192914]: 7.12 scrub starts
Dec 05 01:16:05 compute-0 ceph-mon[192914]: 7.12 scrub ok
Dec 05 01:16:05 compute-0 ceph-mon[192914]: pgmap v125: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 3.9 KiB/s wr, 8 op/s
Dec 05 01:16:05 compute-0 ceph-mon[192914]: 4.1f scrub starts
Dec 05 01:16:05 compute-0 ceph-mon[192914]: 4.1f scrub ok
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]: {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     "0": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "devices": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "/dev/loop3"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             ],
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_name": "ceph_lv0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_size": "21470642176",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "name": "ceph_lv0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "tags": {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.crush_device_class": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.encrypted": "0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_id": "0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.vdo": "0"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             },
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "vg_name": "ceph_vg0"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         }
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     ],
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     "1": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "devices": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "/dev/loop4"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             ],
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_name": "ceph_lv1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_size": "21470642176",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "name": "ceph_lv1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "tags": {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.crush_device_class": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.encrypted": "0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_id": "1",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.vdo": "0"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             },
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "vg_name": "ceph_vg1"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         }
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     ],
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     "2": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "devices": [
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "/dev/loop5"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             ],
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_name": "ceph_lv2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_size": "21470642176",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "name": "ceph_lv2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "tags": {
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.crush_device_class": "",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.encrypted": "0",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osd_id": "2",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:                 "ceph.vdo": "0"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             },
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "type": "block",
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:             "vg_name": "ceph_vg2"
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:         }
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]:     ]
Dec 05 01:16:05 compute-0 mystifying_brattain[222196]: }
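[annotation] The JSON block printed by mystifying_brattain has the shape of `ceph-volume lvm list --format json` output: one key per OSD id, each mapping to the logical volume(s) backing that OSD, with the ceph.* metadata duplicated in lv_tags (flat string) and tags (parsed object). A minimal sketch for extracting the OSD-to-device mapping, assuming the block has been captured to lvm_list.json (a hypothetical path, not something the deployment writes):

    import json

    with open("lvm_list.json") as fh:   # hypothetical capture of the block above
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} backing={','.join(lv['devices'])}")

As a sanity check, the three 21470642176-byte LVs (~20 GiB each) account for the "60 GiB / 60 GiB avail" reported in the pgmap lines nearby.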
Dec 05 01:16:05 compute-0 systemd[1]: libpod-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope: Deactivated successfully.
Dec 05 01:16:05 compute-0 podman[222180]: 2025-12-05 01:16:05.320737902 +0000 UTC m=+1.100438176 container died 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405-merged.mount: Deactivated successfully.
Dec 05 01:16:05 compute-0 podman[222180]: 2025-12-05 01:16:05.393091247 +0000 UTC m=+1.172791481 container remove 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:05 compute-0 systemd[1]: libpod-conmon-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope: Deactivated successfully.
Dec 05 01:16:05 compute-0 sudo[221567]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:05 compute-0 sudo[222216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:05 compute-0 sudo[222216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:05 compute-0 sudo[222216]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:05 compute-0 sudo[222241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:05 compute-0 sudo[222241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:05 compute-0 sudo[222241]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:05 compute-0 sudo[222266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:05 compute-0 sudo[222266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:05 compute-0 sudo[222266]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:05 compute-0 sudo[222291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:16:05 compute-0 sudo[222291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 05 01:16:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 05 01:16:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Dec 05 01:16:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 1 active+clean+scrubbing, 1 creating+peering, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 5 op/s
Dec 05 01:16:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Dec 05 01:16:06 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 13 completed events
Dec 05 01:16:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.330632536 +0000 UTC m=+0.047053705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:06 compute-0 ceph-mon[192914]: 7.14 scrub starts
Dec 05 01:16:06 compute-0 ceph-mon[192914]: 7.14 scrub ok
Dec 05 01:16:06 compute-0 ceph-mon[192914]: 6.3 scrub starts
Dec 05 01:16:06 compute-0 ceph-mon[192914]: 6.3 scrub ok
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.536860853 +0000 UTC m=+0.253282012 container create 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:16:06 compute-0 systemd[1]: Started libpod-conmon-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope.
Dec 05 01:16:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.943531324 +0000 UTC m=+0.659952493 container init 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.95670751 +0000 UTC m=+0.673128649 container start 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.962033837 +0000 UTC m=+0.678455016 container attach 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:16:06 compute-0 funny_nightingale[222373]: 167 167
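[annotation] funny_nightingale's entire output is "167 167", consistent with cephadm's uid/gid probe: it runs a throwaway container from the target image to learn which uid:gid the ceph user maps to, so host directories and daemon containers can be owned consistently. 167:167 is the reserved ceph allocation on Fedora/RHEL-family systems; a hedged sketch of the same lookup, assuming a host or image where the ceph user exists:

    import grp
    import pwd

    # Mirrors the "167 167" probe output above on RHEL-family systems,
    # where uid/gid 167 is reserved for the ceph user and group.
    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)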
Dec 05 01:16:06 compute-0 systemd[1]: libpod-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope: Deactivated successfully.
Dec 05 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.972067236 +0000 UTC m=+0.688488385 container died 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7ed388d8b1b73c5642f2a925864094ba660af63823941e89c103d119f6f15a-merged.mount: Deactivated successfully.
Dec 05 01:16:07 compute-0 podman[222356]: 2025-12-05 01:16:07.039063873 +0000 UTC m=+0.755485012 container remove 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:07 compute-0 systemd[1]: libpod-conmon-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope: Deactivated successfully.
Dec 05 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.30811376 +0000 UTC m=+0.072731236 container create 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:16:07 compute-0 systemd[1]: Started libpod-conmon-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope.
Dec 05 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.283523369 +0000 UTC m=+0.048140875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:07 compute-0 ceph-mon[192914]: 7.16 deep-scrub starts
Dec 05 01:16:07 compute-0 ceph-mon[192914]: pgmap v126: 197 pgs: 1 active+clean+scrubbing, 1 creating+peering, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 5 op/s
Dec 05 01:16:07 compute-0 ceph-mon[192914]: 7.16 deep-scrub ok
Dec 05 01:16:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.444818739 +0000 UTC m=+0.209436275 container init 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.463458576 +0000 UTC m=+0.228076082 container start 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.469608867 +0000 UTC m=+0.234226383 container attach 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:16:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.8 KiB/s wr, 228 op/s
Dec 05 01:16:08 compute-0 ceph-mon[192914]: pgmap v127: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.8 KiB/s wr, 228 op/s
Dec 05 01:16:08 compute-0 focused_einstein[222412]: {
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_id": 0,
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "type": "bluestore"
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     },
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_id": 1,
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "type": "bluestore"
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     },
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_id": 2,
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:16:08 compute-0 focused_einstein[222412]:         "type": "bluestore"
Dec 05 01:16:08 compute-0 focused_einstein[222412]:     }
Dec 05 01:16:08 compute-0 focused_einstein[222412]: }
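[annotation] focused_einstein prints the companion view requested by the sudo command logged at 01:16:05 (`ceph-volume ... raw list --format json`): the same three bluestore OSDs, but keyed by osd_uuid rather than OSD id, with each device given as its device-mapper path. A sketch that cross-checks the two listings against each other, reusing the hypothetical capture files from above plus raw_list.json for this block:

    import json

    with open("lvm_list.json") as fh:   # hypothetical captures of the two
        by_id = json.load(fh)           # JSON blocks printed in this log
    with open("raw_list.json") as fh:
        by_uuid = json.load(fh)

    for osd_uuid, info in by_uuid.items():
        lvs = by_id[str(info["osd_id"])]
        # Every raw entry should match an LV tagged with the same osd_fsid.
        assert any(lv["tags"]["ceph.osd_fsid"] == osd_uuid for lv in lvs)
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")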
Dec 05 01:16:08 compute-0 podman[222396]: 2025-12-05 01:16:08.611235513 +0000 UTC m=+1.375853029 container died 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:16:08 compute-0 systemd[1]: libpod-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Deactivated successfully.
Dec 05 01:16:08 compute-0 systemd[1]: libpod-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Consumed 1.149s CPU time.
Dec 05 01:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0-merged.mount: Deactivated successfully.
Dec 05 01:16:08 compute-0 podman[222396]: 2025-12-05 01:16:08.719112793 +0000 UTC m=+1.483730309 container remove 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:16:08 compute-0 systemd[1]: libpod-conmon-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Deactivated successfully.
Dec 05 01:16:08 compute-0 sudo[222291]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:16:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:16:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ad02fcc-60f1-4501-9188-dc8d2ba57528 does not exist
Dec 05 01:16:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 99c36fca-cf95-4237-a8e6-0afc85c77f81 does not exist
Dec 05 01:16:08 compute-0 sudo[222458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:08 compute-0 sudo[222458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:08 compute-0 sudo[222458]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:09 compute-0 sudo[222483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:16:09 compute-0 sudo[222483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:09 compute-0 sudo[222483]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:09 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 05 01:16:09 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 05 01:16:09 compute-0 sudo[222508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:09 compute-0 sudo[222508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:09 compute-0 sudo[222508]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:09 compute-0 sudo[222533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:09 compute-0 sudo[222533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:09 compute-0 sudo[222533]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:09 compute-0 sudo[222558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:09 compute-0 sudo[222558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:09 compute-0 sudo[222558]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:09 compute-0 sudo[222583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:16:09 compute-0 sudo[222583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:09 compute-0 sudo[222633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbzjybcozlkazoxycenvxabrtsisluc ; /usr/bin/python3'
Dec 05 01:16:09 compute-0 sudo[222633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:09 compute-0 ceph-mon[192914]: 7.17 scrub starts
Dec 05 01:16:09 compute-0 ceph-mon[192914]: 7.17 scrub ok
Dec 05 01:16:09 compute-0 python3[222644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.040991166 +0000 UTC m=+0.099538771 container create fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 05 01:16:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 05 01:16:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.0 KiB/s wr, 195 op/s
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.004262578 +0000 UTC m=+0.062810203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:16:10 compute-0 systemd[1]: Started libpod-conmon-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope.
Dec 05 01:16:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.17172697 +0000 UTC m=+0.230274575 container init fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.182036455 +0000 UTC m=+0.240584070 container start fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.188281289 +0000 UTC m=+0.246829014 container attach fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:16:10 compute-0 podman[222780]: 2025-12-05 01:16:10.399787771 +0000 UTC m=+0.107927162 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:10 compute-0 bold_perlman[222690]: could not fetch user info: no user info saved
Dec 05 01:16:10 compute-0 podman[222780]: 2025-12-05 01:16:10.499868935 +0000 UTC m=+0.208008326 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:16:10 compute-0 systemd[1]: libpod-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope: Deactivated successfully.
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.552041791 +0000 UTC m=+0.610589436 container died fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1-merged.mount: Deactivated successfully.
Dec 05 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.636393139 +0000 UTC m=+0.694940754 container remove fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:16:10 compute-0 systemd[1]: libpod-conmon-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope: Deactivated successfully.
Dec 05 01:16:10 compute-0 sudo[222633]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:10 compute-0 ceph-mon[192914]: 7.19 scrub starts
Dec 05 01:16:10 compute-0 ceph-mon[192914]: 7.19 scrub ok
Dec 05 01:16:10 compute-0 ceph-mon[192914]: pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.0 KiB/s wr, 195 op/s
Dec 05 01:16:10 compute-0 sudo[222911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oceuajhwnxtjderhiwoaagxzkkzlnocl ; /usr/bin/python3'
Dec 05 01:16:10 compute-0 sudo[222911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:11 compute-0 python3[222922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:16:11 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Dec 05 01:16:11 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.161622938 +0000 UTC m=+0.075673439 container create f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:11 compute-0 systemd[1]: Started libpod-conmon-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope.
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.13065268 +0000 UTC m=+0.044703201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 05 01:16:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.317637963 +0000 UTC m=+0.231688524 container init f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.328809762 +0000 UTC m=+0.242860263 container start f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.343028536 +0000 UTC m=+0.257079087 container attach f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:16:11 compute-0 sudo[222583]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:16:11 compute-0 lucid_bose[222977]: {
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "user_id": "openstack",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "display_name": "openstack",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "email": "",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "suspended": 0,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "max_buckets": 1000,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "subusers": [],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "keys": [
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         {
Dec 05 01:16:11 compute-0 lucid_bose[222977]:             "user": "openstack",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:             "access_key": "D49DRQMAOR3T1P2P12S7",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:             "secret_key": "DUkrPGeChDkks0mQE8eW45wrhj9VE5DM73LT72oz"
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         }
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     ],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "swift_keys": [],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "caps": [],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "op_mask": "read, write, delete",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "default_placement": "",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "default_storage_class": "",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "placement_tags": [],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "bucket_quota": {
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "enabled": false,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "check_on_raw": false,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_size": -1,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_size_kb": 0,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_objects": -1
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     },
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "user_quota": {
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "enabled": false,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "check_on_raw": false,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_size": -1,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_size_kb": 0,
Dec 05 01:16:11 compute-0 lucid_bose[222977]:         "max_objects": -1
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     },
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "temp_url_keys": [],
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "type": "rgw",
Dec 05 01:16:11 compute-0 lucid_bose[222977]:     "mfa_ids": []
Dec 05 01:16:11 compute-0 lucid_bose[222977]: }
Dec 05 01:16:11 compute-0 lucid_bose[222977]: 
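The JSON block above is radosgw-admin output for the "openstack" RGW user, emitted by the short-lived lucid_bose cephadm container. A minimal sketch for pulling the S3 credentials out of that payload, assuming it has been saved locally as user_info.json (a hypothetical file name; the field names match the logged output):

    import json

    # Parse the radosgw-admin user JSON captured in the log above.
    with open("user_info.json") as f:
        user = json.load(f)

    # "keys" holds one entry per S3 key pair, as in the logged output.
    for key in user["keys"]:
        print(key["user"], key["access_key"], key["secret_key"])
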
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b7f246a2-4f7b-481e-af96-fd227bb0ce22 does not exist
Dec 05 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d2666066-421e-4801-ab4e-d1da52f1042f does not exist
Dec 05 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d9f7f5c-ceae-4484-8597-31e33157afc2 does not exist
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
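The audited mon commands above (config generate-minimal-conf, auth get) are what the cephadm mgr module uses to assemble the ceph.conf and keyrings it copies to managed hosts. They have direct CLI equivalents; a sketch, assuming a working admin keyring on the node:

    import subprocess

    # Fetch the same minimal ceph.conf the mgr just requested from the mon.
    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(conf)
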
Dec 05 01:16:11 compute-0 systemd[1]: libpod-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope: Deactivated successfully.
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.659253482 +0000 UTC m=+0.573303983 container died f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec-merged.mount: Deactivated successfully.
Dec 05 01:16:11 compute-0 sudo[223095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:11 compute-0 sudo[223095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:11 compute-0 sudo[223095]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.704792905 +0000 UTC m=+0.618843396 container remove f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:11 compute-0 systemd[1]: libpod-conmon-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope: Deactivated successfully.
Dec 05 01:16:11 compute-0 sudo[222911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:11 compute-0 sudo[223134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:11 compute-0 sudo[223134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:11 compute-0 sudo[223134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:11 compute-0 ceph-mon[192914]: 7.1d deep-scrub starts
Dec 05 01:16:11 compute-0 ceph-mon[192914]: 7.1d deep-scrub ok
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:16:11 compute-0 sudo[223159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:11 compute-0 sudo[223159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:11 compute-0 sudo[223159]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:11 compute-0 sudo[223184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:16:11 compute-0 sudo[223184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
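The sudo COMMAND above is cephadm driving ceph-volume lvm batch over three pre-created logical volumes, with CEPH_VOLUME_OSDSPEC_AFFINITY pinning the resulting OSDs to the default_drive_group spec. A sketch of the equivalent direct call, with the LV paths and flags taken verbatim from the log (destructive against a live cluster; illustration only):

    import subprocess

    lvs = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]

    # The same batch invocation cephadm issued inside the container.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--yes", "--no-systemd"],
        check=True,
    )
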
Dec 05 01:16:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.4 KiB/s wr, 159 op/s
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.491541143 +0000 UTC m=+0.083155437 container create 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.443388439 +0000 UTC m=+0.035002812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:12 compute-0 systemd[1]: Started libpod-conmon-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope.
Dec 05 01:16:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.597565282 +0000 UTC m=+0.189179575 container init 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.615776207 +0000 UTC m=+0.207390520 container start 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:16:12 compute-0 youthful_williamson[223264]: 167 167
Dec 05 01:16:12 compute-0 systemd[1]: libpod-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope: Deactivated successfully.
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.623504892 +0000 UTC m=+0.215119255 container attach 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.626101964 +0000 UTC m=+0.217716297 container died 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68cd4b69316e6954e0c5f0dc7e924ec972d7d095196d1e5ec006da85c412189-merged.mount: Deactivated successfully.
Dec 05 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.699231791 +0000 UTC m=+0.290846114 container remove 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:12 compute-0 systemd[1]: libpod-conmon-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope: Deactivated successfully.
Dec 05 01:16:12 compute-0 ceph-mon[192914]: pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.4 KiB/s wr, 159 op/s
Dec 05 01:16:12 compute-0 podman[223287]: 2025-12-05 01:16:12.993345044 +0000 UTC m=+0.093932455 container create cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:12.956702988 +0000 UTC m=+0.057290369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:13 compute-0 systemd[1]: Started libpod-conmon-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope.
Dec 05 01:16:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
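The recurring xfs warnings note that these overlay mounts use 32-bit inode timestamps capped at 0x7fffffff, the classic y2038 limit. A quick check of what that cap decodes to:

    from datetime import datetime, timezone

    # 0x7fffffff seconds since the epoch is the 32-bit time_t ceiling.
    limit = 0x7FFFFFFF  # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
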
Dec 05 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.162277106 +0000 UTC m=+0.262864547 container init cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.173077076 +0000 UTC m=+0.273664477 container start cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.179178755 +0000 UTC m=+0.279766216 container attach cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:16:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.1 KiB/s wr, 144 op/s
Dec 05 01:16:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 05 01:16:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 05 01:16:14 compute-0 dreamy_stonebraker[223302]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:16:14 compute-0 dreamy_stonebraker[223302]: --> relative data size: 1.0
Dec 05 01:16:14 compute-0 dreamy_stonebraker[223302]: --> All data devices are unavailable
Dec 05 01:16:14 compute-0 systemd[1]: libpod-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Deactivated successfully.
Dec 05 01:16:14 compute-0 systemd[1]: libpod-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Consumed 1.317s CPU time.
Dec 05 01:16:14 compute-0 podman[223331]: 2025-12-05 01:16:14.645996554 +0000 UTC m=+0.065045234 container died cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd-merged.mount: Deactivated successfully.
Dec 05 01:16:14 compute-0 podman[223331]: 2025-12-05 01:16:14.76848821 +0000 UTC m=+0.187536810 container remove cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:14 compute-0 systemd[1]: libpod-conmon-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Deactivated successfully.
Dec 05 01:16:14 compute-0 sudo[223184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:15 compute-0 sudo[223343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:15 compute-0 sudo[223343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:15 compute-0 sudo[223343]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:15 compute-0 sudo[223368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:15 compute-0 sudo[223368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:15 compute-0 ceph-mon[192914]: pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.1 KiB/s wr, 144 op/s
Dec 05 01:16:15 compute-0 ceph-mon[192914]: 7.1e scrub starts
Dec 05 01:16:15 compute-0 ceph-mon[192914]: 7.1e scrub ok
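The scrub lines are the mon re-logging OSD cluster-log events for placement groups such as 7.1d and 7.1e. The same scrubs can be requested by hand; a sketch using the PG ids from this log (assumes an admin keyring is available):

    import subprocess

    # Ask the named PGs to scrub. "deep-scrub" reads and checksums object
    # data; plain "scrub" (also seen in the log) compares metadata only.
    for pgid in ["7.1d", "7.1e"]:
        subprocess.run(["ceph", "pg", "deep-scrub", pgid], check=True)
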
Dec 05 01:16:15 compute-0 sudo[223368]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:15 compute-0 sudo[223393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:15 compute-0 sudo[223393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:15 compute-0 sudo[223393]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:15 compute-0 sudo[223418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:16:15 compute-0 sudo[223418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.039355519 +0000 UTC m=+0.094326266 container create db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.003789833 +0000 UTC m=+0.058760620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:16:16
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 133 op/s
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
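The balancer round above ran in upmap mode with a max-misplaced ratio of 0.05 and prepared 0/10 changes, i.e. no upmap adjustments were needed this pass. Rough arithmetic on what that cap means for this cluster:

    # With 197 PGs and the 0.05 max-misplaced ratio from the mgr log,
    # roughly 9 PGs may be misplaced at once while changes are applied.
    pgs = 197
    max_misplaced = 0.05
    print(int(pgs * max_misplaced))  # 9
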
Dec 05 01:16:16 compute-0 systemd[1]: Started libpod-conmon-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope.
Dec 05 01:16:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.18336137 +0000 UTC m=+0.238332167 container init db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.202709727 +0000 UTC m=+0.257680464 container start db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.209724661 +0000 UTC m=+0.264695398 container attach db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:16:16 compute-0 ecstatic_thompson[223499]: 167 167
Dec 05 01:16:16 compute-0 systemd[1]: libpod-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope: Deactivated successfully.
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.215367978 +0000 UTC m=+0.270338715 container died db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa105965becef1b237f68056e224e8fc5cc92adbb4bdf0fdb847137601fb8f3-merged.mount: Deactivated successfully.
Dec 05 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.29734104 +0000 UTC m=+0.352311777 container remove db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:16:16 compute-0 systemd[1]: libpod-conmon-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope: Deactivated successfully.
Dec 05 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.605562164 +0000 UTC m=+0.091946270 container create 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.569076013 +0000 UTC m=+0.055460149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:16 compute-0 systemd[1]: Started libpod-conmon-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope.
Dec 05 01:16:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.763463931 +0000 UTC m=+0.249848097 container init 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.792963269 +0000 UTC m=+0.279347375 container start 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.799525921 +0000 UTC m=+0.285910017 container attach 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:16:16 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Dec 05 01:16:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Dec 05 01:16:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:17 compute-0 ceph-mon[192914]: pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 133 op/s
Dec 05 01:16:17 compute-0 agitated_jemison[223539]: {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     "0": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "devices": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "/dev/loop3"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             ],
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_name": "ceph_lv0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_size": "21470642176",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "name": "ceph_lv0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "tags": {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.crush_device_class": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.encrypted": "0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_id": "0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.vdo": "0"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             },
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "vg_name": "ceph_vg0"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         }
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     ],
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     "1": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "devices": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "/dev/loop4"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             ],
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_name": "ceph_lv1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_size": "21470642176",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "name": "ceph_lv1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "tags": {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.crush_device_class": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.encrypted": "0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_id": "1",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.vdo": "0"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             },
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "vg_name": "ceph_vg1"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         }
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     ],
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     "2": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "devices": [
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "/dev/loop5"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             ],
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_name": "ceph_lv2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_size": "21470642176",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "name": "ceph_lv2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "tags": {
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.cluster_name": "ceph",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.crush_device_class": "",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.encrypted": "0",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osd_id": "2",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:                 "ceph.vdo": "0"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             },
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "type": "block",
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:             "vg_name": "ceph_vg2"
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:         }
Dec 05 01:16:17 compute-0 agitated_jemison[223539]:     ]
Dec 05 01:16:17 compute-0 agitated_jemison[223539]: }
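This lvm list --format json payload also explains the earlier "All data devices are unavailable" message: each LV already carries ceph.osd_id tags (OSDs 0-2), so the batch run had nothing left to prepare. A sketch that summarizes the payload, assuming it was saved as lvm_list.json (hypothetical name; keys match the logged JSON):

    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    total = 0
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            size = int(lv["lv_size"])
            total += size
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {lv['devices'][0]} ({size / 2**30:.1f} GiB)")

    # Three ~20 GiB LVs, matching the "60 GiB / 60 GiB avail" pgmap lines.
    print(f"total: {total / 2**30:.1f} GiB")
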
Dec 05 01:16:17 compute-0 systemd[1]: libpod-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope: Deactivated successfully.
Dec 05 01:16:17 compute-0 podman[223523]: 2025-12-05 01:16:17.690696083 +0000 UTC m=+1.177080179 container died 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0-merged.mount: Deactivated successfully.
Dec 05 01:16:17 compute-0 podman[223523]: 2025-12-05 01:16:17.788825313 +0000 UTC m=+1.275209379 container remove 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:16:17 compute-0 systemd[1]: libpod-conmon-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope: Deactivated successfully.
Dec 05 01:16:17 compute-0 sudo[223418]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:17 compute-0 sudo[223560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:17 compute-0 sudo[223560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:17 compute-0 sudo[223560]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Dec 05 01:16:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Dec 05 01:16:18 compute-0 sudo[223585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:16:18 compute-0 sudo[223585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:18 compute-0 sudo[223585]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec 05 01:16:18 compute-0 sudo[223610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:18 compute-0 sudo[223610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:18 compute-0 ceph-mon[192914]: 5.10 deep-scrub starts
Dec 05 01:16:18 compute-0 ceph-mon[192914]: 5.10 deep-scrub ok
Dec 05 01:16:18 compute-0 ceph-mon[192914]: 6.5 deep-scrub starts
Dec 05 01:16:18 compute-0 ceph-mon[192914]: 6.5 deep-scrub ok
Dec 05 01:16:18 compute-0 sudo[223610]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:18 compute-0 sudo[223635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:16:18 compute-0 sudo[223635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:19 compute-0 ceph-mon[192914]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.633616341 +0000 UTC m=+0.099295313 container create a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.598346214 +0000 UTC m=+0.064025206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:19 compute-0 systemd[1]: Started libpod-conmon-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope.
Dec 05 01:16:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.789314937 +0000 UTC m=+0.254993909 container init a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.808150049 +0000 UTC m=+0.273829041 container start a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.815697339 +0000 UTC m=+0.281376361 container attach a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:16:19 compute-0 loving_shirley[223716]: 167 167
Dec 05 01:16:19 compute-0 systemd[1]: libpod-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope: Deactivated successfully.
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.8211536 +0000 UTC m=+0.286832572 container died a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f639fe88e0a61952db252562806d46a6b513b07fb847d8377aaf815631eaec71-merged.mount: Deactivated successfully.
Dec 05 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.899490781 +0000 UTC m=+0.365169763 container remove a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:19 compute-0 systemd[1]: libpod-conmon-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope: Deactivated successfully.
Dec 05 01:16:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec 05 01:16:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 05 01:16:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 05 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.198046717 +0000 UTC m=+0.101628118 container create f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.162629406 +0000 UTC m=+0.066210887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:16:20 compute-0 systemd[1]: Started libpod-conmon-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope.
Dec 05 01:16:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:16:20 compute-0 ceph-mon[192914]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec 05 01:16:20 compute-0 ceph-mon[192914]: 5.19 scrub starts
Dec 05 01:16:20 compute-0 ceph-mon[192914]: 5.19 scrub ok
Dec 05 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.399188203 +0000 UTC m=+0.302769674 container init f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 05 01:16:20 compute-0 podman[223754]: 2025-12-05 01:16:20.407956486 +0000 UTC m=+0.144443315 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.420639138 +0000 UTC m=+0.324220579 container start f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.427489978 +0000 UTC m=+0.331071469 container attach f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:16:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 05 01:16:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 05 01:16:21 compute-0 ceph-mon[192914]: 5.18 scrub starts
Dec 05 01:16:21 compute-0 ceph-mon[192914]: 5.18 scrub ok
Dec 05 01:16:21 compute-0 jolly_carson[223765]: {
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_id": 0,
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "type": "bluestore"
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     },
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_id": 1,
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "type": "bluestore"
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     },
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_id": 2,
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:16:21 compute-0 jolly_carson[223765]:         "type": "bluestore"
Dec 05 01:16:21 compute-0 jolly_carson[223765]:     }
Dec 05 01:16:21 compute-0 jolly_carson[223765]: }
Dec 05 01:16:21 compute-0 systemd[1]: libpod-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Deactivated successfully.
Dec 05 01:16:21 compute-0 podman[223740]: 2025-12-05 01:16:21.699564739 +0000 UTC m=+1.603146170 container died f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:16:21 compute-0 systemd[1]: libpod-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Consumed 1.282s CPU time.
Dec 05 01:16:21 compute-0 podman[223803]: 2025-12-05 01:16:21.742588472 +0000 UTC m=+0.141444512 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e-merged.mount: Deactivated successfully.
Dec 05 01:16:21 compute-0 podman[223807]: 2025-12-05 01:16:21.822401624 +0000 UTC m=+0.213053817 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:16:21 compute-0 podman[223740]: 2025-12-05 01:16:21.831669111 +0000 UTC m=+1.735250512 container remove f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:16:21 compute-0 systemd[1]: libpod-conmon-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Deactivated successfully.
Dec 05 01:16:21 compute-0 sudo[223635]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:16:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:16:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 05 01:16:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 05 01:16:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8dbdaefe-90b7-45c4-acd2-53dd119bdf3c does not exist
Dec 05 01:16:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev de32f3ef-18d5-4249-a79c-1298fd87fe83 does not exist
Dec 05 01:16:21 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec 05 01:16:22 compute-0 sudo[223868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:16:22 compute-0 sudo[223868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:22 compute-0 sudo[223868]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec 05 01:16:22 compute-0 sudo[223893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:16:22 compute-0 sudo[223893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:16:22 compute-0 sudo[223893]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:22 compute-0 ceph-mon[192914]: 5.17 scrub starts
Dec 05 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:22 compute-0 ceph-mon[192914]: 5.17 scrub ok
Dec 05 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:22 compute-0 ceph-mon[192914]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec 05 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 05 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 05 01:16:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 05 01:16:22 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 05 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:23 compute-0 podman[223918]: 2025-12-05 01:16:23.719638947 +0000 UTC m=+0.123504565 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 05 01:16:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 05 01:16:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 05 01:16:23 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 05 01:16:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:23 compute-0 ceph-mon[192914]: osdmap e51: 3 total, 3 up, 3 in
Dec 05 01:16:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:16:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 sshd-session[223938]: Accepted publickey for zuul from 192.168.122.30 port 44008 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:16:24 compute-0 systemd-logind[792]: New session 41 of user zuul.
Dec 05 01:16:24 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 05 01:16:24 compute-0 sshd-session[223938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 05 01:16:24 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 05 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec 05 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:24 compute-0 ceph-mon[192914]: osdmap e52: 3 total, 3 up, 3 in
Dec 05 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 ceph-mon[192914]: pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:25 compute-0 podman[224017]: 2025-12-05 01:16:25.733419188 +0000 UTC m=+0.133960484 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-type=git)
Dec 05 01:16:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 05 01:16:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 05 01:16:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 53 pg[9.0( v 50'586 (0'0,50'586] local-lis/les=44/45 n=209 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.930982590s) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 50'585 active pruub 128.905731201s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:26 compute-0 ceph-mon[192914]: osdmap e53: 3 total, 3 up, 3 in
Dec 05 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 05 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 05 01:16:26 compute-0 ceph-mon[192914]: osdmap e54: 3 total, 3 up, 3 in
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 53 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=42/43 n=4 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.848893166s) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 43'3 active pruub 126.824256897s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.0( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.848893166s) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 0'0 unknown pruub 126.824256897s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.11( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.19( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.13( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.12( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.5( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.14( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.16( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.8( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.4( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.3( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.7( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.15( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.17( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.10( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.18( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.6( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.9( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.2( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.0( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.930982590s) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 0'0 unknown pruub 128.905731201s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.7( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.6( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.9( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.8( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.4( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.3( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.5( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.a( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.2( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.b( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.c( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.d( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.e( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.f( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.10( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.11( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.13( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.12( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.14( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.15( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.16( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.17( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.18( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.19( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1a( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1b( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1c( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1d( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1e( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1f( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v140: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Dec 05 01:16:26 compute-0 python3.9[224109]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 17 completed events
Dec 05 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 05 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 05 01:16:27 compute-0 ceph-mon[192914]: pgmap v140: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:16:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 05 01:16:27 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 05 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:27 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 55 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.949882507s) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 50'63 active pruub 124.379646301s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:27 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 55 pg[10.0( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.949882507s) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 0'0 unknown pruub 124.379646301s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=48/49 n=2 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=14.987639427s) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 50'1 active pruub 133.016494751s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[11.0( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=14.987639427s) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 0'0 unknown pruub 133.016494751s@ mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.14( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.16( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.17( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.0( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.2( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.3( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.8( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.a( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.7( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.4( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.5( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1a( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.19( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.13( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.12( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.10( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 05 01:16:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 05 01:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 05 01:16:28 compute-0 ceph-mon[192914]: 6.7 deep-scrub starts
Dec 05 01:16:28 compute-0 ceph-mon[192914]: 6.7 deep-scrub ok
Dec 05 01:16:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 05 01:16:28 compute-0 ceph-mon[192914]: osdmap e55: 3 total, 3 up, 3 in
Dec 05 01:16:28 compute-0 ceph-mon[192914]: 5.1a scrub starts
Dec 05 01:16:28 compute-0 ceph-mon[192914]: 5.1a scrub ok
Dec 05 01:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 05 01:16:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 05 01:16:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.13( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.12( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.11( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.10( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.19( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.18( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.7( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.6( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.5( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.4( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.8( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.9( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.14( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.3( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.15( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.16( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.17( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.16( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.15( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.14( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.2( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.17( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=48/49 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.2( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.13( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.8( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.3( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.4( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.5( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.6( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.7( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.18( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.10( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.d( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.11( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.12( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.19( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v143: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1c( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.18( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1d( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.c( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.5( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.9( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.14( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.3( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.15( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.16( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.13( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.5( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.7( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:28 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Dec 05 01:16:28 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Dec 05 01:16:28 compute-0 sudo[224350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numvafdtdbarfqimdiusebwctenuvjfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897387.8898425-32-64058568613132/AnsiballZ_command.py'
Dec 05 01:16:28 compute-0 sudo[224350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:28 compute-0 podman[224307]: 2025-12-05 01:16:28.67251554 +0000 UTC m=+0.114975568 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec 05 01:16:28 compute-0 python3.9[224354]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 6.9 scrub starts
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 6.9 scrub ok
Dec 05 01:16:29 compute-0 ceph-mon[192914]: osdmap e56: 3 total, 3 up, 3 in
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 6.a scrub starts
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 6.a scrub ok
Dec 05 01:16:29 compute-0 ceph-mon[192914]: pgmap v143: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 5.1d deep-scrub starts
Dec 05 01:16:29 compute-0 ceph-mon[192914]: 5.1d deep-scrub ok
Dec 05 01:16:29 compute-0 podman[158197]: time="2025-12-05T01:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6787 "" "Go-http-client/1.1"
Dec 05 01:16:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 05 01:16:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 05 01:16:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 05 01:16:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 05 01:16:31 compute-0 ceph-mon[192914]: pgmap v144: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:31 compute-0 ceph-mon[192914]: 5.c scrub starts
Dec 05 01:16:31 compute-0 ceph-mon[192914]: 5.c scrub ok
Dec 05 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 05 01:16:32 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 05 01:16:32 compute-0 ceph-mon[192914]: 5.1b scrub starts
Dec 05 01:16:32 compute-0 ceph-mon[192914]: 5.1b scrub ok
Dec 05 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.878374100s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.463775635s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.877438545s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.463775635s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889671326s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476364136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889369965s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476135254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889344215s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476135254s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889582634s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476364136s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889259338s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476226807s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889238358s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476226807s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889079094s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476379395s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889013290s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476318359s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.888989449s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476318359s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889044762s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476379395s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.875101089s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.463745117s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.875068665s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.463745117s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.887742996s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476470947s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.887702942s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476470947s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.890359879s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476287842s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886901855s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476287842s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886710167s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476531982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886675835s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476531982s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886508942s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476531982s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886487961s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476531982s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885747910s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476074219s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885709763s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476074219s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886237144s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476654053s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886204720s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476654053s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886037827s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476654053s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886006355s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476654053s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885848999s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476882935s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885631561s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476699829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885595322s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476928711s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885489464s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476882935s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885498047s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477096558s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885476112s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477096558s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885182381s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476974487s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885154724s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885043144s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476974487s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885017395s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885004997s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477066040s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884985924s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477066040s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884865761s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477081299s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884846687s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477081299s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884408951s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476699829s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884453773s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476928711s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.10( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.4( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.859036446s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.061859131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851114273s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054016113s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851468086s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054367065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851088524s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054016113s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851434708s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054367065s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.837277412s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.040405273s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.837207794s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.040405273s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.871263504s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074539185s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850790024s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054122925s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.871229172s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074539185s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.858995438s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.061859131s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850772858s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054122925s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870972633s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074707031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870937347s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074707031s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850204468s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054229736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850178719s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054229736s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870398521s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074630737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870374680s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074630737s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849956512s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054382324s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849859238s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054306030s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849934578s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054382324s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849824905s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054306030s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849868774s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054428101s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849845886s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054428101s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870020866s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074829102s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869997025s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074829102s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850317001s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055297852s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850296974s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055297852s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869367599s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074676514s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849087715s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054519653s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869245529s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074844360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869209290s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074844360s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848733902s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054565430s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848699570s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054565430s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868861198s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074920654s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848338127s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054580688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848313332s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054580688s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848398209s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054809570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848378181s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054809570s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868321419s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074859619s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868298531s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074859619s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868830681s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074920654s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848031998s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054763794s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848009109s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054763794s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868103981s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074890137s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847919464s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054824829s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847896576s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054824829s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869333267s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074676514s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867715836s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074935913s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847631454s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054870605s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867692947s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074935913s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847529411s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054931641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847506523s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054931641s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847595215s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054870605s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849052429s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054519653s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847260475s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054946899s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847226143s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054946899s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867200851s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075012207s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867165565s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075012207s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846507072s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054992676s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846475601s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054992676s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866384506s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075042725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866353035s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075042725s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846394539s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055221558s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846260071s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055130005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866254807s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075134277s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846359253s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055221558s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846236229s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055130005s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866223335s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075134277s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846088409s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055297852s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846068382s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055297852s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846284866s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055557251s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846252441s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055557251s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846049309s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055450439s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846034050s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055450439s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865964890s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075439453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865938187s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075439453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865914345s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075439453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865899086s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075439453s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845909119s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055603027s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845893860s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055603027s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865543365s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075302124s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865718842s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075500488s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865479469s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075302124s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865605354s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075500488s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.844847679s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056213379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.844816208s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056213379s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863600731s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075500488s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863578796s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075500488s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843503952s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055816650s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843461037s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055816650s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863329887s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075790405s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843858719s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056442261s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863151550s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075790405s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843791962s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056442261s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843135834s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055923462s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843114853s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055923462s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862828255s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075653076s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862795830s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075653076s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862702370s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075668335s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862681389s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075668335s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842847824s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055862427s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842951775s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056015015s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842913628s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056030273s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842921257s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056015015s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842892647s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056030273s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842644691s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055862427s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842899323s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056274414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842875481s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056274414s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862174034s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075698853s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862127304s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075698853s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842557907s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056243896s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842420578s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056137085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842535973s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056243896s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842396736s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056137085s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845877647s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055511475s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.841468811s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055511475s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.861424446s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075729370s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.861391068s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075729370s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.859471321s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074890137s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.e( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1d( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1a( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:32 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Dec 05 01:16:32 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Dec 05 01:16:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 05 01:16:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 05 01:16:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 05 01:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 05 01:16:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 05 01:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 05 01:16:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 05 01:16:33 compute-0 ceph-mon[192914]: pgmap v145: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 05 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:16:33 compute-0 ceph-mon[192914]: osdmap e57: 3 total, 3 up, 3 in
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=57/58 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=57/58 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=57/58 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.14( v 56'65 lc 50'54 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=57/58 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.9( v 56'65 lc 50'56 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.15( v 56'65 lc 50'46 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.e( v 56'65 lc 50'48 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.d( v 56'65 lc 50'50 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:33 compute-0 podman[224378]: 2025-12-05 01:16:33.72623164 +0000 UTC m=+0.144591629 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:16:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec 05 01:16:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 05 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 5.1c deep-scrub starts
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 5.1c deep-scrub ok
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 6.10 scrub starts
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 6.10 scrub ok
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 5.f scrub starts
Dec 05 01:16:34 compute-0 ceph-mon[192914]: 5.f scrub ok
Dec 05 01:16:34 compute-0 ceph-mon[192914]: osdmap e58: 3 total, 3 up, 3 in
Dec 05 01:16:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 05 01:16:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 05 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 05 01:16:34 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 05 01:16:35 compute-0 ceph-mon[192914]: pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 05 01:16:35 compute-0 ceph-mon[192914]: osdmap e59: 3 total, 3 up, 3 in
Dec 05 01:16:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 05 01:16:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.791930199s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.058853149s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.791602135s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.058853149s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.803325653s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071273804s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.801901817s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071273804s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 objects/s recovering
Dec 05 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec 05 01:16:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 05 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 05 01:16:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 05 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 05 01:16:36 compute-0 ceph-mon[192914]: osdmap e60: 3 total, 3 up, 3 in
Dec 05 01:16:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 05 01:16:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.766530991s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072189331s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765852928s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071533203s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765682220s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071380615s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765716553s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071533203s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765491486s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071380615s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765296936s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071563721s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765923500s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072189331s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765857697s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072250366s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765787125s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072250366s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765112877s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071807861s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764968872s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071807861s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765035629s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072021484s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764240265s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071563721s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764391899s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071929932s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764489174s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072021484s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764314651s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071929932s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.763593674s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071670532s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.763382912s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071670532s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.762916565s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071777344s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.762870789s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071777344s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=60/61 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:36 compute-0 sudo[224350]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:36 compute-0 sshd-session[223941]: Connection closed by 192.168.122.30 port 44008
Dec 05 01:16:36 compute-0 sshd-session[223938]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:16:36 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 05 01:16:36 compute-0 systemd[1]: session-41.scope: Consumed 10.288s CPU time.
Dec 05 01:16:36 compute-0 systemd-logind[792]: Session 41 logged out. Waiting for processes to exit.
Dec 05 01:16:36 compute-0 systemd-logind[792]: Removed session 41.
Dec 05 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 05 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 05 01:16:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.017174721s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072509766s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.016999245s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072509766s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.015672684s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072311401s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.015570641s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072311401s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013773918s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071350098s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013423920s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071350098s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013886452s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072387695s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013783455s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072387695s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 05 01:16:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 05 01:16:37 compute-0 ceph-mon[192914]: pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 objects/s recovering
Dec 05 01:16:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 05 01:16:37 compute-0 ceph-mon[192914]: osdmap e61: 3 total, 3 up, 3 in
Dec 05 01:16:37 compute-0 ceph-mon[192914]: osdmap e62: 3 total, 3 up, 3 in
Dec 05 01:16:37 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Dec 05 01:16:37 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Dec 05 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 05 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 05 01:16:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 05 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 4 active+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 382 B/s, 9 objects/s recovering
Dec 05 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec 05 01:16:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 05 01:16:38 compute-0 ceph-mon[192914]: 6.12 scrub starts
Dec 05 01:16:38 compute-0 ceph-mon[192914]: 6.12 scrub ok
Dec 05 01:16:38 compute-0 ceph-mon[192914]: osdmap e63: 3 total, 3 up, 3 in
Dec 05 01:16:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 05 01:16:38 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 05 01:16:38 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 05 01:16:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Dec 05 01:16:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Dec 05 01:16:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 05 01:16:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 05 01:16:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 05 01:16:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 05 01:16:39 compute-0 ceph-mon[192914]: 5.1f deep-scrub starts
Dec 05 01:16:39 compute-0 ceph-mon[192914]: 5.1f deep-scrub ok
Dec 05 01:16:39 compute-0 ceph-mon[192914]: pgmap v155: 321 pgs: 4 active+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 382 B/s, 9 objects/s recovering
Dec 05 01:16:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 05 01:16:39 compute-0 ceph-mon[192914]: osdmap e64: 3 total, 3 up, 3 in
Dec 05 01:16:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 651 B/s, 26 objects/s recovering
Dec 05 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec 05 01:16:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 05 01:16:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 05 01:16:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 05 01:16:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 05 01:16:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 05 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 05 01:16:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 05 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 05 01:16:40 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 05 01:16:40 compute-0 ceph-mon[192914]: 2.9 scrub starts
Dec 05 01:16:40 compute-0 ceph-mon[192914]: 2.9 scrub ok
Dec 05 01:16:40 compute-0 ceph-mon[192914]: 4.18 deep-scrub starts
Dec 05 01:16:40 compute-0 ceph-mon[192914]: 4.18 deep-scrub ok
Dec 05 01:16:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 05 01:16:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 05 01:16:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 05 01:16:41 compute-0 ceph-mon[192914]: pgmap v157: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 651 B/s, 26 objects/s recovering
Dec 05 01:16:41 compute-0 ceph-mon[192914]: 6.16 scrub starts
Dec 05 01:16:41 compute-0 ceph-mon[192914]: 6.16 scrub ok
Dec 05 01:16:41 compute-0 ceph-mon[192914]: 2.6 scrub starts
Dec 05 01:16:41 compute-0 ceph-mon[192914]: 2.6 scrub ok
Dec 05 01:16:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 05 01:16:41 compute-0 ceph-mon[192914]: osdmap e65: 3 total, 3 up, 3 in
Dec 05 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 15 objects/s recovering
Dec 05 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec 05 01:16:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 05 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 05 01:16:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 05 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 05 01:16:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 05 01:16:42 compute-0 ceph-mon[192914]: 5.1 scrub starts
Dec 05 01:16:42 compute-0 ceph-mon[192914]: 5.1 scrub ok
Dec 05 01:16:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.543 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.544 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.565 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.567 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.567 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.568 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.573 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.198377609s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.054595947s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.198309898s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.054595947s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196864128s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.055664062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196774483s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.055664062s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196623802s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.056686401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196444511s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.056686401s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.194192886s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.055496216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.194120407s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.055496216s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:43 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 05 01:16:43 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 05 01:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 05 01:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 05 01:16:43 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 05 01:16:43 compute-0 ceph-mon[192914]: pgmap v159: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 15 objects/s recovering
Dec 05 01:16:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 05 01:16:43 compute-0 ceph-mon[192914]: osdmap e66: 3 total, 3 up, 3 in
Dec 05 01:16:43 compute-0 ceph-mon[192914]: 2.7 scrub starts
Dec 05 01:16:43 compute-0 ceph-mon[192914]: 2.7 scrub ok
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 05 01:16:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 05 01:16:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 05 01:16:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 05 01:16:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 328 B/s, 15 objects/s recovering
Dec 05 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec 05 01:16:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 05 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 05 01:16:44 compute-0 ceph-mon[192914]: osdmap e67: 3 total, 3 up, 3 in
Dec 05 01:16:44 compute-0 ceph-mon[192914]: 6.15 scrub starts
Dec 05 01:16:44 compute-0 ceph-mon[192914]: 6.15 scrub ok
Dec 05 01:16:44 compute-0 ceph-mon[192914]: pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 328 B/s, 15 objects/s recovering
Dec 05 01:16:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 05 01:16:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 05 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 05 01:16:44 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 05 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.709008217s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131195068s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708824158s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131195068s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708337784s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131195068s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708277702s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131195068s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68 pruub=8.703824043s) [2] r=-1 lpr=68 pi=[62,68)/1 crt=50'586 mlcod 0'0 active pruub 151.127349854s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68 pruub=8.703773499s) [2] r=-1 lpr=68 pi=[62,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 151.127349854s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.707578659s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131271362s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.707477570s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131271362s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68) [2] r=0 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 05 01:16:45 compute-0 ceph-mon[192914]: 6.18 scrub starts
Dec 05 01:16:45 compute-0 ceph-mon[192914]: 6.18 scrub ok
Dec 05 01:16:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 05 01:16:45 compute-0 ceph-mon[192914]: osdmap e68: 3 total, 3 up, 3 in
Dec 05 01:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 05 01:16:45 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282924652s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742507935s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282825470s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742507935s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282798767s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742675781s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.273217201s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.733306885s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.273057938s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.733306885s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.280101776s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742477417s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.280090332s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742675781s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.279090881s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742477417s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 05 01:16:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec 05 01:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 05 01:16:46 compute-0 ceph-mon[192914]: osdmap e69: 3 total, 3 up, 3 in
Dec 05 01:16:46 compute-0 ceph-mon[192914]: 6.14 scrub starts
Dec 05 01:16:46 compute-0 ceph-mon[192914]: 6.14 scrub ok
Dec 05 01:16:46 compute-0 ceph-mon[192914]: pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:16:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 05 01:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 05 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 05 01:16:46 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 05 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 05 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 05 01:16:47 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.443318367s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.559448242s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447704315s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.563919067s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447580338s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.563919067s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.443078041s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.559448242s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447667122s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.564453125s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447522163s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.564453125s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71 pruub=15.446838379s) [2] async=[2] r=-1 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 50'586 active pruub 159.563934326s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71 pruub=15.446743965s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.563934326s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 70 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760512352s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 150.056045532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 71 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760368347s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.056045532s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70) [2] r=0 lpr=71 pi=[53,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 70 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760275841s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 150.057830811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 71 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760203362s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.057830811s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70) [2] r=0 lpr=71 pi=[53,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
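[annotation] The osd.0/osd.1/osd.2 lines above trace the PG peering state machine: each pgp_num step changes the acting set (acting [1] -> [2] and so on), so every affected PG in pool 9 restarts its interval and transitions to Primary on the new acting OSD and to Stray on the old one. A sketch of watching the same states from the CLI side, assuming the ceph CLI with an admin keyring on this host (the JSON shape of section dumps varies slightly across releases):

    import json
    import subprocess

    # Dump per-PG state; pool 9 is the one being remapped in the log above.
    out = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs", "--format", "json"])
    data = json.loads(out)
    for pg in data.get("pg_stats", []):
        if pg["pgid"].startswith("9."):
            print(pg["pgid"], pg["state"], pg["acting"])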
Dec 05 01:16:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 05 01:16:47 compute-0 ceph-mon[192914]: osdmap e70: 3 total, 3 up, 3 in
Dec 05 01:16:47 compute-0 ceph-mon[192914]: osdmap e71: 3 total, 3 up, 3 in
Dec 05 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 05 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 05 01:16:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 6 objects/s recovering
Dec 05 01:16:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 05 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec 05 01:16:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 05 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=71/72 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 05 01:16:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Dec 05 01:16:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Dec 05 01:16:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 05 01:16:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 05 01:16:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 05 01:16:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 05 01:16:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 05 01:16:49 compute-0 ceph-mon[192914]: osdmap e72: 3 total, 3 up, 3 in
Dec 05 01:16:49 compute-0 ceph-mon[192914]: pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 6 objects/s recovering
Dec 05 01:16:49 compute-0 ceph-mon[192914]: 6.19 scrub starts
Dec 05 01:16:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 05 01:16:49 compute-0 ceph-mon[192914]: 6.19 scrub ok
Dec 05 01:16:49 compute-0 ceph-mon[192914]: 2.4 deep-scrub starts
Dec 05 01:16:49 compute-0 ceph-mon[192914]: 2.4 deep-scrub ok
Dec 05 01:16:49 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 05 01:16:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 73 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 73 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:50 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 05 01:16:50 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 05 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 05 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 05 01:16:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 60 B/s, 6 objects/s recovering
Dec 05 01:16:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 05 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec 05 01:16:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 05 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:50 compute-0 ceph-mon[192914]: 4.13 scrub starts
Dec 05 01:16:50 compute-0 ceph-mon[192914]: 4.13 scrub ok
Dec 05 01:16:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 05 01:16:50 compute-0 ceph-mon[192914]: osdmap e73: 3 total, 3 up, 3 in
Dec 05 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.009253502s) [2] async=[2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 156.109695435s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.009100914s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.109695435s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.016135216s) [2] async=[2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 156.117431641s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.016005516s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.117431641s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 05 01:16:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 05 01:16:50 compute-0 podman[224436]: 2025-12-05 01:16:50.719199746 +0000 UTC m=+0.122662841 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
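[annotation] The podman health_status records here and below are emitted by each container's periodic healthcheck (the 'test': '/openstack/healthcheck ...' entry in config_data). The same check can be driven by hand; a sketch, assuming rootful podman and the container name taken from the log line above:

    import subprocess

    # Runs the container's configured healthcheck once; exit 0 means healthy.
    name = "ceilometer_agent_compute"
    rc = subprocess.call(["podman", "healthcheck", "run", name])
    print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'} (rc={rc})")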
Dec 05 01:16:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 05 01:16:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 05 01:16:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 05 01:16:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 05 01:16:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 05 01:16:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 05 01:16:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 75 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=74/75 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 75 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=74/75 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:51 compute-0 ceph-mon[192914]: 6.1a scrub starts
Dec 05 01:16:51 compute-0 ceph-mon[192914]: 6.1a scrub ok
Dec 05 01:16:51 compute-0 ceph-mon[192914]: pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 60 B/s, 6 objects/s recovering
Dec 05 01:16:51 compute-0 ceph-mon[192914]: osdmap e74: 3 total, 3 up, 3 in
Dec 05 01:16:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 05 01:16:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 05 01:16:51 compute-0 ceph-mon[192914]: osdmap e75: 3 total, 3 up, 3 in
Dec 05 01:16:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 05 01:16:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 05 01:16:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 05 01:16:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 05 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 190 B/s, 9 objects/s recovering
Dec 05 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec 05 01:16:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 05 01:16:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 05 01:16:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 05 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 05 01:16:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 05 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 05 01:16:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 6.11 scrub starts
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 6.11 scrub ok
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 6.1b scrub starts
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 6.1b scrub ok
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 2.5 scrub starts
Dec 05 01:16:52 compute-0 ceph-mon[192914]: 2.5 scrub ok
Dec 05 01:16:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 05 01:16:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 05 01:16:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 05 01:16:52 compute-0 podman[224455]: 2025-12-05 01:16:52.707620235 +0000 UTC m=+0.106191064 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:16:52 compute-0 podman[224456]: 2025-12-05 01:16:52.779294762 +0000 UTC m=+0.173165201 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:16:53 compute-0 sshd-session[224503]: Accepted publickey for zuul from 192.168.122.30 port 59712 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:16:53 compute-0 systemd-logind[792]: New session 42 of user zuul.
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 4.11 scrub starts
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 4.11 scrub ok
Dec 05 01:16:53 compute-0 ceph-mon[192914]: pgmap v174: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 190 B/s, 9 objects/s recovering
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 2.19 scrub starts
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 2.19 scrub ok
Dec 05 01:16:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 05 01:16:53 compute-0 ceph-mon[192914]: osdmap e76: 3 total, 3 up, 3 in
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 2.3 scrub starts
Dec 05 01:16:53 compute-0 ceph-mon[192914]: 2.3 scrub ok
Dec 05 01:16:53 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 05 01:16:53 compute-0 sshd-session[224503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:16:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 153 B/s, 7 objects/s recovering
Dec 05 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec 05 01:16:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 05 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 05 01:16:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 05 01:16:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 05 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 05 01:16:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 05 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.866765022s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 158.059799194s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.866698265s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.059799194s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.863279343s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 158.060073853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.863193512s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.060073853s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:54 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:54 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Dec 05 01:16:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Dec 05 01:16:54 compute-0 podman[224630]: 2025-12-05 01:16:54.316791161 +0000 UTC m=+0.194625506 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec 05 01:16:54 compute-0 python3.9[224671]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 05 01:16:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 05 01:16:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 05 01:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 05 01:16:55 compute-0 ceph-mon[192914]: pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 153 B/s, 7 objects/s recovering
Dec 05 01:16:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 05 01:16:55 compute-0 ceph-mon[192914]: osdmap e77: 3 total, 3 up, 3 in
Dec 05 01:16:55 compute-0 ceph-mon[192914]: 2.a deep-scrub starts
Dec 05 01:16:55 compute-0 ceph-mon[192914]: 2.a deep-scrub ok
Dec 05 01:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 05 01:16:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 05 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:55 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 05 01:16:55 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 05 01:16:56 compute-0 python3.9[224851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:16:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 7 objects/s recovering
Dec 05 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec 05 01:16:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 05 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 05 01:16:56 compute-0 ceph-mon[192914]: 6.13 scrub starts
Dec 05 01:16:56 compute-0 ceph-mon[192914]: 6.13 scrub ok
Dec 05 01:16:56 compute-0 ceph-mon[192914]: osdmap e78: 3 total, 3 up, 3 in
Dec 05 01:16:56 compute-0 ceph-mon[192914]: 5.16 scrub starts
Dec 05 01:16:56 compute-0 ceph-mon[192914]: 5.16 scrub ok
Dec 05 01:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 05 01:16:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 05 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 05 01:16:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 05 01:16:56 compute-0 podman[224880]: 2025-12-05 01:16:56.722751565 +0000 UTC m=+0.125041957 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, version=9.4, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Dec 05 01:16:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 79 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 79 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 05 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 05 01:16:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 05 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.694095612s) [2] async=[2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 163.759887695s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.693988800s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.759887695s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.690936089s) [2] async=[2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 163.759902954s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.690846443s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.759902954s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:16:57 compute-0 ceph-mon[192914]: pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 7 objects/s recovering
Dec 05 01:16:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 05 01:16:57 compute-0 ceph-mon[192914]: osdmap e79: 3 total, 3 up, 3 in
Dec 05 01:16:57 compute-0 ceph-mon[192914]: osdmap e80: 3 total, 3 up, 3 in
Dec 05 01:16:57 compute-0 sudo[225025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbnkashrrxmnpxthbvqaqtkmyjoduteh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897416.7631142-45-266254190631252/AnsiballZ_command.py'
Dec 05 01:16:57 compute-0 sudo[225025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:57 compute-0 python3.9[225027]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:16:57 compute-0 sudo[225025]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec 05 01:16:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec 05 01:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 05 01:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 05 01:16:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 05 01:16:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
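[annotation] The misplaced figure in the pgmap line is a plain ratio of object copies, easy to verify:

    # 12 of 247 object copies sit on the wrong OSD while the two PGs remap.
    misplaced, copies = 12, 247
    print(f"{misplaced}/{copies} objects misplaced ({misplaced / copies:.3%})")
    # -> 12/247 objects misplaced (4.858%)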
Dec 05 01:16:58 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 81 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:58 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 81 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:16:58 compute-0 sudo[225192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzdampepwstmmpsfzmijqdvdtuwxgzvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897418.2034233-57-68968903810826/AnsiballZ_stat.py'
Dec 05 01:16:58 compute-0 sudo[225192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:16:58 compute-0 podman[225152]: 2025-12-05 01:16:58.951061763 +0000 UTC m=+0.108220751 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:16:59 compute-0 ceph-mon[192914]: 4.e deep-scrub starts
Dec 05 01:16:59 compute-0 ceph-mon[192914]: 4.e deep-scrub ok
Dec 05 01:16:59 compute-0 ceph-mon[192914]: osdmap e81: 3 total, 3 up, 3 in
Dec 05 01:16:59 compute-0 ceph-mon[192914]: pgmap v183: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
Dec 05 01:16:59 compute-0 python3.9[225198]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:16:59 compute-0 sudo[225192]: pam_unix(sudo:session): session closed for user root
Dec 05 01:16:59 compute-0 podman[158197]: time="2025-12-05T01:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6783 "" "Go-http-client/1.1"
Dec 05 01:17:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
Dec 05 01:17:00 compute-0 sudo[225351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfnyjcllovgrkyhpjsuooperaqqqqsgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897419.5444841-68-243740860351853/AnsiballZ_file.py'
Dec 05 01:17:00 compute-0 sudo[225351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:17:00 compute-0 python3.9[225353]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:17:00 compute-0 sudo[225351]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:01 compute-0 ceph-mon[192914]: pgmap v184: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
Dec 05 01:17:01 compute-0 sudo[225503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhofuuzyrezckpcwgcgmnulokhomehni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897420.8477712-77-68640708646530/AnsiballZ_file.py'
Dec 05 01:17:01 compute-0 sudo[225503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:17:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:17:01 compute-0 python3.9[225505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:17:01 compute-0 sudo[225503]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 05 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec 05 01:17:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 01:17:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 05 01:17:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 05 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 05 01:17:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 05 01:17:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 05 01:17:02 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 05 01:17:02 compute-0 python3.9[225655]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:17:02 compute-0 network[225672]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:17:02 compute-0 network[225673]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:17:02 compute-0 network[225674]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:17:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 05 01:17:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 05 01:17:03 compute-0 ceph-mon[192914]: pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 05 01:17:03 compute-0 ceph-mon[192914]: 5.1e scrub starts
Dec 05 01:17:03 compute-0 ceph-mon[192914]: 5.1e scrub ok
Dec 05 01:17:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 05 01:17:03 compute-0 ceph-mon[192914]: osdmap e82: 3 total, 3 up, 3 in
Dec 05 01:17:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 05 01:17:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 05 01:17:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec 05 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 05 01:17:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 05 01:17:04 compute-0 podman[225681]: 2025-12-05 01:17:04.18475614 +0000 UTC m=+0.158958516 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 05 01:17:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 05 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 05 01:17:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 05 01:17:04 compute-0 ceph-mon[192914]: 2.18 scrub starts
Dec 05 01:17:04 compute-0 ceph-mon[192914]: 2.18 scrub ok
Dec 05 01:17:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 05 01:17:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 05 01:17:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 05 01:17:05 compute-0 ceph-mon[192914]: 4.1a scrub starts
Dec 05 01:17:05 compute-0 ceph-mon[192914]: 4.1a scrub ok
Dec 05 01:17:05 compute-0 ceph-mon[192914]: pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec 05 01:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 05 01:17:05 compute-0 ceph-mon[192914]: osdmap e83: 3 total, 3 up, 3 in
Dec 05 01:17:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 05 01:17:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 05 01:17:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec 05 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec 05 01:17:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 05 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 05 01:17:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 05 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 05 01:17:06 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 05 01:17:06 compute-0 ceph-mon[192914]: 2.1d scrub starts
Dec 05 01:17:06 compute-0 ceph-mon[192914]: 2.1d scrub ok
Dec 05 01:17:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 05 01:17:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 05 01:17:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 05 01:17:07 compute-0 ceph-mon[192914]: 4.a scrub starts
Dec 05 01:17:07 compute-0 ceph-mon[192914]: 4.a scrub ok
Dec 05 01:17:07 compute-0 ceph-mon[192914]: pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec 05 01:17:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 05 01:17:07 compute-0 ceph-mon[192914]: osdmap e84: 3 total, 3 up, 3 in
Dec 05 01:17:07 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 05 01:17:07 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 05 01:17:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec 05 01:17:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 05 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 05 01:17:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 05 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 05 01:17:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 05 01:17:08 compute-0 ceph-mon[192914]: 5.7 scrub starts
Dec 05 01:17:08 compute-0 ceph-mon[192914]: 5.7 scrub ok
Dec 05 01:17:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 05 01:17:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Dec 05 01:17:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Dec 05 01:17:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 05 01:17:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 05 01:17:09 compute-0 ceph-mon[192914]: 5.9 scrub starts
Dec 05 01:17:09 compute-0 ceph-mon[192914]: 5.9 scrub ok
Dec 05 01:17:09 compute-0 ceph-mon[192914]: pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 05 01:17:09 compute-0 ceph-mon[192914]: osdmap e85: 3 total, 3 up, 3 in
Dec 05 01:17:09 compute-0 python3.9[225965]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:17:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec 05 01:17:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 05 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 05 01:17:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 05 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 05 01:17:10 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 05 01:17:10 compute-0 ceph-mon[192914]: 4.1b deep-scrub starts
Dec 05 01:17:10 compute-0 ceph-mon[192914]: 4.1b deep-scrub ok
Dec 05 01:17:10 compute-0 ceph-mon[192914]: 2.1c scrub starts
Dec 05 01:17:10 compute-0 ceph-mon[192914]: 2.1c scrub ok
Dec 05 01:17:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 05 01:17:10 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 05 01:17:10 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 05 01:17:10 compute-0 python3.9[226115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:17:11 compute-0 ceph-mon[192914]: pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 05 01:17:11 compute-0 ceph-mon[192914]: osdmap e86: 3 total, 3 up, 3 in
Dec 05 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec 05 01:17:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 05 01:17:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 05 01:17:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 05 01:17:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 05 01:17:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 05 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 05 01:17:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 05 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 05 01:17:12 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 05 01:17:12 compute-0 ceph-mon[192914]: 6.f scrub starts
Dec 05 01:17:12 compute-0 ceph-mon[192914]: 6.f scrub ok
Dec 05 01:17:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 05 01:17:12 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 87 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87 pruub=11.967425346s) [2] r=-1 lpr=87 pi=[60,87)/1 crt=50'586 mlcod 0'0 active pruub 181.384109497s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:12 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 87 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87 pruub=11.966161728s) [2] r=-1 lpr=87 pi=[60,87)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 181.384109497s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:12 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87) [2] r=0 lpr=87 pi=[60,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:12 compute-0 python3.9[226269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:17:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 05 01:17:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 05 01:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 05 01:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 05 01:17:13 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 05 01:17:13 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[60,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:13 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[60,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:13 compute-0 ceph-mon[192914]: pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:13 compute-0 ceph-mon[192914]: 5.4 scrub starts
Dec 05 01:17:13 compute-0 ceph-mon[192914]: 5.4 scrub ok
Dec 05 01:17:13 compute-0 ceph-mon[192914]: 5.12 scrub starts
Dec 05 01:17:13 compute-0 ceph-mon[192914]: 5.12 scrub ok
Dec 05 01:17:13 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 05 01:17:13 compute-0 ceph-mon[192914]: osdmap e87: 3 total, 3 up, 3 in
Dec 05 01:17:13 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 88 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:13 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 88 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:13 compute-0 sudo[226425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dacozkfklzksswbpukyqadcubildsvih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897433.3319373-125-186956016976042/AnsiballZ_setup.py'
Dec 05 01:17:13 compute-0 sudo[226425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:17:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec 05 01:17:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 05 01:17:14 compute-0 python3.9[226427]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 05 01:17:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 05 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 05 01:17:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 05 01:17:14 compute-0 ceph-mon[192914]: 5.13 scrub starts
Dec 05 01:17:14 compute-0 ceph-mon[192914]: 5.13 scrub ok
Dec 05 01:17:14 compute-0 ceph-mon[192914]: osdmap e88: 3 total, 3 up, 3 in
Dec 05 01:17:14 compute-0 ceph-mon[192914]: pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 05 01:17:14 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 89 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:14 compute-0 sudo[226425]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:15 compute-0 sudo[226509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebhawcgxgqfxdumxoajrkczbqpdnyeye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897433.3319373-125-186956016976042/AnsiballZ_dnf.py'
Dec 05 01:17:15 compute-0 sudo[226509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:17:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 05 01:17:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 05 01:17:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 05 01:17:15 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 05 01:17:15 compute-0 ceph-mon[192914]: osdmap e89: 3 total, 3 up, 3 in
Dec 05 01:17:15 compute-0 ceph-mon[192914]: 2.15 scrub starts
Dec 05 01:17:15 compute-0 ceph-mon[192914]: 2.15 scrub ok
Dec 05 01:17:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 05 01:17:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 05 01:17:15 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90 pruub=15.126440048s) [2] async=[2] r=-1 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 50'586 active pruub 187.626922607s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:15 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90 pruub=15.126286507s) [2] r=-1 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 187.626922607s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:15 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:15 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:15 compute-0 python3.9[226511]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:17:16
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 05 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 05 01:17:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 05 01:17:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 05 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 05 01:17:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 05 01:17:16 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 91 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91 pruub=8.612095833s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=50'586 mlcod 0'0 active pruub 182.124923706s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:16 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 91 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91 pruub=8.612011909s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 182.124923706s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:16 compute-0 ceph-mon[192914]: osdmap e90: 3 total, 3 up, 3 in
Dec 05 01:17:16 compute-0 ceph-mon[192914]: pgmap v201: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 05 01:17:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 05 01:17:16 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91) [1] r=0 lpr=91 pi=[61,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:16 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 91 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=90/91 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 05 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 05 01:17:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 05 01:17:17 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:17 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:17 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 92 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:17 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 92 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 05 01:17:17 compute-0 ceph-mon[192914]: osdmap e91: 3 total, 3 up, 3 in
Dec 05 01:17:17 compute-0 ceph-mon[192914]: osdmap e92: 3 total, 3 up, 3 in
Dec 05 01:17:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 05 01:17:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 05 01:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 05 01:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 05 01:17:18 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 05 01:17:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:18 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 93 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 05 01:17:19 compute-0 ceph-mon[192914]: 3.16 scrub starts
Dec 05 01:17:19 compute-0 ceph-mon[192914]: 3.16 scrub ok
Dec 05 01:17:19 compute-0 ceph-mon[192914]: osdmap e93: 3 total, 3 up, 3 in
Dec 05 01:17:19 compute-0 ceph-mon[192914]: pgmap v205: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 05 01:17:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 05 01:17:19 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94 pruub=15.255028725s) [1] async=[1] r=-1 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 50'586 active pruub 191.404785156s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:19 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94 pruub=15.254875183s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 191.404785156s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:19 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:19 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:19 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 05 01:17:19 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 05 01:17:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 05 01:17:20 compute-0 ceph-mon[192914]: osdmap e94: 3 total, 3 up, 3 in
Dec 05 01:17:20 compute-0 ceph-mon[192914]: 2.f scrub starts
Dec 05 01:17:20 compute-0 ceph-mon[192914]: 2.f scrub ok
Dec 05 01:17:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 05 01:17:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 05 01:17:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 95 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=94/95 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 05 01:17:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 05 01:17:21 compute-0 ceph-mon[192914]: pgmap v207: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:21 compute-0 ceph-mon[192914]: osdmap e95: 3 total, 3 up, 3 in
Dec 05 01:17:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Dec 05 01:17:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Dec 05 01:17:21 compute-0 podman[226581]: 2025-12-05 01:17:21.713674842 +0000 UTC m=+0.120185122 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec 05 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 202 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec 05 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec 05 01:17:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 05 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 05 01:17:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 05 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 05 01:17:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 05 01:17:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 96 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96 pruub=12.366931915s) [0] r=-1 lpr=96 pi=[69,96)/1 crt=50'586 mlcod 0'0 active pruub 178.917022705s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 96 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96 pruub=12.365426064s) [0] r=-1 lpr=96 pi=[69,96)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 178.917022705s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:22 compute-0 ceph-mon[192914]: 4.1c scrub starts
Dec 05 01:17:22 compute-0 ceph-mon[192914]: 4.1c scrub ok
Dec 05 01:17:22 compute-0 ceph-mon[192914]: 2.1f deep-scrub starts
Dec 05 01:17:22 compute-0 ceph-mon[192914]: 2.1f deep-scrub ok
Dec 05 01:17:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 05 01:17:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96) [0] r=0 lpr=96 pi=[69,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:22 compute-0 sudo[226600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:22 compute-0 sudo[226600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:22 compute-0 sudo[226600]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:22 compute-0 sudo[226625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:17:22 compute-0 sudo[226625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:22 compute-0 sudo[226625]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:22 compute-0 sudo[226650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:22 compute-0 sudo[226650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:22 compute-0 sudo[226650]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:22 compute-0 sudo[226675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:17:22 compute-0 sudo[226675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:22 compute-0 podman[226699]: 2025-12-05 01:17:22.8734204 +0000 UTC m=+0.111768770 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:17:23 compute-0 podman[226730]: 2025-12-05 01:17:23.099492366 +0000 UTC m=+0.174952010 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 01:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 05 01:17:23 compute-0 ceph-mon[192914]: pgmap v209: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 202 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec 05 01:17:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 05 01:17:23 compute-0 ceph-mon[192914]: osdmap e96: 3 total, 3 up, 3 in
Dec 05 01:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 05 01:17:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 05 01:17:23 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 97 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:23 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 97 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[69,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[69,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:23 compute-0 podman[226820]: 2025-12-05 01:17:23.640177334 +0000 UTC m=+0.133442850 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:17:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Dec 05 01:17:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Dec 05 01:17:23 compute-0 podman[226820]: 2025-12-05 01:17:23.75762771 +0000 UTC m=+0.250893216 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:17:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 203 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec 05 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec 05 01:17:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 05 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 05 01:17:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 05 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 05 01:17:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 05 01:17:24 compute-0 ceph-mon[192914]: osdmap e97: 3 total, 3 up, 3 in
Dec 05 01:17:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 05 01:17:24 compute-0 podman[226907]: 2025-12-05 01:17:24.521025772 +0000 UTC m=+0.110539925 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:17:24 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 98 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] async=[0] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:25 compute-0 sudo[226675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:17:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:17:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:25 compute-0 sudo[226991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:25 compute-0 sudo[226991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:25 compute-0 sudo[226991]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 05 01:17:25 compute-0 ceph-mon[192914]: 7.11 deep-scrub starts
Dec 05 01:17:25 compute-0 ceph-mon[192914]: 7.11 deep-scrub ok
Dec 05 01:17:25 compute-0 ceph-mon[192914]: pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 203 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec 05 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 05 01:17:25 compute-0 ceph-mon[192914]: osdmap e98: 3 total, 3 up, 3 in
Dec 05 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 05 01:17:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 05 01:17:25 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99 pruub=15.491202354s) [0] async=[0] r=-1 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 50'586 active pruub 185.130523682s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:25 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99 pruub=15.491126060s) [0] r=-1 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 185.130523682s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:25 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:25 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:25 compute-0 sudo[227016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:17:25 compute-0 sudo[227016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:25 compute-0 sudo[227016]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:25 compute-0 sudo[227041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:25 compute-0 sudo[227041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:25 compute-0 sudo[227041]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:25 compute-0 sudo[227066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:17:25 compute-0 sudo[227066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 05 01:17:26 compute-0 sudo[227066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:26 compute-0 ceph-mon[192914]: osdmap e99: 3 total, 3 up, 3 in
Dec 05 01:17:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 05 01:17:26 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 100 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=99/100 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 92cd3238-542e-4ea8-902a-65773c96e4b3 does not exist
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2ed5db00-e6ab-4bab-befd-b74e0422a497 does not exist
Dec 05 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9684fcf6-aeb9-4b8f-a184-4295986bdcf9 does not exist
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:17:26 compute-0 sudo[227122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:26 compute-0 sudo[227122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:26 compute-0 sudo[227122]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:26 compute-0 sudo[227147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:17:26 compute-0 sudo[227147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:26 compute-0 sudo[227147]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:26 compute-0 sudo[227172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:26 compute-0 sudo[227172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:26 compute-0 sudo[227172]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:26 compute-0 sudo[227198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:17:26 compute-0 sudo[227198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:26 compute-0 podman[227196]: 2025-12-05 01:17:26.941142857 +0000 UTC m=+0.131196968 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:17:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 05 01:17:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 05 01:17:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:27 compute-0 ceph-mon[192914]: pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 05 01:17:27 compute-0 ceph-mon[192914]: osdmap e100: 3 total, 3 up, 3 in
Dec 05 01:17:27 compute-0 ceph-mon[192914]: 2.17 scrub starts
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:17:27 compute-0 ceph-mon[192914]: 2.17 scrub ok
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:17:27 compute-0 ceph-mon[192914]: 5.2 scrub starts
Dec 05 01:17:27 compute-0 ceph-mon[192914]: 5.2 scrub ok
Dec 05 01:17:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 05 01:17:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.532119589 +0000 UTC m=+0.089522382 container create c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.495577626 +0000 UTC m=+0.052980469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:27 compute-0 systemd[1]: Started libpod-conmon-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope.
Dec 05 01:17:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.710020701 +0000 UTC m=+0.267423544 container init c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.728091622 +0000 UTC m=+0.285494405 container start c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.735498867 +0000 UTC m=+0.292901670 container attach c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:17:27 compute-0 crazy_swirles[227298]: 167 167
Dec 05 01:17:27 compute-0 systemd[1]: libpod-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope: Deactivated successfully.
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.743701534 +0000 UTC m=+0.301104327 container died c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6aba9c84185f3dfd8ca514f8be55e970f49159630f1134d7b1e981e95a2e40-merged.mount: Deactivated successfully.
Dec 05 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.838698728 +0000 UTC m=+0.396101511 container remove c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:17:27 compute-0 systemd[1]: libpod-conmon-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope: Deactivated successfully.
Dec 05 01:17:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 05 01:17:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 05 01:17:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec 05 01:17:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 05 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.157520795 +0000 UTC m=+0.105849515 container create cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.118469533 +0000 UTC m=+0.066798313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:28 compute-0 systemd[1]: Started libpod-conmon-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope.
Dec 05 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 05 01:17:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 05 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 05 01:17:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 05 01:17:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:28 compute-0 ceph-mon[192914]: 5.11 scrub starts
Dec 05 01:17:28 compute-0 ceph-mon[192914]: 5.11 scrub ok
Dec 05 01:17:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 05 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.345752013 +0000 UTC m=+0.294080753 container init cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.368309968 +0000 UTC m=+0.316638688 container start cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.375632521 +0000 UTC m=+0.323961301 container attach cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:17:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 101 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101 pruub=12.510266304s) [2] r=-1 lpr=101 pi=[61,101)/1 crt=50'586 mlcod 0'0 active pruub 198.134582520s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 101 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101 pruub=12.510182381s) [2] r=-1 lpr=101 pi=[61,101)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 198.134582520s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101) [2] r=0 lpr=101 pi=[61,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:28 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 05 01:17:28 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 05 01:17:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 05 01:17:29 compute-0 ceph-mon[192914]: 5.3 scrub starts
Dec 05 01:17:29 compute-0 ceph-mon[192914]: 5.3 scrub ok
Dec 05 01:17:29 compute-0 ceph-mon[192914]: pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 05 01:17:29 compute-0 ceph-mon[192914]: osdmap e101: 3 total, 3 up, 3 in
Dec 05 01:17:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 05 01:17:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 05 01:17:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 102 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 102 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[61,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[61,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:29 compute-0 hungry_greider[227340]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:17:29 compute-0 hungry_greider[227340]: --> relative data size: 1.0
Dec 05 01:17:29 compute-0 hungry_greider[227340]: --> All data devices are unavailable
Dec 05 01:17:29 compute-0 podman[227362]: 2025-12-05 01:17:29.751053788 +0000 UTC m=+0.149043343 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:17:29 compute-0 podman[158197]: time="2025-12-05T01:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34526 "" "Go-http-client/1.1"
Dec 05 01:17:29 compute-0 systemd[1]: libpod-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Deactivated successfully.
Dec 05 01:17:29 compute-0 systemd[1]: libpod-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Consumed 1.313s CPU time.
Dec 05 01:17:29 compute-0 podman[227324]: 2025-12-05 01:17:29.782218321 +0000 UTC m=+1.730547031 container died cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6790 "" "Go-http-client/1.1"
Dec 05 01:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825-merged.mount: Deactivated successfully.
Dec 05 01:17:29 compute-0 podman[227324]: 2025-12-05 01:17:29.8788412 +0000 UTC m=+1.827169880 container remove cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:17:29 compute-0 systemd[1]: libpod-conmon-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Deactivated successfully.
Dec 05 01:17:29 compute-0 sudo[227198]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:30 compute-0 sudo[227402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:30 compute-0 sudo[227402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:30 compute-0 sudo[227402]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Dec 05 01:17:30 compute-0 sudo[227427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:17:30 compute-0 sudo[227427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:30 compute-0 sudo[227427]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:30 compute-0 sudo[227453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:30 compute-0 sudo[227453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 05 01:17:30 compute-0 sudo[227453]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:30 compute-0 ceph-mon[192914]: 6.8 scrub starts
Dec 05 01:17:30 compute-0 ceph-mon[192914]: 6.8 scrub ok
Dec 05 01:17:30 compute-0 ceph-mon[192914]: osdmap e102: 3 total, 3 up, 3 in
Dec 05 01:17:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 05 01:17:30 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 05 01:17:30 compute-0 sudo[227479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:17:30 compute-0 sudo[227479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:30 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 103 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.060461525 +0000 UTC m=+0.081015107 container create 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:17:31 compute-0 systemd[1]: Started libpod-conmon-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope.
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.027784889 +0000 UTC m=+0.048338531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.205368421 +0000 UTC m=+0.225922003 container init 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.216192782 +0000 UTC m=+0.236746354 container start 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.220387038 +0000 UTC m=+0.240940630 container attach 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:17:31 compute-0 hardcore_jepsen[227565]: 167 167
Dec 05 01:17:31 compute-0 systemd[1]: libpod-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope: Deactivated successfully.
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.230402205 +0000 UTC m=+0.250955847 container died 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6943ac771bfa0bc789a6c4df7ca3b0c20316bcab3851214083cba0f29dc05bd2-merged.mount: Deactivated successfully.
Dec 05 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.306351681 +0000 UTC m=+0.326905263 container remove 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:17:31 compute-0 systemd[1]: libpod-conmon-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope: Deactivated successfully.
Dec 05 01:17:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 05 01:17:31 compute-0 ceph-mon[192914]: pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Dec 05 01:17:31 compute-0 ceph-mon[192914]: osdmap e103: 3 total, 3 up, 3 in
Dec 05 01:17:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 05 01:17:31 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 05 01:17:31 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104 pruub=15.241423607s) [2] async=[2] r=-1 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 50'586 active pruub 203.666870117s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:31 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104 pruub=15.241143227s) [2] r=-1 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 203.666870117s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:31 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:31 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.544745989 +0000 UTC m=+0.072996594 container create 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.513493083 +0000 UTC m=+0.041743708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:31 compute-0 systemd[1]: Started libpod-conmon-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope.
Dec 05 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.720038448 +0000 UTC m=+0.248289133 container init 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.730437926 +0000 UTC m=+0.258688551 container start 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 05 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.736160815 +0000 UTC m=+0.264411440 container attach 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec 05 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 05 01:17:32 compute-0 ceph-mon[192914]: osdmap e104: 3 total, 3 up, 3 in
Dec 05 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 05 01:17:32 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 05 01:17:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 105 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=104/105 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:32 compute-0 great_solomon[227613]: {
Dec 05 01:17:32 compute-0 great_solomon[227613]:     "0": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:         {
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "devices": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "/dev/loop3"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             ],
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_name": "ceph_lv0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_size": "21470642176",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "name": "ceph_lv0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "tags": {
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_name": "ceph",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.crush_device_class": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.encrypted": "0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_id": "0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.vdo": "0"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             },
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "vg_name": "ceph_vg0"
Dec 05 01:17:32 compute-0 great_solomon[227613]:         }
Dec 05 01:17:32 compute-0 great_solomon[227613]:     ],
Dec 05 01:17:32 compute-0 great_solomon[227613]:     "1": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:         {
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "devices": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "/dev/loop4"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             ],
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_name": "ceph_lv1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_size": "21470642176",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "name": "ceph_lv1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "tags": {
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_name": "ceph",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.crush_device_class": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.encrypted": "0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_id": "1",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.vdo": "0"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             },
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "vg_name": "ceph_vg1"
Dec 05 01:17:32 compute-0 great_solomon[227613]:         }
Dec 05 01:17:32 compute-0 great_solomon[227613]:     ],
Dec 05 01:17:32 compute-0 great_solomon[227613]:     "2": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:         {
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "devices": [
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "/dev/loop5"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             ],
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_name": "ceph_lv2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_size": "21470642176",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "name": "ceph_lv2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "tags": {
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.cluster_name": "ceph",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.crush_device_class": "",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.encrypted": "0",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osd_id": "2",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:                 "ceph.vdo": "0"
Dec 05 01:17:32 compute-0 great_solomon[227613]:             },
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "type": "block",
Dec 05 01:17:32 compute-0 great_solomon[227613]:             "vg_name": "ceph_vg2"
Dec 05 01:17:32 compute-0 great_solomon[227613]:         }
Dec 05 01:17:32 compute-0 great_solomon[227613]:     ]
Dec 05 01:17:32 compute-0 great_solomon[227613]: }
Dec 05 01:17:32 compute-0 systemd[1]: libpod-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope: Deactivated successfully.
Dec 05 01:17:32 compute-0 podman[227592]: 2025-12-05 01:17:32.614366919 +0000 UTC m=+1.142617524 container died 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841-merged.mount: Deactivated successfully.
Dec 05 01:17:32 compute-0 podman[227592]: 2025-12-05 01:17:32.720164321 +0000 UTC m=+1.248414926 container remove 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:17:32 compute-0 systemd[1]: libpod-conmon-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope: Deactivated successfully.
Dec 05 01:17:32 compute-0 sudo[227479]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:32 compute-0 sudo[227654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:32 compute-0 sudo[227654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:32 compute-0 sudo[227654]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:32 compute-0 sudo[227679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:17:32 compute-0 sudo[227679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:32 compute-0 sudo[227679]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:33 compute-0 sudo[227704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:33 compute-0 sudo[227704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:33 compute-0 sudo[227704]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:33 compute-0 sudo[227729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:17:33 compute-0 sudo[227729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:33 compute-0 ceph-mon[192914]: pgmap v223: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec 05 01:17:33 compute-0 ceph-mon[192914]: osdmap e105: 3 total, 3 up, 3 in
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.743251761 +0000 UTC m=+0.076627945 container create cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.708500718 +0000 UTC m=+0.041876982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:33 compute-0 systemd[1]: Started libpod-conmon-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope.
Dec 05 01:17:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.872679519 +0000 UTC m=+0.206055743 container init cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.890498893 +0000 UTC m=+0.223875107 container start cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.898252398 +0000 UTC m=+0.231628672 container attach cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:17:33 compute-0 intelligent_mestorf[227818]: 167 167
Dec 05 01:17:33 compute-0 systemd[1]: libpod-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope: Deactivated successfully.
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.901128217 +0000 UTC m=+0.234504441 container died cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ec27684b44392eb1b9d336c1d62f00ac749a6063922c58c3a9cf079ac35bc36-merged.mount: Deactivated successfully.
Dec 05 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.981815874 +0000 UTC m=+0.315192058 container remove cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:17:33 compute-0 systemd[1]: libpod-conmon-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope: Deactivated successfully.
Dec 05 01:17:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 3 objects/s recovering
Dec 05 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.317953622 +0000 UTC m=+0.098898813 container create 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.275545726 +0000 UTC m=+0.056490967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:17:34 compute-0 systemd[1]: Started libpod-conmon-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope.
Dec 05 01:17:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:17:34 compute-0 ceph-mon[192914]: pgmap v225: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 3 objects/s recovering
Dec 05 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.507802155 +0000 UTC m=+0.288747306 container init 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.523487249 +0000 UTC m=+0.304432390 container start 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.533111566 +0000 UTC m=+0.314056737 container attach 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:17:34 compute-0 podman[227853]: 2025-12-05 01:17:34.536691775 +0000 UTC m=+0.152189509 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:17:35 compute-0 nice_wright[227862]: {
Dec 05 01:17:35 compute-0 nice_wright[227862]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_id": 0,
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "type": "bluestore"
Dec 05 01:17:35 compute-0 nice_wright[227862]:     },
Dec 05 01:17:35 compute-0 nice_wright[227862]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_id": 1,
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "type": "bluestore"
Dec 05 01:17:35 compute-0 nice_wright[227862]:     },
Dec 05 01:17:35 compute-0 nice_wright[227862]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_id": 2,
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:17:35 compute-0 nice_wright[227862]:         "type": "bluestore"
Dec 05 01:17:35 compute-0 nice_wright[227862]:     }
Dec 05 01:17:35 compute-0 nice_wright[227862]: }
Dec 05 01:17:35 compute-0 systemd[1]: libpod-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Deactivated successfully.
Dec 05 01:17:35 compute-0 systemd[1]: libpod-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Consumed 1.064s CPU time.
Dec 05 01:17:35 compute-0 podman[227839]: 2025-12-05 01:17:35.594820867 +0000 UTC m=+1.375766108 container died 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:17:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e-merged.mount: Deactivated successfully.
Dec 05 01:17:35 compute-0 podman[227839]: 2025-12-05 01:17:35.697607297 +0000 UTC m=+1.478552458 container remove 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:17:35 compute-0 systemd[1]: libpod-conmon-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Deactivated successfully.
Dec 05 01:17:35 compute-0 sudo[227729]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:17:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:17:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d2829ab6-dc82-4126-afdd-adf05cc5cba8 does not exist
Dec 05 01:17:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 85e3b550-8871-4c32-aa25-22592fc8669a does not exist
Dec 05 01:17:35 compute-0 sudo[227923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:17:35 compute-0 sudo[227923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:35 compute-0 sudo[227923]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 05 01:17:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 05 01:17:36 compute-0 sudo[227949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:17:36 compute-0 sudo[227949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:17:36 compute-0 sudo[227949]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 05 01:17:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 05 01:17:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 05 01:17:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:17:36 compute-0 ceph-mon[192914]: 5.5 scrub starts
Dec 05 01:17:36 compute-0 ceph-mon[192914]: 5.5 scrub ok
Dec 05 01:17:36 compute-0 ceph-mon[192914]: pgmap v226: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec 05 01:17:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 05 01:17:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 05 01:17:37 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 05 01:17:37 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 05 01:17:37 compute-0 ceph-mon[192914]: 3.18 scrub starts
Dec 05 01:17:37 compute-0 ceph-mon[192914]: 3.18 scrub ok
Dec 05 01:17:37 compute-0 ceph-mon[192914]: 2.1b scrub starts
Dec 05 01:17:37 compute-0 ceph-mon[192914]: 2.1b scrub ok
Dec 05 01:17:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec 05 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec 05 01:17:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 05 01:17:38 compute-0 sshd-session[227948]: Connection reset by authenticating user root 45.140.17.124 port 25952 [preauth]
Dec 05 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 05 01:17:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 05 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 05 01:17:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 05 01:17:38 compute-0 ceph-mon[192914]: 2.b scrub starts
Dec 05 01:17:38 compute-0 ceph-mon[192914]: 2.b scrub ok
Dec 05 01:17:38 compute-0 ceph-mon[192914]: pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec 05 01:17:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 05 01:17:39 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 05 01:17:39 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 05 01:17:39 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 05 01:17:39 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 05 01:17:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 05 01:17:39 compute-0 ceph-mon[192914]: osdmap e106: 3 total, 3 up, 3 in
Dec 05 01:17:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec 05 01:17:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 05 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 05 01:17:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 05 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 05 01:17:40 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 05 01:17:40 compute-0 ceph-mon[192914]: 5.15 scrub starts
Dec 05 01:17:40 compute-0 ceph-mon[192914]: 5.15 scrub ok
Dec 05 01:17:40 compute-0 ceph-mon[192914]: 6.1f scrub starts
Dec 05 01:17:40 compute-0 ceph-mon[192914]: 6.1f scrub ok
Dec 05 01:17:40 compute-0 ceph-mon[192914]: pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 05 01:17:40 compute-0 sshd-session[227975]: Invalid user user from 45.140.17.124 port 25956
Dec 05 01:17:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 05 01:17:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 05 01:17:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 05 01:17:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 05 01:17:41 compute-0 sshd-session[227975]: Connection reset by invalid user user 45.140.17.124 port 25956 [preauth]
Dec 05 01:17:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 05 01:17:41 compute-0 ceph-mon[192914]: osdmap e107: 3 total, 3 up, 3 in
Dec 05 01:17:41 compute-0 ceph-mon[192914]: 2.d scrub starts
Dec 05 01:17:41 compute-0 ceph-mon[192914]: 2.d scrub ok
Dec 05 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec 05 01:17:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 05 01:17:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 05 01:17:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 05 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 05 01:17:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 05 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 05 01:17:42 compute-0 ceph-mon[192914]: 2.8 scrub starts
Dec 05 01:17:42 compute-0 ceph-mon[192914]: 2.8 scrub ok
Dec 05 01:17:42 compute-0 ceph-mon[192914]: pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 05 01:17:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 108 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=11.241013527s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=50'586 mlcod 0'0 active pruub 198.502120972s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 108 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=11.240839958s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 198.502120972s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 05 01:17:42 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 05 01:17:43 compute-0 ceph-mon[192914]: 5.14 scrub starts
Dec 05 01:17:43 compute-0 ceph-mon[192914]: 5.14 scrub ok
Dec 05 01:17:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 05 01:17:43 compute-0 ceph-mon[192914]: osdmap e108: 3 total, 3 up, 3 in
Dec 05 01:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 05 01:17:43 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 05 01:17:43 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:43 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 109 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 109 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:43 compute-0 sshd-session[228001]: Connection reset by authenticating user root 45.140.17.124 port 29896 [preauth]
Dec 05 01:17:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec 05 01:17:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 05 01:17:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 05 01:17:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 05 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 05 01:17:44 compute-0 ceph-mon[192914]: osdmap e109: 3 total, 3 up, 3 in
Dec 05 01:17:44 compute-0 ceph-mon[192914]: pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 05 01:17:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 05 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 05 01:17:44 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 05 01:17:45 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec 05 01:17:45 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec 05 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 110 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:45 compute-0 sshd-session[228008]: Connection reset by authenticating user root 45.140.17.124 port 29904 [preauth]
Dec 05 01:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 05 01:17:45 compute-0 ceph-mon[192914]: 2.11 scrub starts
Dec 05 01:17:45 compute-0 ceph-mon[192914]: 2.11 scrub ok
Dec 05 01:17:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 05 01:17:45 compute-0 ceph-mon[192914]: osdmap e110: 3 total, 3 up, 3 in
Dec 05 01:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 05 01:17:45 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 05 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=15.265457153s) [0] async=[0] r=-1 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 50'586 active pruub 205.598022461s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=15.263625145s) [0] r=-1 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 205.598022461s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec 05 01:17:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:17:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Dec 05 01:17:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Dec 05 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 05 01:17:46 compute-0 ceph-mon[192914]: 2.16 deep-scrub starts
Dec 05 01:17:46 compute-0 ceph-mon[192914]: 2.16 deep-scrub ok
Dec 05 01:17:46 compute-0 ceph-mon[192914]: osdmap e111: 3 total, 3 up, 3 in
Dec 05 01:17:46 compute-0 ceph-mon[192914]: pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 05 01:17:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 05 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 05 01:17:46 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 05 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 112 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=111/112 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 112 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=11.538517952s) [0] r=-1 lpr=112 pi=[69,112)/1 crt=50'586 mlcod 0'0 active pruub 202.917968750s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 112 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=11.538378716s) [0] r=-1 lpr=112 pi=[69,112)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 202.917968750s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112) [0] r=0 lpr=112 pi=[69,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 05 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 05 01:17:47 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 05 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 113 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 113 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:47 compute-0 ceph-mon[192914]: 3.e deep-scrub starts
Dec 05 01:17:47 compute-0 ceph-mon[192914]: 3.e deep-scrub ok
Dec 05 01:17:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 05 01:17:47 compute-0 ceph-mon[192914]: osdmap e112: 3 total, 3 up, 3 in
Dec 05 01:17:47 compute-0 ceph-mon[192914]: osdmap e113: 3 total, 3 up, 3 in
Dec 05 01:17:48 compute-0 sshd-session[228010]: Connection reset by authenticating user root 45.140.17.124 port 29918 [preauth]
Dec 05 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 05 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 05 01:17:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 05 01:17:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 114 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 05 01:17:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:17:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 05 01:17:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 05 01:17:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 05 01:17:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:17:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 05 01:17:49 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 05 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=15.017637253s) [0] async=[0] r=-1 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 50'586 active pruub 208.529220581s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=15.017431259s) [0] r=-1 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 208.529220581s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115 pruub=10.987185478s) [1] r=-1 lpr=115 pi=[71,115)/1 crt=50'586 mlcod 0'0 active pruub 204.499038696s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115 pruub=10.987100601s) [1] r=-1 lpr=115 pi=[71,115)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 204.499038696s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:49 compute-0 ceph-mon[192914]: osdmap e114: 3 total, 3 up, 3 in
Dec 05 01:17:49 compute-0 ceph-mon[192914]: pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:17:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 05 01:17:49 compute-0 ceph-mon[192914]: 6.17 scrub starts
Dec 05 01:17:49 compute-0 ceph-mon[192914]: 6.17 scrub ok
Dec 05 01:17:49 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:49 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115) [1] r=0 lpr=115 pi=[71,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 05 01:17:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 05 01:17:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 05 01:17:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 05 01:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 05 01:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 05 01:17:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Dec 05 01:17:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 05 01:17:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[71,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[71,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 116 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 116 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:50 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 116 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=115/116 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 05 01:17:50 compute-0 ceph-mon[192914]: osdmap e115: 3 total, 3 up, 3 in
Dec 05 01:17:50 compute-0 ceph-mon[192914]: 2.13 scrub starts
Dec 05 01:17:50 compute-0 ceph-mon[192914]: 2.13 scrub ok
Dec 05 01:17:50 compute-0 ceph-mon[192914]: 4.14 scrub starts
Dec 05 01:17:50 compute-0 ceph-mon[192914]: 4.14 scrub ok
Dec 05 01:17:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 05 01:17:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 05 01:17:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Dec 05 01:17:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 05 01:17:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Dec 05 01:17:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 05 01:17:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 05 01:17:51 compute-0 ceph-mon[192914]: pgmap v243: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Dec 05 01:17:51 compute-0 ceph-mon[192914]: osdmap e116: 3 total, 3 up, 3 in
Dec 05 01:17:51 compute-0 ceph-mon[192914]: osdmap e117: 3 total, 3 up, 3 in
Dec 05 01:17:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 117 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 05 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 05 01:17:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 05 01:17:52 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118 pruub=15.226383209s) [1] async=[1] r=-1 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 50'586 active pruub 211.724075317s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:52 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118 pruub=15.226179123s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 211.724075317s@ mbc={}] state<Start>: transitioning to Stray
Dec 05 01:17:52 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 05 01:17:52 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 05 01:17:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 05 01:17:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 05 01:17:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec 05 01:17:52 compute-0 ceph-mon[192914]: 7.15 scrub starts
Dec 05 01:17:52 compute-0 ceph-mon[192914]: 7.15 scrub ok
Dec 05 01:17:52 compute-0 ceph-mon[192914]: 3.17 deep-scrub starts
Dec 05 01:17:52 compute-0 ceph-mon[192914]: 3.17 deep-scrub ok
Dec 05 01:17:52 compute-0 ceph-mon[192914]: osdmap e118: 3 total, 3 up, 3 in
Dec 05 01:17:52 compute-0 podman[228014]: 2025-12-05 01:17:52.769377396 +0000 UTC m=+0.166915438 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 05 01:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 05 01:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 05 01:17:53 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 05 01:17:53 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 119 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=118/119 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 05 01:17:53 compute-0 ceph-mon[192914]: 7.13 scrub starts
Dec 05 01:17:53 compute-0 ceph-mon[192914]: 7.13 scrub ok
Dec 05 01:17:53 compute-0 ceph-mon[192914]: pgmap v247: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec 05 01:17:53 compute-0 ceph-mon[192914]: osdmap e119: 3 total, 3 up, 3 in
Dec 05 01:17:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 05 01:17:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 05 01:17:53 compute-0 podman[228033]: 2025-12-05 01:17:53.720617654 +0000 UTC m=+0.123558916 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:17:53 compute-0 podman[228034]: 2025-12-05 01:17:53.846490234 +0000 UTC m=+0.247723238 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 01:17:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 05 01:17:54 compute-0 ceph-mon[192914]: 4.12 scrub starts
Dec 05 01:17:54 compute-0 ceph-mon[192914]: 4.12 scrub ok
Dec 05 01:17:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 05 01:17:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 05 01:17:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 05 01:17:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 05 01:17:55 compute-0 ceph-mon[192914]: pgmap v249: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec 05 01:17:55 compute-0 podman[228081]: 2025-12-05 01:17:55.740230226 +0000 UTC m=+0.141463842 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 01:17:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 05 01:17:56 compute-0 ceph-mon[192914]: 4.10 scrub starts
Dec 05 01:17:56 compute-0 ceph-mon[192914]: 4.10 scrub ok
Dec 05 01:17:56 compute-0 ceph-mon[192914]: 3.11 scrub starts
Dec 05 01:17:56 compute-0 ceph-mon[192914]: 3.11 scrub ok
Dec 05 01:17:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:17:57 compute-0 ceph-mon[192914]: pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 05 01:17:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 05 01:17:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 05 01:17:57 compute-0 podman[228102]: 2025-12-05 01:17:57.743340332 +0000 UTC m=+0.141691268 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec 05 01:17:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 05 01:17:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 05 01:17:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec 05 01:17:58 compute-0 ceph-mon[192914]: 7.a scrub starts
Dec 05 01:17:58 compute-0 ceph-mon[192914]: 7.a scrub ok
Dec 05 01:17:58 compute-0 sudo[226509]: pam_unix(sudo:session): session closed for user root
Dec 05 01:17:59 compute-0 sudo[228271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgkyggfjpyvenalboeyxtssdmyfuuyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897478.7200384-137-31350878996308/AnsiballZ_command.py'
Dec 05 01:17:59 compute-0 sudo[228271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:17:59 compute-0 ceph-mon[192914]: 3.12 scrub starts
Dec 05 01:17:59 compute-0 ceph-mon[192914]: 3.12 scrub ok
Dec 05 01:17:59 compute-0 ceph-mon[192914]: pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec 05 01:17:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 05 01:17:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 05 01:17:59 compute-0 python3.9[228273]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:17:59 compute-0 podman[158197]: time="2025-12-05T01:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6808 "" "Go-http-client/1.1"
Dec 05 01:18:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 05 01:18:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 05 01:18:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:00 compute-0 ceph-mon[192914]: 3.15 scrub starts
Dec 05 01:18:00 compute-0 ceph-mon[192914]: 3.15 scrub ok
Dec 05 01:18:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 05 01:18:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 05 01:18:00 compute-0 podman[228409]: 2025-12-05 01:18:00.726677571 +0000 UTC m=+0.130907330 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec 05 01:18:00 compute-0 sudo[228271]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 05 01:18:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 05 01:18:01 compute-0 ceph-mon[192914]: 7.8 scrub starts
Dec 05 01:18:01 compute-0 ceph-mon[192914]: 7.8 scrub ok
Dec 05 01:18:01 compute-0 ceph-mon[192914]: pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:18:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Dec 05 01:18:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Dec 05 01:18:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:02 compute-0 sudo[228579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcxzjxxelfiyqekiiyubvoibdarcgwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897481.2508545-145-45958850451748/AnsiballZ_selinux.py'
Dec 05 01:18:02 compute-0 sudo[228579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 05 01:18:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 05 01:18:02 compute-0 ceph-mon[192914]: 7.5 scrub starts
Dec 05 01:18:02 compute-0 ceph-mon[192914]: 7.5 scrub ok
Dec 05 01:18:02 compute-0 ceph-mon[192914]: 3.f scrub starts
Dec 05 01:18:02 compute-0 ceph-mon[192914]: 3.f scrub ok
Dec 05 01:18:02 compute-0 python3.9[228581]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 05 01:18:02 compute-0 sudo[228579]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:03 compute-0 ceph-mon[192914]: 3.5 deep-scrub starts
Dec 05 01:18:03 compute-0 ceph-mon[192914]: 3.5 deep-scrub ok
Dec 05 01:18:03 compute-0 ceph-mon[192914]: pgmap v253: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:03 compute-0 ceph-mon[192914]: 4.f scrub starts
Dec 05 01:18:03 compute-0 ceph-mon[192914]: 4.f scrub ok
Dec 05 01:18:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 05 01:18:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 05 01:18:03 compute-0 sudo[228731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spngzvqslqrlsfjabbxkaztwywlnvvxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897482.91713-156-172376808660135/AnsiballZ_command.py'
Dec 05 01:18:03 compute-0 sudo[228731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:03 compute-0 python3.9[228733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 05 01:18:03 compute-0 sudo[228731]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 05 01:18:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 05 01:18:04 compute-0 sudo[228883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scnuydjfijoqdjezcshedbhgvkexypwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897484.0040522-164-239815048604646/AnsiballZ_file.py'
Dec 05 01:18:04 compute-0 sudo[228883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:04 compute-0 python3.9[228885]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:18:04 compute-0 podman[228886]: 2025-12-05 01:18:04.744080733 +0000 UTC m=+0.146430820 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:18:04 compute-0 sudo[228883]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:05 compute-0 ceph-mon[192914]: 7.1 scrub starts
Dec 05 01:18:05 compute-0 ceph-mon[192914]: 7.1 scrub ok
Dec 05 01:18:05 compute-0 ceph-mon[192914]: pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:05 compute-0 ceph-mon[192914]: 6.d scrub starts
Dec 05 01:18:05 compute-0 ceph-mon[192914]: 6.d scrub ok
Dec 05 01:18:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 05 01:18:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 05 01:18:05 compute-0 sudo[229059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adfkaxibajuvkajqbozdrzpconrvhfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897485.0567956-172-172513573583865/AnsiballZ_mount.py'
Dec 05 01:18:05 compute-0 sudo[229059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:05 compute-0 python3.9[229061]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 05 01:18:05 compute-0 sudo[229059]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 05 01:18:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 05 01:18:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 05 01:18:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 05 01:18:06 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Dec 05 01:18:06 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Dec 05 01:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 05 01:18:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 05 01:18:07 compute-0 sudo[229211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfvphsvtmqxmtfhmsgmvkbcectdwhtro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897486.841447-200-23862320097826/AnsiballZ_file.py'
Dec 05 01:18:07 compute-0 sudo[229211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 7.2 scrub starts
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 7.2 scrub ok
Dec 05 01:18:07 compute-0 ceph-mon[192914]: pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 7.9 scrub starts
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 7.9 scrub ok
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 6.c scrub starts
Dec 05 01:18:07 compute-0 ceph-mon[192914]: 6.c scrub ok
Dec 05 01:18:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 05 01:18:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 05 01:18:07 compute-0 python3.9[229213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:18:07 compute-0 sudo[229211]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 05 01:18:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 05 01:18:08 compute-0 ceph-mon[192914]: 7.1c deep-scrub starts
Dec 05 01:18:08 compute-0 ceph-mon[192914]: 7.1c deep-scrub ok
Dec 05 01:18:08 compute-0 ceph-mon[192914]: 7.f scrub starts
Dec 05 01:18:08 compute-0 ceph-mon[192914]: 7.f scrub ok
Dec 05 01:18:08 compute-0 sudo[229363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzbbtdjvgmtyzrpbjichkotctfzsfeql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897487.9648826-208-174430464396944/AnsiballZ_stat.py'
Dec 05 01:18:08 compute-0 sudo[229363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:08 compute-0 python3.9[229365]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:18:08 compute-0 sudo[229363]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 05 01:18:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 05 01:18:09 compute-0 sudo[229441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqcblczalqratonafpbhaghtxnqkqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897487.9648826-208-174430464396944/AnsiballZ_file.py'
Dec 05 01:18:09 compute-0 sudo[229441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:09 compute-0 ceph-mon[192914]: 3.8 scrub starts
Dec 05 01:18:09 compute-0 ceph-mon[192914]: 3.8 scrub ok
Dec 05 01:18:09 compute-0 ceph-mon[192914]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:09 compute-0 ceph-mon[192914]: 4.d scrub starts
Dec 05 01:18:09 compute-0 ceph-mon[192914]: 4.d scrub ok
Dec 05 01:18:09 compute-0 python3.9[229443]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:18:09 compute-0 sudo[229441]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:10 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 05 01:18:10 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 05 01:18:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 05 01:18:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 05 01:18:10 compute-0 ceph-mon[192914]: 3.c scrub starts
Dec 05 01:18:10 compute-0 ceph-mon[192914]: 3.c scrub ok
Dec 05 01:18:10 compute-0 ceph-mon[192914]: pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:10 compute-0 ceph-mon[192914]: 6.e scrub starts
Dec 05 01:18:10 compute-0 ceph-mon[192914]: 6.e scrub ok
Dec 05 01:18:10 compute-0 sudo[229593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phjsoulxqpcdnvwvjyahzkqgmpqtbdwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897490.3726342-229-221952513589748/AnsiballZ_stat.py'
Dec 05 01:18:10 compute-0 sudo[229593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:11 compute-0 python3.9[229595]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:18:11 compute-0 sudo[229593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:11 compute-0 ceph-mon[192914]: 7.4 scrub starts
Dec 05 01:18:11 compute-0 ceph-mon[192914]: 7.4 scrub ok
Dec 05 01:18:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 05 01:18:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 05 01:18:12 compute-0 sudo[229747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhjsywtrxmkumgozijseambyouspyxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897491.7295046-242-164157764892004/AnsiballZ_getent.py'
Dec 05 01:18:12 compute-0 sudo[229747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:12 compute-0 ceph-mon[192914]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:12 compute-0 python3.9[229749]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 05 01:18:12 compute-0 sudo[229747]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 05 01:18:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 05 01:18:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 05 01:18:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 05 01:18:13 compute-0 ceph-mon[192914]: 3.6 scrub starts
Dec 05 01:18:13 compute-0 ceph-mon[192914]: 3.6 scrub ok
Dec 05 01:18:13 compute-0 sudo[229900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezahzrkdmnljazwtmevrhaoqbfoboarh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897493.2123215-252-199264736236483/AnsiballZ_getent.py'
Dec 05 01:18:13 compute-0 sudo[229900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:13 compute-0 python3.9[229902]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 05 01:18:14 compute-0 sudo[229900]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:14 compute-0 ceph-mon[192914]: 7.c scrub starts
Dec 05 01:18:14 compute-0 ceph-mon[192914]: 7.c scrub ok
Dec 05 01:18:14 compute-0 ceph-mon[192914]: 6.2 scrub starts
Dec 05 01:18:14 compute-0 ceph-mon[192914]: 6.2 scrub ok
Dec 05 01:18:14 compute-0 ceph-mon[192914]: pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:15 compute-0 sudo[230053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjgdliebmpjmuhcdxjngmbrzttqqbydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897494.346921-260-204233067185566/AnsiballZ_group.py'
Dec 05 01:18:15 compute-0 sudo[230053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:15 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 05 01:18:15 compute-0 python3.9[230055]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 01:18:15 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 05 01:18:15 compute-0 sudo[230053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 05 01:18:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:18:16
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root']
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:18:16 compute-0 ceph-mon[192914]: 7.6 scrub starts
Dec 05 01:18:16 compute-0 ceph-mon[192914]: 7.6 scrub ok
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:18:16 compute-0 sudo[230205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktvuhbkldqdbmotvzmwxylfzyotqzvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897495.7419512-269-29641959938182/AnsiballZ_file.py'
Dec 05 01:18:16 compute-0 sudo[230205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:16 compute-0 python3.9[230207]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 05 01:18:16 compute-0 sudo[230205]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 05 01:18:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 05 01:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:17 compute-0 ceph-mon[192914]: 6.1 scrub starts
Dec 05 01:18:17 compute-0 ceph-mon[192914]: 6.1 scrub ok
Dec 05 01:18:17 compute-0 ceph-mon[192914]: pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:17 compute-0 sudo[230357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stianmlkmlrkuybsiammwpbtufyprrda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897497.1141648-280-207933875980186/AnsiballZ_dnf.py'
Dec 05 01:18:17 compute-0 sudo[230357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:17 compute-0 python3.9[230359]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:18:18 compute-0 ceph-mon[192914]: 6.6 scrub starts
Dec 05 01:18:18 compute-0 ceph-mon[192914]: 6.6 scrub ok
Dec 05 01:18:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 05 01:18:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 05 01:18:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Dec 05 01:18:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Dec 05 01:18:19 compute-0 ceph-mon[192914]: pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:19 compute-0 ceph-mon[192914]: 7.3 scrub starts
Dec 05 01:18:19 compute-0 ceph-mon[192914]: 7.3 scrub ok
Dec 05 01:18:19 compute-0 ceph-mon[192914]: 7.e deep-scrub starts
Dec 05 01:18:19 compute-0 ceph-mon[192914]: 7.e deep-scrub ok
Dec 05 01:18:19 compute-0 sudo[230357]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:20 compute-0 sudo[230511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aruixlfgjrdnjegajohjgntmfumvrlno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897499.6975834-288-62400524451781/AnsiballZ_file.py'
Dec 05 01:18:20 compute-0 sudo[230511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 05 01:18:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 05 01:18:20 compute-0 python3.9[230513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:18:20 compute-0 sudo[230511]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 05 01:18:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 05 01:18:21 compute-0 ceph-mon[192914]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 05 01:18:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 05 01:18:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 05 01:18:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 05 01:18:21 compute-0 sudo[230663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gedlrdpoasxpzgnwtrbvrxhudxlcgzza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897500.9587123-296-262499469697121/AnsiballZ_stat.py'
Dec 05 01:18:21 compute-0 sudo[230663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:21 compute-0 python3.9[230665]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:18:21 compute-0 sudo[230663]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 4.9 scrub starts
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 4.9 scrub ok
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 3.a scrub starts
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 3.a scrub ok
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 7.1a scrub starts
Dec 05 01:18:22 compute-0 ceph-mon[192914]: 7.1a scrub ok
Dec 05 01:18:22 compute-0 sudo[230741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jispjfwrzbnrtzwgsvixtorwtfwlnxai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897500.9587123-296-262499469697121/AnsiballZ_file.py'
Dec 05 01:18:22 compute-0 sudo[230741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:22 compute-0 python3.9[230743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:18:22 compute-0 sudo[230741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:23 compute-0 ceph-mon[192914]: 4.4 scrub starts
Dec 05 01:18:23 compute-0 ceph-mon[192914]: 4.4 scrub ok
Dec 05 01:18:23 compute-0 ceph-mon[192914]: pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 05 01:18:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 05 01:18:23 compute-0 sudo[230911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdvxkfdhmeyjncuvnooixssgdsfrwhfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897502.9890156-309-102730973839407/AnsiballZ_stat.py'
Dec 05 01:18:23 compute-0 podman[230869]: 2025-12-05 01:18:23.583780342 +0000 UTC m=+0.126150900 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:18:23 compute-0 sudo[230911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:23 compute-0 python3.9[230916]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:18:23 compute-0 sudo[230911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:24 compute-0 podman[230966]: 2025-12-05 01:18:24.302585936 +0000 UTC m=+0.105207960 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:18:24 compute-0 sudo[231023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzvbjzlhvpiwfhqkhvnjvzvzgdyhywa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897502.9890156-309-102730973839407/AnsiballZ_file.py'
Dec 05 01:18:24 compute-0 sudo[231023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:24 compute-0 ceph-mon[192914]: 3.1e scrub starts
Dec 05 01:18:24 compute-0 ceph-mon[192914]: 3.1e scrub ok
Dec 05 01:18:24 compute-0 podman[230967]: 2025-12-05 01:18:24.368127155 +0000 UTC m=+0.168454934 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:18:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 05 01:18:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 05 01:18:24 compute-0 python3.9[231036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:18:24 compute-0 sudo[231023]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:25 compute-0 ceph-mon[192914]: pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 05 01:18:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 05 01:18:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 05 01:18:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 05 01:18:25 compute-0 sudo[231191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htktpfvizifcwvmegdihcamseqmrifvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897505.286649-324-130630395972590/AnsiballZ_dnf.py'
Dec 05 01:18:25 compute-0 sudo[231191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:25 compute-0 sshd-session[230768]: Connection reset by authenticating user root 91.202.233.33 port 50398 [preauth]
Dec 05 01:18:26 compute-0 podman[231193]: 2025-12-05 01:18:26.0016459 +0000 UTC m=+0.135904866 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:18:26 compute-0 python3.9[231194]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:18:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 05 01:18:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 05 01:18:26 compute-0 ceph-mon[192914]: 6.b scrub starts
Dec 05 01:18:26 compute-0 ceph-mon[192914]: 6.b scrub ok
Dec 05 01:18:26 compute-0 ceph-mon[192914]: 3.7 scrub starts
Dec 05 01:18:26 compute-0 ceph-mon[192914]: 3.7 scrub ok
Dec 05 01:18:26 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 05 01:18:26 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 05 01:18:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Dec 05 01:18:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Dec 05 01:18:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:27 compute-0 ceph-mon[192914]: 4.5 scrub starts
Dec 05 01:18:27 compute-0 ceph-mon[192914]: 4.5 scrub ok
Dec 05 01:18:27 compute-0 ceph-mon[192914]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:27 compute-0 ceph-mon[192914]: 3.1b scrub starts
Dec 05 01:18:27 compute-0 ceph-mon[192914]: 3.1b scrub ok
Dec 05 01:18:27 compute-0 sudo[231191]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:28 compute-0 sshd-session[231215]: Connection reset by authenticating user root 91.202.233.33 port 50408 [preauth]
Dec 05 01:18:28 compute-0 ceph-mon[192914]: 3.1d scrub starts
Dec 05 01:18:28 compute-0 ceph-mon[192914]: 3.1d scrub ok
Dec 05 01:18:28 compute-0 ceph-mon[192914]: 6.4 deep-scrub starts
Dec 05 01:18:28 compute-0 ceph-mon[192914]: 6.4 deep-scrub ok
Dec 05 01:18:28 compute-0 podman[231342]: 2025-12-05 01:18:28.720727385 +0000 UTC m=+0.135486144 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, container_name=kepler, release=1214.1726694543, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:18:28 compute-0 python3.9[231383]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:18:29 compute-0 ceph-mon[192914]: pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:29 compute-0 podman[158197]: time="2025-12-05T01:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6807 "" "Go-http-client/1.1"
Dec 05 01:18:29 compute-0 python3.9[231540]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 05 01:18:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:30 compute-0 ceph-mon[192914]: pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:30 compute-0 sshd-session[231317]: Connection reset by authenticating user root 91.202.233.33 port 50416 [preauth]
Dec 05 01:18:30 compute-0 python3.9[231690]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:18:31 compute-0 podman[231760]: 2025-12-05 01:18:31.718500616 +0000 UTC m=+0.118962408 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Dec 05 01:18:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:32 compute-0 sudo[231863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsscqdunxsbnzavjbwpoxomyziaxolus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897511.482795-365-151550849795178/AnsiballZ_systemd.py'
Dec 05 01:18:32 compute-0 sudo[231863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:32 compute-0 python3.9[231865]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:18:32 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 05 01:18:32 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 05 01:18:32 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 05 01:18:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 05 01:18:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 05 01:18:33 compute-0 ceph-mon[192914]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:33 compute-0 sudo[231863]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Dec 05 01:18:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Dec 05 01:18:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 05 01:18:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 05 01:18:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:34 compute-0 ceph-mon[192914]: 7.18 deep-scrub starts
Dec 05 01:18:34 compute-0 ceph-mon[192914]: 7.18 deep-scrub ok
Dec 05 01:18:34 compute-0 python3.9[232026]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 05 01:18:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts
Dec 05 01:18:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok
Dec 05 01:18:35 compute-0 ceph-mon[192914]: 4.8 scrub starts
Dec 05 01:18:35 compute-0 ceph-mon[192914]: 4.8 scrub ok
Dec 05 01:18:35 compute-0 ceph-mon[192914]: pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:35 compute-0 sshd-session[231691]: Invalid user ubuntu from 91.202.233.33 port 32022
Dec 05 01:18:35 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 05 01:18:35 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 05 01:18:35 compute-0 podman[232051]: 2025-12-05 01:18:35.550972767 +0000 UTC m=+0.111541008 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:18:35 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 05 01:18:35 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 05 01:18:35 compute-0 sshd-session[231691]: Connection reset by invalid user ubuntu 91.202.233.33 port 32022 [preauth]
Dec 05 01:18:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:36 compute-0 sudo[232075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:36 compute-0 sudo[232075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:36 compute-0 sudo[232075]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:36 compute-0 ceph-mon[192914]: 6.1e deep-scrub starts
Dec 05 01:18:36 compute-0 ceph-mon[192914]: 6.1e deep-scrub ok
Dec 05 01:18:36 compute-0 sudo[232101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:18:36 compute-0 sudo[232101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Dec 05 01:18:36 compute-0 sudo[232101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Dec 05 01:18:36 compute-0 sudo[232150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Dec 05 01:18:36 compute-0 sudo[232150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Dec 05 01:18:36 compute-0 sudo[232150]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 05 01:18:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 05 01:18:36 compute-0 sudo[232206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:18:36 compute-0 sudo[232206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:36 compute-0 sudo[232309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyzjorlhnhgnsfosbbmwzedphgpiudnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897516.4011476-422-189623920861260/AnsiballZ_systemd.py'
Dec 05 01:18:36 compute-0 sudo[232309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:37 compute-0 python3.9[232314]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 10.3 scrub starts
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 10.3 scrub ok
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 6.1c scrub starts
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 6.1c scrub ok
Dec 05 01:18:37 compute-0 ceph-mon[192914]: pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 7.1b deep-scrub starts
Dec 05 01:18:37 compute-0 ceph-mon[192914]: 7.1b deep-scrub ok
Dec 05 01:18:37 compute-0 sudo[232206]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:18:37 compute-0 sudo[232309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 65a90eeb-7a5d-40e3-ab75-f78d704b4ae0 does not exist
Dec 05 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5547a444-3a2b-40c5-b06a-3deea5e80189 does not exist
Dec 05 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bc4f2eff-bdb4-4ff8-abe2-ea599c924709 does not exist
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:18:37 compute-0 sshd-session[232074]: Invalid user oracle from 91.202.233.33 port 32030
Dec 05 01:18:37 compute-0 sudo[232341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:37 compute-0 sudo[232341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:37 compute-0 sudo[232341]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:37 compute-0 sudo[232391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:18:37 compute-0 sudo[232391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:37 compute-0 sudo[232391]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:37 compute-0 sudo[232454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:37 compute-0 sudo[232454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:37 compute-0 sudo[232454]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:37 compute-0 sudo[232495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:18:37 compute-0 sudo[232495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:38 compute-0 sshd-session[232074]: Connection reset by invalid user oracle 91.202.233.33 port 32030 [preauth]
Dec 05 01:18:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:38 compute-0 sudo[232597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itpwhguctnwhokjwgdvqxvazbiemwzmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897517.6690753-422-31421307211358/AnsiballZ_systemd.py'
Dec 05 01:18:38 compute-0 sudo[232597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:38 compute-0 ceph-mon[192914]: 10.5 deep-scrub starts
Dec 05 01:18:38 compute-0 ceph-mon[192914]: 10.5 deep-scrub ok
Dec 05 01:18:38 compute-0 ceph-mon[192914]: 4.7 scrub starts
Dec 05 01:18:38 compute-0 ceph-mon[192914]: 4.7 scrub ok
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:18:38 compute-0 python3.9[232606]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.580216975 +0000 UTC m=+0.090658609 container create 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:18:38 compute-0 systemd[194721]: Created slice User Background Tasks Slice.
Dec 05 01:18:38 compute-0 systemd[194721]: Starting Cleanup of User's Temporary Files and Directories...
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.531444639 +0000 UTC m=+0.041886333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:38 compute-0 systemd[194721]: Finished Cleanup of User's Temporary Files and Directories.
Dec 05 01:18:38 compute-0 systemd[1]: Started libpod-conmon-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope.
Dec 05 01:18:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:38 compute-0 sudo[232597]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.730702731 +0000 UTC m=+0.241144375 container init 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.745632943 +0000 UTC m=+0.256074557 container start 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.751675973 +0000 UTC m=+0.262117627 container attach 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:18:38 compute-0 vibrant_lederberg[232646]: 167 167
Dec 05 01:18:38 compute-0 systemd[1]: libpod-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope: Deactivated successfully.
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.765078361 +0000 UTC m=+0.275519995 container died 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8541f404ee7442693cbe701a67b8dfca0d4fcfcefb1cc8d7cfedec27699a53-merged.mount: Deactivated successfully.
Dec 05 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.858166308 +0000 UTC m=+0.368607942 container remove 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:18:38 compute-0 systemd[1]: libpod-conmon-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope: Deactivated successfully.
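The create/init/start/attach/died/remove sequence above, wrapped in transient libpod-conmon and libcrun scopes, is the footprint of a single short-lived `podman run --rm`: cephadm spawns throwaway containers like this for quick probes, and the lone line of output, "167 167", is consistent with a uid/gid check of the ceph user baked into the image. A sketch that reproduces the same event pattern, assuming podman can pull the digest from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: podman emits the same create/init/start/attach/
    # died/remove journal events, then removes the container on exit.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "sh", "-c",
         # Print the ceph user's uid and gid; the probe above printed "167 167".
         'echo "$(id -u ceph) $(id -g ceph)"'],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())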
Dec 05 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.151828995 +0000 UTC m=+0.098820490 container create 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.105560639 +0000 UTC m=+0.052552114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:39 compute-0 systemd[1]: Started libpod-conmon-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope.
Dec 05 01:18:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:39 compute-0 sshd-session[224506]: Connection closed by 192.168.122.30 port 59712
Dec 05 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
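The xfs notices above are informational: each bind mount from an XFS filesystem created without the bigtime feature reminds that its inode timestamps top out at 0x7fffffff, the largest 32-bit signed Unix time. The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed epoch limit the kernel reports.
    limit = 0x7FFFFFFF
    print(limit, datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2147483647 2038-01-19 03:14:07+00:00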
Dec 05 01:18:39 compute-0 sshd-session[224503]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:18:39 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 05 01:18:39 compute-0 systemd[1]: session-42.scope: Consumed 1min 26.842s CPU time.
Dec 05 01:18:39 compute-0 systemd-logind[792]: Session 42 logged out. Waiting for processes to exit.
Dec 05 01:18:39 compute-0 systemd-logind[792]: Removed session 42.
Dec 05 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.330802015 +0000 UTC m=+0.277793500 container init 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:18:39 compute-0 ceph-mon[192914]: pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.367697466 +0000 UTC m=+0.314688961 container start 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.375733863 +0000 UTC m=+0.322725408 container attach 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:18:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts
Dec 05 01:18:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok
Dec 05 01:18:40 compute-0 intelligent_rubin[232710]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:18:40 compute-0 intelligent_rubin[232710]: --> relative data size: 1.0
Dec 05 01:18:40 compute-0 intelligent_rubin[232710]: --> All data devices are unavailable
Dec 05 01:18:40 compute-0 systemd[1]: libpod-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Deactivated successfully.
Dec 05 01:18:40 compute-0 systemd[1]: libpod-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Consumed 1.362s CPU time.
Dec 05 01:18:40 compute-0 podman[232693]: 2025-12-05 01:18:40.795532515 +0000 UTC m=+1.742524020 container died 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d-merged.mount: Deactivated successfully.
Dec 05 01:18:40 compute-0 podman[232693]: 2025-12-05 01:18:40.927330984 +0000 UTC m=+1.874322449 container remove 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:18:40 compute-0 systemd[1]: libpod-conmon-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Deactivated successfully.
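The intelligent_rubin run above is a ceph-volume device probe: three LVM-backed data devices were passed in, the relative data size is 1.0 (each OSD gets the whole device), and all report unavailable, which is what ceph-volume says when the devices already carry OSDs or are otherwise filtered. The `ceph-volume -- lvm list --format json` command sudo'd a few lines below enumerates exactly those existing LVM OSDs; a minimal sketch of parsing that report on a cephadm host (the running cluster, and clean JSON on stdout, are assumptions):

    import json
    import subprocess

    # `cephadm ceph-volume` wraps ceph-volume in a container, as in the log.
    raw = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(raw)  # maps OSD id -> list of logical-volume records
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv.get("type"), lv.get("lv_path"))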
Dec 05 01:18:40 compute-0 sudo[232495]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:41 compute-0 sudo[232754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:41 compute-0 sudo[232754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:41 compute-0 sudo[232754]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:41 compute-0 sudo[232779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:18:41 compute-0 sudo[232779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:41 compute-0 sudo[232779]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:41 compute-0 ceph-mon[192914]: pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:41 compute-0 sudo[232804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:41 compute-0 sudo[232804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:41 compute-0 sudo[232804]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:41 compute-0 sudo[232829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:18:41 compute-0 sudo[232829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.186136965 +0000 UTC m=+0.099280212 container create bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:18:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.149180002 +0000 UTC m=+0.062323299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:42 compute-0 systemd[1]: Started libpod-conmon-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope.
Dec 05 01:18:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.32028266 +0000 UTC m=+0.233425897 container init bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.331705353 +0000 UTC m=+0.244848570 container start bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.336124138 +0000 UTC m=+0.249267375 container attach bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:18:42 compute-0 relaxed_shockley[232909]: 167 167
Dec 05 01:18:42 compute-0 systemd[1]: libpod-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope: Deactivated successfully.
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.342197379 +0000 UTC m=+0.255340616 container died bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:18:42 compute-0 ceph-mon[192914]: 6.1d deep-scrub starts
Dec 05 01:18:42 compute-0 ceph-mon[192914]: 6.1d deep-scrub ok
Dec 05 01:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde7122bd31390438d706f94baadad9c6222eca31e60c700c4201fa5c8ac8e3c-merged.mount: Deactivated successfully.
Dec 05 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.413700696 +0000 UTC m=+0.326843923 container remove bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:18:42 compute-0 systemd[1]: libpod-conmon-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope: Deactivated successfully.
Dec 05 01:18:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 05 01:18:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.543 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.545 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.652371651 +0000 UTC m=+0.059981893 container create 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.633382405 +0000 UTC m=+0.040992677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:42 compute-0 systemd[1]: Started libpod-conmon-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope.
Dec 05 01:18:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.808517297 +0000 UTC m=+0.216127579 container init 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.824040235 +0000 UTC m=+0.231650477 container start 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.830205839 +0000 UTC m=+0.237816081 container attach 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:18:43 compute-0 ceph-mon[192914]: pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:43 compute-0 ceph-mon[192914]: 3.1f scrub starts
Dec 05 01:18:43 compute-0 ceph-mon[192914]: 3.1f scrub ok
Dec 05 01:18:43 compute-0 magical_mclean[232947]: {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     "0": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "devices": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "/dev/loop3"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             ],
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_name": "ceph_lv0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_size": "21470642176",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "name": "ceph_lv0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "tags": {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_name": "ceph",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.crush_device_class": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.encrypted": "0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_id": "0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.vdo": "0"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             },
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "vg_name": "ceph_vg0"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         }
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     ],
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     "1": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "devices": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "/dev/loop4"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             ],
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_name": "ceph_lv1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_size": "21470642176",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "name": "ceph_lv1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "tags": {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_name": "ceph",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.crush_device_class": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.encrypted": "0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_id": "1",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.vdo": "0"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             },
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "vg_name": "ceph_vg1"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         }
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     ],
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     "2": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "devices": [
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "/dev/loop5"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             ],
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_name": "ceph_lv2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_size": "21470642176",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "name": "ceph_lv2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "tags": {
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.cluster_name": "ceph",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.crush_device_class": "",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.encrypted": "0",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osd_id": "2",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:                 "ceph.vdo": "0"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             },
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "type": "block",
Dec 05 01:18:43 compute-0 magical_mclean[232947]:             "vg_name": "ceph_vg2"
Dec 05 01:18:43 compute-0 magical_mclean[232947]:         }
Dec 05 01:18:43 compute-0 magical_mclean[232947]:     ]
Dec 05 01:18:43 compute-0 magical_mclean[232947]: }
Dec 05 01:18:43 compute-0 systemd[1]: libpod-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope: Deactivated successfully.
Dec 05 01:18:43 compute-0 podman[232933]: 2025-12-05 01:18:43.76370003 +0000 UTC m=+1.171310302 container died 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08-merged.mount: Deactivated successfully.
Dec 05 01:18:43 compute-0 podman[232933]: 2025-12-05 01:18:43.877269575 +0000 UTC m=+1.284879807 container remove 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:18:43 compute-0 systemd[1]: libpod-conmon-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope: Deactivated successfully.
Dec 05 01:18:43 compute-0 sudo[232829]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:44 compute-0 sudo[232968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:44 compute-0 sudo[232968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:44 compute-0 sudo[232968]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:44 compute-0 sudo[232993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:18:44 compute-0 sudo[232993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:44 compute-0 sudo[232993]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:44 compute-0 sudo[233018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:44 compute-0 sudo[233018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:44 compute-0 sudo[233018]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:44 compute-0 sudo[233043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:18:44 compute-0 sudo[233043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 05 01:18:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.074369244 +0000 UTC m=+0.072387164 container create bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:18:45 compute-0 systemd[1]: Started libpod-conmon-bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c.scope.
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.051662373 +0000 UTC m=+0.049680333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.220776415 +0000 UTC m=+0.218794395 container init bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.236432227 +0000 UTC m=+0.234450167 container start bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.243054564 +0000 UTC m=+0.241072554 container attach bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:18:45 compute-0 nifty_maxwell[233124]: 167 167
Dec 05 01:18:45 compute-0 systemd[1]: libpod-bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c.scope: Deactivated successfully.
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.249631009 +0000 UTC m=+0.247648959 container died bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:18:45 compute-0 sshd-session[233126]: Accepted publickey for zuul from 192.168.122.30 port 39410 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9753250624c0ca09cd4803b1415b488cf49b012326f0e58782cb53c21b90be3c-merged.mount: Deactivated successfully.
Dec 05 01:18:45 compute-0 systemd-logind[792]: New session 43 of user zuul.
Dec 05 01:18:45 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 05 01:18:45 compute-0 rsyslogd[188644]: imjournal from <compute-0:podman>: begin to drop messages due to rate-limiting
Dec 05 01:18:45 compute-0 podman[233108]: 2025-12-05 01:18:45.344756094 +0000 UTC m=+0.342774044 container remove bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:18:45 compute-0 sshd-session[233126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:18:45 compute-0 systemd[1]: libpod-conmon-bb014ca333761e7a8cd0ebd20e0364a5d59dd035f0ba360c449baeaf7b61bf6c.scope: Deactivated successfully.
Dec 05 01:18:45 compute-0 ceph-mon[192914]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:45 compute-0 ceph-mon[192914]: 7.1f scrub starts
Dec 05 01:18:45 compute-0 ceph-mon[192914]: 7.1f scrub ok
Dec 05 01:18:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 05 01:18:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 05 01:18:45 compute-0 podman[233173]: 2025-12-05 01:18:45.581550385 +0000 UTC m=+0.075323786 container create a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:18:45 compute-0 podman[233173]: 2025-12-05 01:18:45.554159672 +0000 UTC m=+0.047933143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:18:45 compute-0 systemd[1]: Started libpod-conmon-a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e.scope.
Dec 05 01:18:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e8f883f3d4e1d420a41947be8652a0cde2833fef24c2aab6040325f1663f21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e8f883f3d4e1d420a41947be8652a0cde2833fef24c2aab6040325f1663f21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e8f883f3d4e1d420a41947be8652a0cde2833fef24c2aab6040325f1663f21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08e8f883f3d4e1d420a41947be8652a0cde2833fef24c2aab6040325f1663f21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:18:45 compute-0 podman[233173]: 2025-12-05 01:18:45.736994872 +0000 UTC m=+0.230768363 container init a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:18:45 compute-0 podman[233173]: 2025-12-05 01:18:45.768717557 +0000 UTC m=+0.262490988 container start a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:18:45 compute-0 podman[233173]: 2025-12-05 01:18:45.775396185 +0000 UTC m=+0.269169676 container attach a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:18:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 05 01:18:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 05 01:18:46 compute-0 python3.9[233322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:18:46 compute-0 boring_robinson[233218]: {
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_id": 0,
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "type": "bluestore"
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     },
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_id": 1,
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "type": "bluestore"
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     },
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_id": 2,
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:18:46 compute-0 boring_robinson[233218]:         "type": "bluestore"
Dec 05 01:18:46 compute-0 boring_robinson[233218]:     }
Dec 05 01:18:46 compute-0 boring_robinson[233218]: }
Dec 05 01:18:46 compute-0 systemd[1]: libpod-a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e.scope: Deactivated successfully.
Dec 05 01:18:46 compute-0 systemd[1]: libpod-a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e.scope: Consumed 1.163s CPU time.
Dec 05 01:18:46 compute-0 podman[233173]: 2025-12-05 01:18:46.927481075 +0000 UTC m=+1.421254516 container died a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-08e8f883f3d4e1d420a41947be8652a0cde2833fef24c2aab6040325f1663f21-merged.mount: Deactivated successfully.
Dec 05 01:18:47 compute-0 podman[233173]: 2025-12-05 01:18:47.040572346 +0000 UTC m=+1.534345747 container remove a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:18:47 compute-0 systemd[1]: libpod-conmon-a99c1fca6a28520cb31ea52d09a5d3b7aa43a12582852321dd45365176d26c5e.scope: Deactivated successfully.
Dec 05 01:18:47 compute-0 sudo[233043]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:18:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:18:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 56d0832e-2c3c-4833-89e4-a36b15703cc3 does not exist
Dec 05 01:18:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0115bd2-f212-4781-a855-6b38ffb0a699 does not exist
Dec 05 01:18:47 compute-0 sudo[233364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:18:47 compute-0 sudo[233364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:47 compute-0 sudo[233364]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:47 compute-0 sudo[233413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:18:47 compute-0 sudo[233413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:18:47 compute-0 sudo[233413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:47 compute-0 ceph-mon[192914]: 10.a scrub starts
Dec 05 01:18:47 compute-0 ceph-mon[192914]: 10.a scrub ok
Dec 05 01:18:47 compute-0 ceph-mon[192914]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:18:47 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 05 01:18:47 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 05 01:18:47 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Dec 05 01:18:47 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Dec 05 01:18:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:48 compute-0 sudo[233563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeqkolxjqtnaynaipizpsnglboqnxavz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897527.5495567-36-174095625982464/AnsiballZ_getent.py'
Dec 05 01:18:48 compute-0 sudo[233563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:48 compute-0 ceph-mon[192914]: 10.c scrub starts
Dec 05 01:18:48 compute-0 ceph-mon[192914]: 10.c scrub ok
Dec 05 01:18:48 compute-0 python3.9[233565]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 05 01:18:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 05 01:18:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 05 01:18:48 compute-0 sudo[233563]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 05 01:18:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 05 01:18:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 05 01:18:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 10.18 scrub starts
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 10.18 scrub ok
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 8.1 deep-scrub starts
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 8.1 deep-scrub ok
Dec 05 01:18:49 compute-0 ceph-mon[192914]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 3.9 scrub starts
Dec 05 01:18:49 compute-0 ceph-mon[192914]: 3.9 scrub ok
Dec 05 01:18:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 05 01:18:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 05 01:18:49 compute-0 sudo[233717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydwichacfrhsoquynsmfdiyvzrbtbxkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897529.003667-48-110908470691984/AnsiballZ_setup.py'
Dec 05 01:18:49 compute-0 sudo[233717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:49 compute-0 python3.9[233719]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:18:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:50 compute-0 sudo[233717]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 10.1b scrub starts
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 10.1b scrub ok
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 8.3 scrub starts
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 8.3 scrub ok
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 10.9 scrub starts
Dec 05 01:18:50 compute-0 ceph-mon[192914]: 10.9 scrub ok
Dec 05 01:18:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 05 01:18:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 05 01:18:51 compute-0 sudo[233801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guhozrhlkgrpzcuhphsgwlzgxrcdjlwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897529.003667-48-110908470691984/AnsiballZ_dnf.py'
Dec 05 01:18:51 compute-0 sudo[233801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:51 compute-0 python3.9[233803]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 01:18:51 compute-0 ceph-mon[192914]: pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:51 compute-0 ceph-mon[192914]: 10.1c scrub starts
Dec 05 01:18:51 compute-0 ceph-mon[192914]: 10.1c scrub ok
Dec 05 01:18:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 05 01:18:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 05 01:18:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 05 01:18:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 05 01:18:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:52 compute-0 ceph-mon[192914]: 10.8 scrub starts
Dec 05 01:18:52 compute-0 ceph-mon[192914]: 10.8 scrub ok
Dec 05 01:18:52 compute-0 ceph-mon[192914]: 10.1d scrub starts
Dec 05 01:18:52 compute-0 ceph-mon[192914]: 10.1d scrub ok
Dec 05 01:18:52 compute-0 ceph-mon[192914]: pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 05 01:18:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 05 01:18:52 compute-0 sudo[233801]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:53 compute-0 ceph-mon[192914]: 8.5 scrub starts
Dec 05 01:18:53 compute-0 ceph-mon[192914]: 8.5 scrub ok
Dec 05 01:18:53 compute-0 sudo[233954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbclziomfbdxflsbtwbzdmzcssfxnki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897533.1351514-62-32403700906437/AnsiballZ_dnf.py'
Dec 05 01:18:53 compute-0 sudo[233954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:53 compute-0 podman[233956]: 2025-12-05 01:18:53.916152859 +0000 UTC m=+0.151101205 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:18:54 compute-0 python3.9[233957]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:18:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:54 compute-0 ceph-mon[192914]: pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:54 compute-0 podman[233979]: 2025-12-05 01:18:54.713124997 +0000 UTC m=+0.118251607 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:18:54 compute-0 podman[233980]: 2025-12-05 01:18:54.811774051 +0000 UTC m=+0.209778650 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 01:18:55 compute-0 sudo[233954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 05 01:18:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 05 01:18:56 compute-0 ceph-mon[192914]: 10.4 scrub starts
Dec 05 01:18:56 compute-0 ceph-mon[192914]: 10.4 scrub ok
Dec 05 01:18:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:56 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 05 01:18:56 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 05 01:18:56 compute-0 sudo[234192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njndqlgggpalevrxtbkvshxhcyvprlyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897535.776829-70-143920715795650/AnsiballZ_systemd.py'
Dec 05 01:18:56 compute-0 podman[234151]: 2025-12-05 01:18:56.656655169 +0000 UTC m=+0.125930905 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:18:56 compute-0 sudo[234192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:56 compute-0 python3.9[234197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:18:57 compute-0 sudo[234192]: pam_unix(sudo:session): session closed for user root
Dec 05 01:18:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:18:57 compute-0 ceph-mon[192914]: pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:57 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 05 01:18:57 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 05 01:18:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:58 compute-0 ceph-mon[192914]: 10.1f scrub starts
Dec 05 01:18:58 compute-0 ceph-mon[192914]: 10.1f scrub ok
Dec 05 01:18:58 compute-0 ceph-mon[192914]: 10.15 scrub starts
Dec 05 01:18:58 compute-0 ceph-mon[192914]: 10.15 scrub ok
Dec 05 01:18:58 compute-0 python3.9[234350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:18:58 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 05 01:18:58 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 05 01:18:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 05 01:18:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 05 01:18:59 compute-0 ceph-mon[192914]: pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:18:59 compute-0 ceph-mon[192914]: 10.d scrub starts
Dec 05 01:18:59 compute-0 ceph-mon[192914]: 10.d scrub ok
Dec 05 01:18:59 compute-0 sudo[234516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmsctufunqcyujrlemfjgdsaewfsoab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897538.720294-88-257790242905190/AnsiballZ_sefcontext.py'
Dec 05 01:18:59 compute-0 sudo[234516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:18:59 compute-0 podman[234474]: 2025-12-05 01:18:59.563844613 +0000 UTC m=+0.163512616 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:18:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 05 01:18:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 05 01:18:59 compute-0 podman[158197]: time="2025-12-05T01:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
Dec 05 01:18:59 compute-0 python3.9[234522]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 05 01:19:00 compute-0 sudo[234516]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:00 compute-0 ceph-mon[192914]: 8.2 scrub starts
Dec 05 01:19:00 compute-0 ceph-mon[192914]: 8.2 scrub ok
Dec 05 01:19:00 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 05 01:19:00 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 05 01:19:01 compute-0 python3.9[234672]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:19:01 compute-0 ceph-mon[192914]: 11.15 scrub starts
Dec 05 01:19:01 compute-0 ceph-mon[192914]: 11.15 scrub ok
Dec 05 01:19:01 compute-0 ceph-mon[192914]: pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:01 compute-0 openstack_network_exporter[160350]: ERROR   01:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:19:01 compute-0 openstack_network_exporter[160350]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:19:01 compute-0 openstack_network_exporter[160350]: ERROR   01:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:19:01 compute-0 openstack_network_exporter[160350]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:19:01 compute-0 openstack_network_exporter[160350]: ERROR   01:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:19:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:02 compute-0 ceph-mon[192914]: 8.7 scrub starts
Dec 05 01:19:02 compute-0 ceph-mon[192914]: 8.7 scrub ok
Dec 05 01:19:02 compute-0 podman[234802]: 2025-12-05 01:19:02.492502582 +0000 UTC m=+0.106874487 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec 05 01:19:02 compute-0 sudo[234847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspeitdewpfjzbcclnpgxrsamkeltxyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897541.9464219-106-23920640437139/AnsiballZ_dnf.py'
Dec 05 01:19:02 compute-0 sudo[234847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:02 compute-0 python3.9[234849]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:19:03 compute-0 ceph-mon[192914]: pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:04 compute-0 sudo[234847]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 05 01:19:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 05 01:19:04 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 05 01:19:04 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 05 01:19:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Dec 05 01:19:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Dec 05 01:19:05 compute-0 sudo[235000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqotnjfgnivpjlgwyxukkpmepizcgdxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897544.4973836-114-7485493555410/AnsiballZ_command.py'
Dec 05 01:19:05 compute-0 sudo[235000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:05 compute-0 ceph-mon[192914]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:05 compute-0 ceph-mon[192914]: 8.8 scrub starts
Dec 05 01:19:05 compute-0 ceph-mon[192914]: 8.8 scrub ok
Dec 05 01:19:05 compute-0 ceph-mon[192914]: 10.1e deep-scrub starts
Dec 05 01:19:05 compute-0 ceph-mon[192914]: 10.1e deep-scrub ok
Dec 05 01:19:05 compute-0 python3.9[235002]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:19:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 05 01:19:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 05 01:19:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:06 compute-0 ceph-mon[192914]: 11.d scrub starts
Dec 05 01:19:06 compute-0 ceph-mon[192914]: 11.d scrub ok
Dec 05 01:19:06 compute-0 sudo[235000]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 05 01:19:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 05 01:19:06 compute-0 podman[235161]: 2025-12-05 01:19:06.686692482 +0000 UTC m=+0.093972043 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:19:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:07 compute-0 ceph-mon[192914]: 8.a scrub starts
Dec 05 01:19:07 compute-0 ceph-mon[192914]: 8.a scrub ok
Dec 05 01:19:07 compute-0 ceph-mon[192914]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:07 compute-0 ceph-mon[192914]: 10.16 scrub starts
Dec 05 01:19:07 compute-0 ceph-mon[192914]: 10.16 scrub ok
Dec 05 01:19:07 compute-0 sudo[235310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxcvqthlddtmwctulnsekdeyjkqrutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897546.7678049-122-145647690839123/AnsiballZ_file.py'
Dec 05 01:19:07 compute-0 sudo[235310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 05 01:19:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 05 01:19:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Dec 05 01:19:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Dec 05 01:19:07 compute-0 python3.9[235312]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 01:19:07 compute-0 sudo[235310]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:08 compute-0 ceph-mon[192914]: 10.1 deep-scrub starts
Dec 05 01:19:08 compute-0 ceph-mon[192914]: 10.1 deep-scrub ok
Dec 05 01:19:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 05 01:19:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 05 01:19:08 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 05 01:19:08 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 05 01:19:09 compute-0 python3.9[235462]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:19:09 compute-0 ceph-mon[192914]: 11.b scrub starts
Dec 05 01:19:09 compute-0 ceph-mon[192914]: 11.b scrub ok
Dec 05 01:19:09 compute-0 ceph-mon[192914]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:09 compute-0 ceph-mon[192914]: 10.17 scrub starts
Dec 05 01:19:09 compute-0 ceph-mon[192914]: 10.17 scrub ok
Dec 05 01:19:09 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 05 01:19:09 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 05 01:19:09 compute-0 sudo[235614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudadvwppmrbczdxoylbzowmtzylbigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897549.3508532-138-99790783682836/AnsiballZ_dnf.py'
Dec 05 01:19:09 compute-0 sudo[235614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:10 compute-0 python3.9[235616]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:19:10 compute-0 ceph-mon[192914]: 11.8 scrub starts
Dec 05 01:19:10 compute-0 ceph-mon[192914]: 11.8 scrub ok
Dec 05 01:19:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 05 01:19:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 05 01:19:11 compute-0 ceph-mon[192914]: 11.3 scrub starts
Dec 05 01:19:11 compute-0 ceph-mon[192914]: 11.3 scrub ok
Dec 05 01:19:11 compute-0 ceph-mon[192914]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:11 compute-0 sudo[235614]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:11 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 05 01:19:11 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 05 01:19:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:12 compute-0 sudo[235767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutcbdgrkbewmwgdpyciezhzcocetphj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897551.7267764-147-111857337595172/AnsiballZ_dnf.py'
Dec 05 01:19:12 compute-0 sudo[235767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:12 compute-0 ceph-mon[192914]: 9.2 scrub starts
Dec 05 01:19:12 compute-0 ceph-mon[192914]: 9.2 scrub ok
Dec 05 01:19:12 compute-0 ceph-mon[192914]: 10.7 scrub starts
Dec 05 01:19:12 compute-0 ceph-mon[192914]: 10.7 scrub ok
Dec 05 01:19:12 compute-0 python3.9[235769]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:19:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 05 01:19:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 05 01:19:12 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 05 01:19:12 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 05 01:19:13 compute-0 ceph-mon[192914]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 05 01:19:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 05 01:19:13 compute-0 sudo[235767]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 05 01:19:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 05 01:19:14 compute-0 ceph-mon[192914]: 8.13 scrub starts
Dec 05 01:19:14 compute-0 ceph-mon[192914]: 8.13 scrub ok
Dec 05 01:19:14 compute-0 ceph-mon[192914]: 8.4 scrub starts
Dec 05 01:19:14 compute-0 ceph-mon[192914]: 8.4 scrub ok
Dec 05 01:19:14 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 05 01:19:14 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 05 01:19:14 compute-0 sudo[235920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alsvsehzbsoowomycmhvtwcbguetcpqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897554.130442-159-183378154225932/AnsiballZ_stat.py'
Dec 05 01:19:14 compute-0 sudo[235920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:14 compute-0 python3.9[235922]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:19:14 compute-0 sudo[235920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.16 scrub starts
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.16 scrub ok
Dec 05 01:19:15 compute-0 ceph-mon[192914]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.17 scrub starts
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.17 scrub ok
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.14 scrub starts
Dec 05 01:19:15 compute-0 ceph-mon[192914]: 8.14 scrub ok
Dec 05 01:19:15 compute-0 sudo[236074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoqmuyefxezkkbmewmkgctilhcpcbhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897555.1441834-167-279035709719038/AnsiballZ_slurp.py'
Dec 05 01:19:15 compute-0 sudo[236074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:16 compute-0 python3.9[236076]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 05 01:19:16 compute-0 sudo[236074]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:19:16
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:19:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:19:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 05 01:19:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 05 01:19:16 compute-0 ceph-mon[192914]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:16 compute-0 ceph-mon[192914]: 8.19 scrub starts
Dec 05 01:19:16 compute-0 sshd-session[233142]: Connection closed by 192.168.122.30 port 39410
Dec 05 01:19:16 compute-0 sshd-session[233126]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:19:17 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 05 01:19:17 compute-0 systemd[1]: session-43.scope: Consumed 26.623s CPU time.
Dec 05 01:19:17 compute-0 systemd-logind[792]: Session 43 logged out. Waiting for processes to exit.
Dec 05 01:19:17 compute-0 systemd-logind[792]: Removed session 43.
Dec 05 01:19:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 05 01:19:17 compute-0 ceph-mon[192914]: 8.19 scrub ok
Dec 05 01:19:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 05 01:19:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:18 compute-0 ceph-mon[192914]: 8.1b scrub starts
Dec 05 01:19:18 compute-0 ceph-mon[192914]: 8.1b scrub ok
Dec 05 01:19:18 compute-0 ceph-mon[192914]: pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 05 01:19:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 05 01:19:19 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 05 01:19:19 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 05 01:19:19 compute-0 ceph-mon[192914]: 11.17 scrub starts
Dec 05 01:19:19 compute-0 ceph-mon[192914]: 11.17 scrub ok
Dec 05 01:19:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 05 01:19:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 05 01:19:20 compute-0 ceph-mon[192914]: 8.1e scrub starts
Dec 05 01:19:20 compute-0 ceph-mon[192914]: 8.1e scrub ok
Dec 05 01:19:20 compute-0 ceph-mon[192914]: pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 05 01:19:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 05 01:19:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 05 01:19:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 05 01:19:21 compute-0 ceph-mon[192914]: 9.4 scrub starts
Dec 05 01:19:21 compute-0 ceph-mon[192914]: 9.4 scrub ok
Dec 05 01:19:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:22 compute-0 sshd-session[236102]: Accepted publickey for zuul from 192.168.122.30 port 36334 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:19:22 compute-0 systemd-logind[792]: New session 44 of user zuul.
Dec 05 01:19:22 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 05 01:19:22 compute-0 sshd-session[236102]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:19:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 05 01:19:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 05 01:19:22 compute-0 ceph-mon[192914]: 11.1a scrub starts
Dec 05 01:19:22 compute-0 ceph-mon[192914]: 8.10 scrub starts
Dec 05 01:19:22 compute-0 ceph-mon[192914]: 11.1a scrub ok
Dec 05 01:19:22 compute-0 ceph-mon[192914]: 8.10 scrub ok
Dec 05 01:19:22 compute-0 ceph-mon[192914]: pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:23 compute-0 python3.9[236255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:19:23 compute-0 ceph-mon[192914]: 9.a scrub starts
Dec 05 01:19:23 compute-0 ceph-mon[192914]: 9.a scrub ok
Dec 05 01:19:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:24 compute-0 podman[236325]: 2025-12-05 01:19:24.750365745 +0000 UTC m=+0.157941108 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:19:24 compute-0 ceph-mon[192914]: pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:24 compute-0 podman[236379]: 2025-12-05 01:19:24.910665098 +0000 UTC m=+0.133595330 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:19:25 compute-0 podman[236427]: 2025-12-05 01:19:25.093283662 +0000 UTC m=+0.145728024 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 01:19:25 compute-0 python3.9[236470]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:19:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 05 01:19:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 05 01:19:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 05 01:19:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 05 01:19:25 compute-0 ceph-mon[192914]: 11.1b scrub starts
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:19:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 05 01:19:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 05 01:19:26 compute-0 ceph-mon[192914]: 11.1b scrub ok
Dec 05 01:19:26 compute-0 ceph-mon[192914]: 9.10 scrub starts
Dec 05 01:19:26 compute-0 ceph-mon[192914]: 9.10 scrub ok
Dec 05 01:19:26 compute-0 ceph-mon[192914]: pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:26 compute-0 podman[236652]: 2025-12-05 01:19:26.951409794 +0000 UTC m=+0.112160356 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi)
Dec 05 01:19:27 compute-0 python3.9[236696]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:19:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:27 compute-0 sshd-session[236105]: Connection closed by 192.168.122.30 port 36334
Dec 05 01:19:27 compute-0 sshd-session[236102]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:19:27 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 05 01:19:27 compute-0 systemd[1]: session-44.scope: Consumed 4.166s CPU time.
Dec 05 01:19:27 compute-0 systemd-logind[792]: Session 44 logged out. Waiting for processes to exit.
Dec 05 01:19:27 compute-0 systemd-logind[792]: Removed session 44.
Dec 05 01:19:27 compute-0 ceph-mon[192914]: 9.12 scrub starts
Dec 05 01:19:27 compute-0 ceph-mon[192914]: 9.12 scrub ok
Dec 05 01:19:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:28 compute-0 ceph-mon[192914]: pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 05 01:19:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 05 01:19:29 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 05 01:19:29 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 05 01:19:29 compute-0 podman[158197]: time="2025-12-05T01:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6809 "" "Go-http-client/1.1"
Dec 05 01:19:30 compute-0 ceph-mon[192914]: 11.14 scrub starts
Dec 05 01:19:30 compute-0 ceph-mon[192914]: 11.14 scrub ok
Dec 05 01:19:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:30 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 05 01:19:30 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 05 01:19:30 compute-0 podman[236723]: 2025-12-05 01:19:30.693411094 +0000 UTC m=+0.108249036 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:19:31 compute-0 ceph-mon[192914]: 9.14 scrub starts
Dec 05 01:19:31 compute-0 ceph-mon[192914]: 9.14 scrub ok
Dec 05 01:19:31 compute-0 ceph-mon[192914]: pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:31 compute-0 openstack_network_exporter[160350]: ERROR   01:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:19:31 compute-0 openstack_network_exporter[160350]: ERROR   01:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:19:31 compute-0 openstack_network_exporter[160350]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:19:31 compute-0 openstack_network_exporter[160350]: ERROR   01:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:19:31 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 05 01:19:31 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 05 01:19:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:32 compute-0 ceph-mon[192914]: 8.d scrub starts
Dec 05 01:19:32 compute-0 ceph-mon[192914]: 8.d scrub ok
Dec 05 01:19:32 compute-0 ceph-mon[192914]: 9.1a scrub starts
Dec 05 01:19:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:32 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 05 01:19:32 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 05 01:19:32 compute-0 podman[236742]: 2025-12-05 01:19:32.733171861 +0000 UTC m=+0.134641610 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, distribution-scope=public, release=1755695350, io.openshift.expose-services=)
Dec 05 01:19:33 compute-0 ceph-mon[192914]: 9.1a scrub ok
Dec 05 01:19:33 compute-0 ceph-mon[192914]: pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:33 compute-0 ceph-mon[192914]: 11.f scrub starts
Dec 05 01:19:33 compute-0 ceph-mon[192914]: 11.f scrub ok
Dec 05 01:19:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 05 01:19:33 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 05 01:19:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 05 01:19:33 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 05 01:19:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 05 01:19:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 05 01:19:33 compute-0 sshd-session[236763]: Accepted publickey for zuul from 192.168.122.30 port 39996 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:19:33 compute-0 systemd-logind[792]: New session 45 of user zuul.
Dec 05 01:19:33 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 05 01:19:33 compute-0 sshd-session[236763]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:19:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 11.5 scrub starts
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 11.5 scrub ok
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 8.15 scrub starts
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 8.15 scrub ok
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 11.e scrub starts
Dec 05 01:19:34 compute-0 ceph-mon[192914]: 11.e scrub ok
Dec 05 01:19:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 05 01:19:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 05 01:19:34 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 05 01:19:34 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 05 01:19:35 compute-0 python3.9[236916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:19:35 compute-0 ceph-mon[192914]: pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:35 compute-0 ceph-mon[192914]: 11.7 scrub starts
Dec 05 01:19:35 compute-0 ceph-mon[192914]: 11.7 scrub ok
Dec 05 01:19:35 compute-0 ceph-mon[192914]: 11.2 scrub starts
Dec 05 01:19:35 compute-0 ceph-mon[192914]: 11.2 scrub ok
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.294144) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575294263, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7159, "num_deletes": 251, "total_data_size": 8790720, "memory_usage": 9078040, "flush_reason": "Manual Compaction"}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575334330, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7125117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 140, "largest_seqno": 7296, "table_properties": {"data_size": 7098972, "index_size": 17006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74276, "raw_average_key_size": 23, "raw_value_size": 7037355, "raw_average_value_size": 2196, "num_data_blocks": 747, "num_entries": 3204, "num_filter_entries": 3204, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897146, "oldest_key_time": 1764897146, "file_creation_time": 1764897575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 40255 microseconds, and 19585 cpu microseconds.
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.334399) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7125117 bytes OK
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.334427) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.336610) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.336626) EVENT_LOG_v1 {"time_micros": 1764897575336622, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.336656) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8759676, prev total WAL file size 8759676, number of live WAL files 2.
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.339402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6958KB) 13(52KB) 8(1944B)]
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575339558, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7181214, "oldest_snapshot_seqno": -1}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3019 keys, 7137086 bytes, temperature: kUnknown
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575403529, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7137086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7111430, "index_size": 17034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7557, "raw_key_size": 72321, "raw_average_key_size": 23, "raw_value_size": 7051379, "raw_average_value_size": 2335, "num_data_blocks": 750, "num_entries": 3019, "num_filter_entries": 3019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764897575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.403799) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7137086 bytes
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.405740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.1 rd, 111.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.8, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3309, records dropped: 290 output_compression: NoCompression
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.405762) EVENT_LOG_v1 {"time_micros": 1764897575405750, "job": 4, "event": "compaction_finished", "compaction_time_micros": 64061, "compaction_time_cpu_micros": 32407, "output_level": 6, "num_output_files": 1, "total_output_size": 7137086, "num_input_records": 3309, "num_output_records": 3019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575407148, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575407221, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897575407251, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 05 01:19:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:19:35.339110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:19:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 05 01:19:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 05 01:19:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 05 01:19:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 05 01:19:36 compute-0 python3.9[237071]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:19:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:37 compute-0 ceph-mon[192914]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:37 compute-0 ceph-mon[192914]: 11.a scrub starts
Dec 05 01:19:37 compute-0 ceph-mon[192914]: 11.a scrub ok
Dec 05 01:19:37 compute-0 podman[237179]: 2025-12-05 01:19:37.712780224 +0000 UTC m=+0.118184126 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:19:37 compute-0 sudo[237250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmwmsauffpvqyqhtqbeefockdpeccjau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897577.2267826-40-4094259482096/AnsiballZ_setup.py'
Dec 05 01:19:37 compute-0 sudo[237250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:38 compute-0 python3.9[237252]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:19:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:38 compute-0 ceph-mon[192914]: 11.1c scrub starts
Dec 05 01:19:38 compute-0 ceph-mon[192914]: 11.1c scrub ok
Dec 05 01:19:38 compute-0 sudo[237250]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Dec 05 01:19:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Dec 05 01:19:38 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Dec 05 01:19:38 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Dec 05 01:19:39 compute-0 sudo[237334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klltsabukkovkptnhpxiuitswntoetvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897577.2267826-40-4094259482096/AnsiballZ_dnf.py'
Dec 05 01:19:39 compute-0 sudo[237334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:39 compute-0 python3.9[237336]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:19:39 compute-0 ceph-mon[192914]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:39 compute-0 ceph-mon[192914]: 11.1e deep-scrub starts
Dec 05 01:19:39 compute-0 ceph-mon[192914]: 11.1e deep-scrub ok
Dec 05 01:19:39 compute-0 ceph-mon[192914]: 8.e deep-scrub starts
Dec 05 01:19:39 compute-0 ceph-mon[192914]: 8.e deep-scrub ok
Dec 05 01:19:39 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 05 01:19:39 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 05 01:19:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:40 compute-0 ceph-mon[192914]: 11.c scrub starts
Dec 05 01:19:40 compute-0 ceph-mon[192914]: 11.c scrub ok
Dec 05 01:19:40 compute-0 sudo[237334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:41 compute-0 ceph-mon[192914]: pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:41 compute-0 sudo[237487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmoeydxrbxwtiqtmgygkihwdgfvbefd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897580.872817-52-69168402020001/AnsiballZ_setup.py'
Dec 05 01:19:41 compute-0 sudo[237487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:41 compute-0 python3.9[237489]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:19:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:42 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 05 01:19:42 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 05 01:19:42 compute-0 sudo[237487]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:43 compute-0 ceph-mon[192914]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:43 compute-0 ceph-mon[192914]: 11.1f scrub starts
Dec 05 01:19:43 compute-0 ceph-mon[192914]: 11.1f scrub ok
Dec 05 01:19:43 compute-0 sudo[237690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzovlqqemmtreushmqebbypsnhfgbpdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897582.688356-63-188524011247356/AnsiballZ_file.py'
Dec 05 01:19:43 compute-0 sudo[237690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:43 compute-0 python3.9[237692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:19:43 compute-0 sudo[237690]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:44 compute-0 sudo[237842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whalwzftrhokkfnwnjwhkdtbhbpohcaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897583.9047723-71-112492733499336/AnsiballZ_command.py'
Dec 05 01:19:44 compute-0 sudo[237842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:44 compute-0 python3.9[237844]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:19:44 compute-0 sudo[237842]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 05 01:19:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 05 01:19:45 compute-0 ceph-mon[192914]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:45 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 05 01:19:45 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 05 01:19:46 compute-0 sudo[238006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtnksnyoagxywkmujhzdnekurhnsmspi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897585.255811-79-140324922609308/AnsiballZ_stat.py'
Dec 05 01:19:46 compute-0 sudo[238006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:46 compute-0 python3.9[238008]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:19:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:46 compute-0 sudo[238006]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:46 compute-0 ceph-mon[192914]: 8.1c scrub starts
Dec 05 01:19:46 compute-0 ceph-mon[192914]: 8.1c scrub ok
Dec 05 01:19:46 compute-0 ceph-mon[192914]: 11.13 scrub starts
Dec 05 01:19:46 compute-0 ceph-mon[192914]: 11.13 scrub ok
Dec 05 01:19:46 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 05 01:19:46 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 05 01:19:46 compute-0 sudo[238084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcjqpuohnojrrbdxxfabajehrxjejep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897585.255811-79-140324922609308/AnsiballZ_file.py'
Dec 05 01:19:46 compute-0 sudo[238084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:46 compute-0 python3.9[238086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:19:46 compute-0 sudo[238084]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:47 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Dec 05 01:19:47 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Dec 05 01:19:47 compute-0 ceph-mon[192914]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:47 compute-0 sudo[238163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:47 compute-0 sudo[238163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:47 compute-0 sudo[238163]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:47 compute-0 sudo[238206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:19:47 compute-0 sudo[238206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:47 compute-0 sudo[238206]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:47 compute-0 sudo[238248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:47 compute-0 sudo[238248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:47 compute-0 sudo[238248]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:47 compute-0 sudo[238318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyiygacnrlznjldpzcahlhssxsrfibbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897587.268635-91-43368250933548/AnsiballZ_stat.py'
Dec 05 01:19:47 compute-0 sudo[238318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:47 compute-0 sudo[238309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:19:47 compute-0 sudo[238309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:48 compute-0 python3.9[238333]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:19:48 compute-0 sudo[238318]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:48 compute-0 ceph-mon[192914]: 11.16 scrub starts
Dec 05 01:19:48 compute-0 ceph-mon[192914]: 11.16 scrub ok
Dec 05 01:19:48 compute-0 ceph-mon[192914]: 11.11 deep-scrub starts
Dec 05 01:19:48 compute-0 ceph-mon[192914]: 11.11 deep-scrub ok
Dec 05 01:19:48 compute-0 sudo[238309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:48 compute-0 sudo[238444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmvqsfnjwyutoowjbmbuxskmetcnjgoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897587.268635-91-43368250933548/AnsiballZ_file.py'
Dec 05 01:19:48 compute-0 sudo[238444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:19:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:19:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0d210cde-2e19-4e49-b6fc-ca628352bea1 does not exist
Dec 05 01:19:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 94fa9d63-69a2-4ac6-8182-713160622854 does not exist
Dec 05 01:19:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d7d38a31-13e9-4a2e-888f-de0ab7fcaf78 does not exist
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:19:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:19:48 compute-0 sudo[238447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:48 compute-0 python3.9[238446]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:19:48 compute-0 sudo[238447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:48 compute-0 sudo[238447]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:48 compute-0 sudo[238444]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:48 compute-0 sudo[238472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:19:48 compute-0 sudo[238472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:48 compute-0 sudo[238472]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:49 compute-0 sudo[238521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:49 compute-0 sudo[238521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:49 compute-0 sudo[238521]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:49 compute-0 sudo[238563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:19:49 compute-0 sudo[238563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:49 compute-0 ceph-mon[192914]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:19:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:19:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 05 01:19:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.758458392 +0000 UTC m=+0.080727309 container create d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.72647291 +0000 UTC m=+0.048741907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:49 compute-0 systemd[1]: Started libpod-conmon-d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7.scope.
Dec 05 01:19:49 compute-0 sudo[238751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adouqlqqnpwmhasmmmmnbpnlosoyrwax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897589.1049025-104-92591170278805/AnsiballZ_ini_file.py'
Dec 05 01:19:49 compute-0 sudo[238751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.906076348 +0000 UTC m=+0.228345335 container init d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.923803168 +0000 UTC m=+0.246072115 container start d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.931106464 +0000 UTC m=+0.253375471 container attach d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:19:49 compute-0 intelligent_mccarthy[238753]: 167 167
Dec 05 01:19:49 compute-0 systemd[1]: libpod-d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7.scope: Deactivated successfully.
Dec 05 01:19:49 compute-0 conmon[238753]: conmon d5fa3597fcc615c29678 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7.scope/container/memory.events
Dec 05 01:19:49 compute-0 podman[238708]: 2025-12-05 01:19:49.939705317 +0000 UTC m=+0.261974314 container died d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfecc8c5466544c3945e4c7c354090e98b9665e7e836f5179a64ca8dd05808d8-merged.mount: Deactivated successfully.
Dec 05 01:19:50 compute-0 podman[238708]: 2025-12-05 01:19:50.028233475 +0000 UTC m=+0.350502392 container remove d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:19:50 compute-0 systemd[1]: libpod-conmon-d5fa3597fcc615c29678fdd11994e1ae34a3b35bab0ff9cd197426342963edd7.scope: Deactivated successfully.
Dec 05 01:19:50 compute-0 python3.9[238755]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:19:50 compute-0 sudo[238751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:50 compute-0 podman[238802]: 2025-12-05 01:19:50.309600674 +0000 UTC m=+0.075920053 container create 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:19:50 compute-0 podman[238802]: 2025-12-05 01:19:50.28040101 +0000 UTC m=+0.046720399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:50 compute-0 systemd[1]: Started libpod-conmon-5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9.scope.
Dec 05 01:19:50 compute-0 ceph-mon[192914]: 11.1d scrub starts
Dec 05 01:19:50 compute-0 ceph-mon[192914]: 11.1d scrub ok
Dec 05 01:19:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:50 compute-0 podman[238802]: 2025-12-05 01:19:50.513693743 +0000 UTC m=+0.280013122 container init 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:19:50 compute-0 podman[238802]: 2025-12-05 01:19:50.541075826 +0000 UTC m=+0.307395175 container start 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:19:50 compute-0 podman[238802]: 2025-12-05 01:19:50.546791137 +0000 UTC m=+0.313110486 container attach 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:19:50 compute-0 sudo[238948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfpdobvpdjlntbajabghimokvdoryjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897590.3711119-104-5946924469084/AnsiballZ_ini_file.py'
Dec 05 01:19:51 compute-0 sudo[238948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:51 compute-0 python3.9[238950]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:19:51 compute-0 sudo[238948]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 05 01:19:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 05 01:19:51 compute-0 ceph-mon[192914]: 10.13 scrub starts
Dec 05 01:19:51 compute-0 ceph-mon[192914]: 10.13 scrub ok
Dec 05 01:19:51 compute-0 ceph-mon[192914]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:51 compute-0 infallible_haibt[238845]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:19:51 compute-0 infallible_haibt[238845]: --> relative data size: 1.0
Dec 05 01:19:51 compute-0 infallible_haibt[238845]: --> All data devices are unavailable
Dec 05 01:19:51 compute-0 systemd[1]: libpod-5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9.scope: Deactivated successfully.
Dec 05 01:19:51 compute-0 systemd[1]: libpod-5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9.scope: Consumed 1.202s CPU time.
Dec 05 01:19:51 compute-0 podman[238802]: 2025-12-05 01:19:51.812685418 +0000 UTC m=+1.579004817 container died 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d05ff8655a9decd9f8f2e07f2095b120afdc91ce193caf22d8bdea30af1bbe1d-merged.mount: Deactivated successfully.
Dec 05 01:19:51 compute-0 podman[238802]: 2025-12-05 01:19:51.919351188 +0000 UTC m=+1.685670527 container remove 5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:19:51 compute-0 systemd[1]: libpod-conmon-5640215ee8af0d9b24b306292e9542cc290dd73fad327fd929790058bb3136d9.scope: Deactivated successfully.
Dec 05 01:19:51 compute-0 sudo[238563]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:52 compute-0 sudo[239156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unwqjobanjynrjxiyzlqtyrlguytyoje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897591.522603-104-226491626941414/AnsiballZ_ini_file.py'
Dec 05 01:19:52 compute-0 sudo[239156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:52 compute-0 sudo[239115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:52 compute-0 sudo[239115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:52 compute-0 sudo[239115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:52 compute-0 sudo[239163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:19:52 compute-0 sudo[239163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:52 compute-0 sudo[239163]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:52 compute-0 python3.9[239160]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:19:52 compute-0 sudo[239156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:52 compute-0 sudo[239188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:52 compute-0 sudo[239188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:52 compute-0 sudo[239188]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 05 01:19:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 05 01:19:52 compute-0 ceph-mon[192914]: 8.12 scrub starts
Dec 05 01:19:52 compute-0 ceph-mon[192914]: 8.12 scrub ok
Dec 05 01:19:52 compute-0 sudo[239226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:19:52 compute-0 sudo[239226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.031423517 +0000 UTC m=+0.095706391 container create 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.003679585 +0000 UTC m=+0.067962549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:53 compute-0 systemd[1]: Started libpod-conmon-742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f.scope.
Dec 05 01:19:53 compute-0 sudo[239441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eclnvsqzvrhsktdsrzqzgxtmakeaivwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897592.5520525-104-58835080788483/AnsiballZ_ini_file.py'
Dec 05 01:19:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:53 compute-0 sudo[239441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.172159259 +0000 UTC m=+0.236442173 container init 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.191465353 +0000 UTC m=+0.255748227 container start 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.19631495 +0000 UTC m=+0.260597854 container attach 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:19:53 compute-0 sweet_wright[239442]: 167 167
Dec 05 01:19:53 compute-0 systemd[1]: libpod-742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f.scope: Deactivated successfully.
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.204531792 +0000 UTC m=+0.268814666 container died 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-303d6bf19998bfc2cb4276271660c7b4c8596015ee722286e0d67c1153d2d98b-merged.mount: Deactivated successfully.
Dec 05 01:19:53 compute-0 podman[239396]: 2025-12-05 01:19:53.267659583 +0000 UTC m=+0.331942467 container remove 742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:19:53 compute-0 systemd[1]: libpod-conmon-742c6569f495aeed0a5a40678307073e92ee0006b0647fdfe9a29f58771af22f.scope: Deactivated successfully.
Dec 05 01:19:53 compute-0 python3.9[239446]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:19:53 compute-0 sudo[239441]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:53 compute-0 ceph-mon[192914]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:53 compute-0 ceph-mon[192914]: 8.11 scrub starts
Dec 05 01:19:53 compute-0 ceph-mon[192914]: 8.11 scrub ok
Dec 05 01:19:53 compute-0 podman[239467]: 2025-12-05 01:19:53.525600392 +0000 UTC m=+0.072933379 container create 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:19:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 05 01:19:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 05 01:19:53 compute-0 podman[239467]: 2025-12-05 01:19:53.501248645 +0000 UTC m=+0.048581662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:53 compute-0 systemd[1]: Started libpod-conmon-891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1.scope.
Dec 05 01:19:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69be90b84f7b498a382d637b862829bd639fd05802638b0352eebeb5340e00cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69be90b84f7b498a382d637b862829bd639fd05802638b0352eebeb5340e00cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69be90b84f7b498a382d637b862829bd639fd05802638b0352eebeb5340e00cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69be90b84f7b498a382d637b862829bd639fd05802638b0352eebeb5340e00cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:53 compute-0 podman[239467]: 2025-12-05 01:19:53.730400921 +0000 UTC m=+0.277733928 container init 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:19:53 compute-0 podman[239467]: 2025-12-05 01:19:53.757762373 +0000 UTC m=+0.305095360 container start 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Dec 05 01:19:53 compute-0 podman[239467]: 2025-12-05 01:19:53.764345569 +0000 UTC m=+0.311678556 container attach 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:19:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:54 compute-0 sudo[239637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuswvgzixqyaexknccytvgyevxhgvso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897593.8136868-135-217083656087033/AnsiballZ_dnf.py'
Dec 05 01:19:54 compute-0 sudo[239637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 05 01:19:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 05 01:19:54 compute-0 ceph-mon[192914]: 11.9 scrub starts
Dec 05 01:19:54 compute-0 ceph-mon[192914]: 11.9 scrub ok
Dec 05 01:19:54 compute-0 python3.9[239639]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]: {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     "0": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "devices": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "/dev/loop3"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             ],
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_name": "ceph_lv0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_size": "21470642176",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "name": "ceph_lv0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "tags": {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_name": "ceph",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.crush_device_class": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.encrypted": "0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_id": "0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.vdo": "0"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             },
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "vg_name": "ceph_vg0"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         }
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     ],
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     "1": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "devices": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "/dev/loop4"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             ],
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_name": "ceph_lv1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_size": "21470642176",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "name": "ceph_lv1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "tags": {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_name": "ceph",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.crush_device_class": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.encrypted": "0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_id": "1",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.vdo": "0"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             },
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "vg_name": "ceph_vg1"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         }
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     ],
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     "2": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "devices": [
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "/dev/loop5"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             ],
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_name": "ceph_lv2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_size": "21470642176",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "name": "ceph_lv2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "tags": {
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.cluster_name": "ceph",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.crush_device_class": "",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.encrypted": "0",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osd_id": "2",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:                 "ceph.vdo": "0"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             },
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "type": "block",
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:             "vg_name": "ceph_vg2"
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:         }
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]:     ]
Dec 05 01:19:54 compute-0 blissful_lovelace[239507]: }
Dec 05 01:19:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 05 01:19:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 05 01:19:54 compute-0 systemd[1]: libpod-891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1.scope: Deactivated successfully.
Dec 05 01:19:54 compute-0 podman[239467]: 2025-12-05 01:19:54.626321922 +0000 UTC m=+1.173654929 container died 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:19:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 05 01:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-69be90b84f7b498a382d637b862829bd639fd05802638b0352eebeb5340e00cc-merged.mount: Deactivated successfully.
Dec 05 01:19:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 05 01:19:54 compute-0 podman[239467]: 2025-12-05 01:19:54.769345328 +0000 UTC m=+1.316678335 container remove 891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lovelace, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:19:54 compute-0 systemd[1]: libpod-conmon-891b936a29fb2e7fb28de4132ce7a60e1e1d438af41dc2145965e8eec3b8afa1.scope: Deactivated successfully.
Dec 05 01:19:54 compute-0 sudo[239226]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:54 compute-0 sudo[239663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:54 compute-0 sudo[239663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:55 compute-0 sudo[239663]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:55 compute-0 podman[239656]: 2025-12-05 01:19:55.028333196 +0000 UTC m=+0.194073707 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:19:55 compute-0 podman[239698]: 2025-12-05 01:19:55.146819969 +0000 UTC m=+0.143782728 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:19:55 compute-0 sudo[239707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:19:55 compute-0 sudo[239707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:55 compute-0 sudo[239707]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:55 compute-0 sudo[239748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:55 compute-0 sudo[239748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:55 compute-0 sudo[239748]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:55 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 05 01:19:55 compute-0 podman[239746]: 2025-12-05 01:19:55.360867199 +0000 UTC m=+0.179527697 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 01:19:55 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 05 01:19:55 compute-0 sudo[239790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:19:55 compute-0 sudo[239790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.10 scrub starts
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.10 scrub ok
Dec 05 01:19:55 compute-0 ceph-mon[192914]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.1a scrub starts
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.1a scrub ok
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.e scrub starts
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 10.e scrub ok
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 11.18 scrub starts
Dec 05 01:19:55 compute-0 ceph-mon[192914]: 11.18 scrub ok
Dec 05 01:19:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 05 01:19:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 05 01:19:55 compute-0 sudo[239637]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:55 compute-0 podman[239860]: 2025-12-05 01:19:55.944534448 +0000 UTC m=+0.095197016 container create 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:19:55 compute-0 podman[239860]: 2025-12-05 01:19:55.905200258 +0000 UTC m=+0.055862866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:56 compute-0 systemd[1]: Started libpod-conmon-94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2.scope.
Dec 05 01:19:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:56 compute-0 podman[239860]: 2025-12-05 01:19:56.092358199 +0000 UTC m=+0.243020807 container init 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:19:56 compute-0 podman[239860]: 2025-12-05 01:19:56.108835365 +0000 UTC m=+0.259497923 container start 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:19:56 compute-0 podman[239860]: 2025-12-05 01:19:56.116065029 +0000 UTC m=+0.266727657 container attach 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:19:56 compute-0 reverent_newton[239900]: 167 167
Dec 05 01:19:56 compute-0 systemd[1]: libpod-94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2.scope: Deactivated successfully.
Dec 05 01:19:56 compute-0 podman[239860]: 2025-12-05 01:19:56.122990254 +0000 UTC m=+0.273652822 container died 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f951c476f9344a9a11cba8fec828b116c48058eba4c81236962657a42631fed8-merged.mount: Deactivated successfully.
Dec 05 01:19:56 compute-0 podman[239860]: 2025-12-05 01:19:56.202990821 +0000 UTC m=+0.353653389 container remove 94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:19:56 compute-0 systemd[1]: libpod-conmon-94e1a9ba55f7bdc945efc672288458c13aafc08cc4128924c298820e7006c8d2.scope: Deactivated successfully.
Dec 05 01:19:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:56 compute-0 podman[239954]: 2025-12-05 01:19:56.497681507 +0000 UTC m=+0.073697211 container create a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:19:56 compute-0 ceph-mon[192914]: 8.c scrub starts
Dec 05 01:19:56 compute-0 ceph-mon[192914]: 8.c scrub ok
Dec 05 01:19:56 compute-0 ceph-mon[192914]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:56 compute-0 systemd[1]: Started libpod-conmon-a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c.scope.
Dec 05 01:19:56 compute-0 podman[239954]: 2025-12-05 01:19:56.472666841 +0000 UTC m=+0.048682575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:19:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf4bd019919616249411f661e3aac91396b34456c3411ab1aa3f04d7e36524/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf4bd019919616249411f661e3aac91396b34456c3411ab1aa3f04d7e36524/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf4bd019919616249411f661e3aac91396b34456c3411ab1aa3f04d7e36524/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cf4bd019919616249411f661e3aac91396b34456c3411ab1aa3f04d7e36524/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:19:56 compute-0 podman[239954]: 2025-12-05 01:19:56.665683628 +0000 UTC m=+0.241699362 container init a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:19:56 compute-0 podman[239954]: 2025-12-05 01:19:56.685679642 +0000 UTC m=+0.261695346 container start a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:19:56 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 05 01:19:56 compute-0 podman[239954]: 2025-12-05 01:19:56.691480006 +0000 UTC m=+0.267495750 container attach a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:19:56 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 05 01:19:56 compute-0 sudo[240069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxavwulzctlwbsiprmalhwmlwlcugyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897596.348783-146-279769862525856/AnsiballZ_setup.py'
Dec 05 01:19:56 compute-0 sudo[240069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:19:57 compute-0 podman[240072]: 2025-12-05 01:19:57.153321348 +0000 UTC m=+0.135384422 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:19:57 compute-0 python3.9[240071]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:19:57 compute-0 sudo[240069]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:57 compute-0 ceph-mon[192914]: 11.1 scrub starts
Dec 05 01:19:57 compute-0 ceph-mon[192914]: 11.1 scrub ok
Dec 05 01:19:57 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 05 01:19:57 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 05 01:19:57 compute-0 elegant_poincare[240008]: {
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_id": 0,
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "type": "bluestore"
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     },
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_id": 1,
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "type": "bluestore"
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     },
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_id": 2,
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:         "type": "bluestore"
Dec 05 01:19:57 compute-0 elegant_poincare[240008]:     }
Dec 05 01:19:57 compute-0 elegant_poincare[240008]: }
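Note: the JSON above, printed by the short-lived cephadm helper container (auto-named elegant_poincare), appears to have the shape of `ceph-volume raw list` output: one entry per OSD uuid, each carrying the cluster fsid, backing device and store type. A minimal Python sketch for turning such a blob into an osd_id -> device table; the `raw` literal is abbreviated to the first of the three entries shown in the log:

    import json

    # Abbreviated copy of the inventory JSON logged above; the real
    # blob has one entry per OSD (osd.0..osd.2 on this host).
    raw = '''
    {
      "8c4de221-4fda-4bb1-b794-fc4329742186": {
        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
        "type": "bluestore"
      }
    }
    '''

    osds = json.loads(raw)
    for info in sorted(osds.values(), key=lambda i: i["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  ({info['type']})")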
Dec 05 01:19:57 compute-0 systemd[1]: libpod-a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c.scope: Deactivated successfully.
Dec 05 01:19:57 compute-0 systemd[1]: libpod-a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c.scope: Consumed 1.170s CPU time.
Dec 05 01:19:57 compute-0 podman[239954]: 2025-12-05 01:19:57.868846068 +0000 UTC m=+1.444861802 container died a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 05 01:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-66cf4bd019919616249411f661e3aac91396b34456c3411ab1aa3f04d7e36524-merged.mount: Deactivated successfully.
Dec 05 01:19:57 compute-0 podman[239954]: 2025-12-05 01:19:57.996556562 +0000 UTC m=+1.572572286 container remove a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:19:58 compute-0 systemd[1]: libpod-conmon-a568795848d499b9a506b6d1e454ae5d4e898b79500a20040e191eb9b680f37c.scope: Deactivated successfully.
Dec 05 01:19:58 compute-0 sudo[239790]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:19:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:19:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev da9cf32a-b2e5-4a6f-b39f-ff66d8c24618 does not exist
Dec 05 01:19:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 20e4e4e7-eaef-4eba-aef2-e1380011236a does not exist
Dec 05 01:19:58 compute-0 sudo[240292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzbhvlrueppbcatmwbfnhoaabsmiouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897597.5881429-154-87510715548071/AnsiballZ_stat.py'
Dec 05 01:19:58 compute-0 sudo[240292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:58 compute-0 sudo[240271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:19:58 compute-0 sudo[240271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:58 compute-0 sudo[240271]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:58 compute-0 sudo[240310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:19:58 compute-0 sudo[240310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:19:58 compute-0 sudo[240310]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:58 compute-0 python3.9[240307]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:19:58 compute-0 sudo[240292]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 05 01:19:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 05 01:19:59 compute-0 ceph-mon[192914]: 8.f scrub starts
Dec 05 01:19:59 compute-0 ceph-mon[192914]: 8.f scrub ok
Dec 05 01:19:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:19:59 compute-0 ceph-mon[192914]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:19:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 05 01:19:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 05 01:19:59 compute-0 sudo[240484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtuovqwemjsyzqponfmpjjrislwwkwdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897598.8186517-163-89694258916942/AnsiballZ_stat.py'
Dec 05 01:19:59 compute-0 sudo[240484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:19:59 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 05 01:19:59 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 05 01:19:59 compute-0 python3.9[240486]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:19:59 compute-0 sudo[240484]: pam_unix(sudo:session): session closed for user root
Dec 05 01:19:59 compute-0 podman[158197]: time="2025-12-05T01:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6821 "" "Go-http-client/1.1"
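Note: these access-log entries are the podman system service answering libpod REST calls over its unix socket (the same /run/podman/podman.sock that podman_exporter mounts, per its config_data later in this log). A sketch of issuing the first of those GETs from Python; the socket path and URL are taken from the log, the rest is stock http.client, and the Names/State fields are the usual libpod list-containers keys:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP, but dialed over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"][0], c["State"])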
Dec 05 01:20:00 compute-0 ceph-mon[192914]: 10.19 scrub starts
Dec 05 01:20:00 compute-0 ceph-mon[192914]: 10.19 scrub ok
Dec 05 01:20:00 compute-0 ceph-mon[192914]: 11.12 scrub starts
Dec 05 01:20:00 compute-0 ceph-mon[192914]: 11.12 scrub ok
Dec 05 01:20:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:00 compute-0 sudo[240636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrvpiinbhnwvatxbcaouckdjcenywepr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897600.0126114-173-127560635837979/AnsiballZ_command.py'
Dec 05 01:20:00 compute-0 sudo[240636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 05 01:20:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 05 01:20:00 compute-0 python3.9[240638]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:20:00 compute-0 sudo[240636]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:00 compute-0 podman[240640]: 2025-12-05 01:20:00.920207911 +0000 UTC m=+0.115909882 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4)
Dec 05 01:20:01 compute-0 ceph-mon[192914]: 10.6 scrub starts
Dec 05 01:20:01 compute-0 ceph-mon[192914]: 10.6 scrub ok
Dec 05 01:20:01 compute-0 ceph-mon[192914]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:01 compute-0 openstack_network_exporter[160350]: ERROR   01:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:20:01 compute-0 openstack_network_exporter[160350]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:20:01 compute-0 openstack_network_exporter[160350]: ERROR   01:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:20:01 compute-0 openstack_network_exporter[160350]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:20:01 compute-0 openstack_network_exporter[160350]: ERROR   01:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
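Note: the exporter's appctl calls fail because it finds no control sockets in the directories it mounts from the host (/var/run/openvswitch and /var/lib/openvswitch/ovn, per the openstack_network_exporter volumes later in this log); ovn-northd in particular does not run on a compute node, so its socket can never appear here. A quick Python sketch of the same existence check; the directory paths come from the log, and the `*.ctl` pattern is the standard OVS <daemon>.<pid>.ctl naming convention:

    import glob
    import os

    # Host-side directories the exporter mounts; appctl-style calls
    # need a <daemon>.<pid>.ctl unix socket per target daemon.
    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        ctls = glob.glob(os.path.join(d, "*.ctl"))
        print(d, "->", ctls if ctls else "no control sockets found")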
Dec 05 01:20:01 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 05 01:20:01 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 05 01:20:01 compute-0 sudo[240808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavjpzdiiqjhravdpdaxnptqlmuuwpen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897601.1721928-183-103260897155099/AnsiballZ_service_facts.py'
Dec 05 01:20:01 compute-0 sudo[240808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:02 compute-0 ceph-mon[192914]: 8.9 scrub starts
Dec 05 01:20:02 compute-0 ceph-mon[192914]: 8.9 scrub ok
Dec 05 01:20:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:02 compute-0 python3.9[240810]: ansible-service_facts Invoked
Dec 05 01:20:02 compute-0 network[240827]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:20:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:02 compute-0 network[240828]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:20:02 compute-0 network[240829]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:20:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 05 01:20:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 05 01:20:03 compute-0 ceph-mon[192914]: 10.11 scrub starts
Dec 05 01:20:03 compute-0 ceph-mon[192914]: 10.11 scrub ok
Dec 05 01:20:03 compute-0 ceph-mon[192914]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:03 compute-0 podman[240836]: 2025-12-05 01:20:03.426312227 +0000 UTC m=+0.140522546 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, architecture=x86_64, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git)
Dec 05 01:20:04 compute-0 ceph-mon[192914]: 11.4 scrub starts
Dec 05 01:20:04 compute-0 ceph-mon[192914]: 11.4 scrub ok
Dec 05 01:20:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:04 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 05 01:20:04 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 05 01:20:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 05 01:20:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 05 01:20:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 05 01:20:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 05 01:20:05 compute-0 ceph-mon[192914]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:05 compute-0 ceph-mon[192914]: 9.e scrub starts
Dec 05 01:20:05 compute-0 ceph-mon[192914]: 9.e scrub ok
Dec 05 01:20:05 compute-0 ceph-mon[192914]: 8.18 scrub starts
Dec 05 01:20:05 compute-0 ceph-mon[192914]: 8.18 scrub ok
Dec 05 01:20:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 05 01:20:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 05 01:20:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 05 01:20:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 05 01:20:06 compute-0 ceph-mon[192914]: 10.f scrub starts
Dec 05 01:20:06 compute-0 ceph-mon[192914]: 10.f scrub ok
Dec 05 01:20:06 compute-0 ceph-mon[192914]: 9.6 scrub starts
Dec 05 01:20:06 compute-0 ceph-mon[192914]: 9.6 scrub ok
Dec 05 01:20:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 05 01:20:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 05 01:20:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:07 compute-0 ceph-mon[192914]: 10.12 scrub starts
Dec 05 01:20:07 compute-0 ceph-mon[192914]: 10.12 scrub ok
Dec 05 01:20:07 compute-0 ceph-mon[192914]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:07 compute-0 ceph-mon[192914]: 10.b scrub starts
Dec 05 01:20:07 compute-0 sudo[240808]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:08 compute-0 ceph-mon[192914]: 10.b scrub ok
Dec 05 01:20:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:08 compute-0 podman[241044]: 2025-12-05 01:20:08.374600036 +0000 UTC m=+0.130004890 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:20:08 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 05 01:20:08 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 05 01:20:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 05 01:20:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 05 01:20:08 compute-0 sudo[241165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fblqyicsecahcpjrzuvltneyhjvpmzoh ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764897608.0821047-198-32738721393758/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764897608.0821047-198-32738721393758/args'
Dec 05 01:20:08 compute-0 sudo[241165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:08 compute-0 sudo[241165]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:09 compute-0 ceph-mon[192914]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:09 compute-0 ceph-mon[192914]: 8.1f scrub starts
Dec 05 01:20:09 compute-0 ceph-mon[192914]: 8.1f scrub ok
Dec 05 01:20:09 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 05 01:20:09 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 05 01:20:09 compute-0 sudo[241332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbviqkeahyoejkhblnklbcbjoxmwhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897609.4068987-209-6182266339623/AnsiballZ_dnf.py'
Dec 05 01:20:09 compute-0 sudo[241332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:10 compute-0 ceph-mon[192914]: 10.14 scrub starts
Dec 05 01:20:10 compute-0 ceph-mon[192914]: 10.14 scrub ok
Dec 05 01:20:10 compute-0 ceph-mon[192914]: 9.f scrub starts
Dec 05 01:20:10 compute-0 python3.9[241334]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
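Note: the ansible-dnf call above ensures chrony is present (state=present), i.e. it installs only when the package is missing. A rough stand-alone equivalent in Python, assuming an EL host with rpm/dnf on PATH; the real module does far more (GPG checks, repo handling, structured results):

    import subprocess

    def ensure_installed(pkg: str) -> bool:
        """Install pkg if absent; return True when a change was made."""
        present = subprocess.run(["rpm", "-q", pkg],
                                 capture_output=True).returncode == 0
        if present:
            return False                      # idempotent no-op
        subprocess.run(["dnf", "-y", "install", pkg], check=True)
        return True

    print("changed:", ensure_installed("chrony"))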
Dec 05 01:20:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 05 01:20:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 05 01:20:11 compute-0 ceph-mon[192914]: 9.f scrub ok
Dec 05 01:20:11 compute-0 ceph-mon[192914]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:11 compute-0 ceph-mon[192914]: 10.2 scrub starts
Dec 05 01:20:11 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 05 01:20:11 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 05 01:20:11 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 05 01:20:11 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 05 01:20:11 compute-0 sudo[241332]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:12 compute-0 ceph-mon[192914]: 10.2 scrub ok
Dec 05 01:20:12 compute-0 ceph-mon[192914]: 9.17 scrub starts
Dec 05 01:20:12 compute-0 ceph-mon[192914]: 9.17 scrub ok
Dec 05 01:20:12 compute-0 ceph-mon[192914]: 8.1d scrub starts
Dec 05 01:20:12 compute-0 ceph-mon[192914]: 8.1d scrub ok
Dec 05 01:20:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 05 01:20:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 05 01:20:12 compute-0 sudo[241485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eednqgmnxgruzcjffogplvkhyrmsqgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897612.0408697-222-161129295410427/AnsiballZ_package_facts.py'
Dec 05 01:20:12 compute-0 sudo[241485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:13 compute-0 python3.9[241487]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 05 01:20:13 compute-0 ceph-mon[192914]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:13 compute-0 ceph-mon[192914]: 11.19 scrub starts
Dec 05 01:20:13 compute-0 ceph-mon[192914]: 11.19 scrub ok
Dec 05 01:20:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 05 01:20:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 05 01:20:13 compute-0 sudo[241485]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:14 compute-0 ceph-mon[192914]: 9.7 scrub starts
Dec 05 01:20:14 compute-0 ceph-mon[192914]: 9.7 scrub ok
Dec 05 01:20:14 compute-0 sudo[241637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erpvnkbfpoqajudwtoqnjwqedeajadvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897614.1067433-232-71840004854192/AnsiballZ_stat.py'
Dec 05 01:20:14 compute-0 sudo[241637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:14 compute-0 python3.9[241639]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:14 compute-0 sudo[241637]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:15 compute-0 ceph-mon[192914]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:15 compute-0 sudo[241715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtwmuboixicrrkqwpobplwwoozolarj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897614.1067433-232-71840004854192/AnsiballZ_file.py'
Dec 05 01:20:15 compute-0 sudo[241715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:15 compute-0 python3.9[241717]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:15 compute-0 sudo[241715]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 05 01:20:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:20:16
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'default.rgw.control']
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:20:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:20:16 compute-0 sudo[241867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvienidsudtoakggoerxcshzyztzaror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897615.9138029-244-155170964118175/AnsiballZ_stat.py'
Dec 05 01:20:16 compute-0 sudo[241867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:16 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 05 01:20:16 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 05 01:20:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 05 01:20:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 05 01:20:16 compute-0 python3.9[241869]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:16 compute-0 sudo[241867]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:17 compute-0 sudo[241945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrscurwreolzlnwlvgooepfpnxayoqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897615.9138029-244-155170964118175/AnsiballZ_file.py'
Dec 05 01:20:17 compute-0 sudo[241945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:17 compute-0 ceph-mon[192914]: 9.15 scrub starts
Dec 05 01:20:17 compute-0 ceph-mon[192914]: 9.15 scrub ok
Dec 05 01:20:17 compute-0 ceph-mon[192914]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:17 compute-0 ceph-mon[192914]: 11.10 scrub starts
Dec 05 01:20:17 compute-0 ceph-mon[192914]: 11.10 scrub ok
Dec 05 01:20:17 compute-0 python3.9[241947]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:17 compute-0 sudo[241945]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 05 01:20:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 05 01:20:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:18 compute-0 ceph-mon[192914]: 9.1f scrub starts
Dec 05 01:20:18 compute-0 ceph-mon[192914]: 9.1f scrub ok
Dec 05 01:20:18 compute-0 ceph-mon[192914]: 8.1a scrub starts
Dec 05 01:20:18 compute-0 ceph-mon[192914]: 8.1a scrub ok
Dec 05 01:20:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 05 01:20:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 05 01:20:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 05 01:20:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 05 01:20:18 compute-0 sudo[242097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzvbahhgvscunjjjotfslxwypduarzca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897618.0342088-262-142453290157415/AnsiballZ_lineinfile.py'
Dec 05 01:20:18 compute-0 sudo[242097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:19 compute-0 python3.9[242099]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
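Note: the lineinfile invocation above pins PEERNTP=no in /etc/sysconfig/network, the usual way on EL hosts to keep DHCP-supplied NTP servers from being injected alongside chrony's configuration. In essence (skipping the module's backup, mode and atomic-write handling) it is, as a hedged sketch:

    import re
    from pathlib import Path

    path = Path("/etc/sysconfig/network")
    wanted, pattern = "PEERNTP=no", re.compile(r"^PEERNTP=")

    lines = path.read_text().splitlines() if path.exists() else []  # create=True
    matches = [i for i, l in enumerate(lines) if pattern.match(l)]
    if matches:
        lines[matches[-1]] = wanted   # lineinfile replaces the last match by default
    else:
        lines.append(wanted)          # state=present: append when absent
    path.write_text("\n".join(lines) + "\n")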
Dec 05 01:20:19 compute-0 sudo[242097]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:19 compute-0 ceph-mon[192914]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:19 compute-0 ceph-mon[192914]: 9.18 scrub starts
Dec 05 01:20:19 compute-0 ceph-mon[192914]: 9.18 scrub ok
Dec 05 01:20:19 compute-0 ceph-mon[192914]: 8.6 scrub starts
Dec 05 01:20:19 compute-0 ceph-mon[192914]: 8.6 scrub ok
Dec 05 01:20:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 05 01:20:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 05 01:20:20 compute-0 sudo[242250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjxcccgsnkkzhazujgtneyjdxrhblld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897620.1242194-277-241369586509813/AnsiballZ_setup.py'
Dec 05 01:20:20 compute-0 sudo[242250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:21 compute-0 python3.9[242252]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:20:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 05 01:20:21 compute-0 ceph-mon[192914]: pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:21 compute-0 ceph-mon[192914]: 9.8 scrub starts
Dec 05 01:20:21 compute-0 ceph-mon[192914]: 9.8 scrub ok
Dec 05 01:20:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 05 01:20:21 compute-0 sudo[242250]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:22 compute-0 sudo[242334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atbeprbwspdoqdniwjgtwzelhpigktrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897620.1242194-277-241369586509813/AnsiballZ_systemd.py'
Dec 05 01:20:22 compute-0 sudo[242334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:22 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 05 01:20:22 compute-0 ceph-mon[192914]: 9.c scrub starts
Dec 05 01:20:22 compute-0 ceph-mon[192914]: 9.c scrub ok
Dec 05 01:20:22 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 05 01:20:22 compute-0 python3.9[242336]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
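Note: the matching service step: ansible's systemd module with enabled=True, state=started amounts to two check-then-act systemctl pairs. A minimal sketch (the real module additionally parses unit state and reports change flags):

    import subprocess

    def ok(*args) -> bool:
        """True when `systemctl <args>` exits 0."""
        return subprocess.run(["systemctl", *args],
                              capture_output=True).returncode == 0

    if not ok("is-enabled", "chronyd"):
        subprocess.run(["systemctl", "enable", "chronyd"], check=True)
    if not ok("is-active", "chronyd"):
        subprocess.run(["systemctl", "start", "chronyd"], check=True)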
Dec 05 01:20:22 compute-0 sudo[242334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 05 01:20:23 compute-0 ceph-mon[192914]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:23 compute-0 ceph-mon[192914]: 9.13 scrub starts
Dec 05 01:20:23 compute-0 ceph-mon[192914]: 9.13 scrub ok
Dec 05 01:20:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 05 01:20:23 compute-0 sshd-session[236766]: Connection closed by 192.168.122.30 port 39996
Dec 05 01:20:23 compute-0 sshd-session[236763]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:20:23 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 05 01:20:23 compute-0 systemd[1]: session-45.scope: Consumed 39.304s CPU time.
Dec 05 01:20:23 compute-0 systemd-logind[792]: Session 45 logged out. Waiting for processes to exit.
Dec 05 01:20:23 compute-0 systemd-logind[792]: Removed session 45.
Dec 05 01:20:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:24 compute-0 ceph-mon[192914]: 9.19 scrub starts
Dec 05 01:20:24 compute-0 ceph-mon[192914]: 9.19 scrub ok
Dec 05 01:20:25 compute-0 ceph-mon[192914]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:25 compute-0 podman[242364]: 2025-12-05 01:20:25.736438923 +0000 UTC m=+0.133268118 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:20:25 compute-0 podman[242363]: 2025-12-05 01:20:25.759205465 +0000 UTC m=+0.160617079 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec 05 01:20:25 compute-0 podman[242365]: 2025-12-05 01:20:25.77926899 +0000 UTC m=+0.173042209 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
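Note: the pg_autoscaler numbers above are internally consistent: every `pg target` equals usage_ratio x bias x 300, where 300 is plausibly the cluster PG budget (the default mon_target_pg_per_osd of 100 times this host's 3 OSDs; both factors are an inference, only their product of 300 is confirmed by the logged values). The `quantized to` figures come from a separate power-of-two rounding and change-damping step that is not reproduced here. Checking the arithmetic in Python:

    # PG budget inferred from the log: every pg target below divides
    # by usage_ratio * bias to exactly 300 (assumed to decompose as
    # mon_target_pg_per_osd=100 * 3 OSDs).
    PG_BUDGET = 300

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    # Usage ratios and biases copied from the lines above:
    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557
    print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> ~0.00061047
    print(pg_target(2.1620840658982875e-06, 1.0))  # default.rgw.log    -> ~0.00064863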
Dec 05 01:20:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:27 compute-0 ceph-mon[192914]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:27 compute-0 podman[242427]: 2025-12-05 01:20:27.686258367 +0000 UTC m=+0.103642103 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 01:20:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 05 01:20:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 05 01:20:29 compute-0 ceph-mon[192914]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:29 compute-0 ceph-mon[192914]: 8.b scrub starts
Dec 05 01:20:29 compute-0 ceph-mon[192914]: 8.b scrub ok
Dec 05 01:20:29 compute-0 sshd-session[242446]: Accepted publickey for zuul from 192.168.122.30 port 57634 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:20:29 compute-0 systemd-logind[792]: New session 46 of user zuul.
Dec 05 01:20:29 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 05 01:20:29 compute-0 sshd-session[242446]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:20:29 compute-0 podman[158197]: time="2025-12-05T01:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6824 "" "Go-http-client/1.1"
Dec 05 01:20:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:30 compute-0 sudo[242599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpwjhsgirmnjrtpuxbsdhjiybhhfqrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897629.869837-22-88900534152307/AnsiballZ_file.py'
Dec 05 01:20:30 compute-0 sudo[242599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:30 compute-0 python3.9[242601]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:30 compute-0 sudo[242599]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:31 compute-0 openstack_network_exporter[160350]: ERROR   01:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:20:31 compute-0 openstack_network_exporter[160350]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:20:31 compute-0 openstack_network_exporter[160350]: ERROR   01:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:20:31 compute-0 openstack_network_exporter[160350]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:20:31 compute-0 openstack_network_exporter[160350]: ERROR   01:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:20:31 compute-0 ceph-mon[192914]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:31 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 05 01:20:31 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 05 01:20:31 compute-0 podman[242678]: 2025-12-05 01:20:31.715690403 +0000 UTC m=+0.130633593 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-type=git, distribution-scope=public, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm)
Dec 05 01:20:31 compute-0 sudo[242771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgxchxfkolqdwrzqdauetnnpthpnemkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897631.240361-34-3192764441993/AnsiballZ_stat.py'
Dec 05 01:20:31 compute-0 sudo[242771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
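[Editor's note] The mon cache autotuner values are plain byte counts; converting them shows a total cache of roughly 973 MiB split into exactly 332 MiB each for the inc/full osdmap caches and 308 MiB for the kv cache:

    # Byte counts from the _set_new_cache_sizes line above, in MiB/GiB.
    for name, b in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 322961408)]:
        print(f"{name}: {b / 2**20:.0f} MiB ({b / 2**30:.2f} GiB)")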
Dec 05 01:20:32 compute-0 python3.9[242773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:32 compute-0 sudo[242771]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:32 compute-0 ceph-mon[192914]: 11.6 scrub starts
Dec 05 01:20:32 compute-0 ceph-mon[192914]: 11.6 scrub ok
Dec 05 01:20:32 compute-0 sudo[242849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpwcqkxwvwiuqtwvmbptfuibmpqknyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897631.240361-34-3192764441993/AnsiballZ_file.py'
Dec 05 01:20:32 compute-0 sudo[242849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:32 compute-0 python3.9[242851]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:32 compute-0 sudo[242849]: pam_unix(sudo:session): session closed for user root
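[Editor's note] The stat/file pair above is Ansible's usual copy follow-up: ansible.legacy.file with state=file only adjusts metadata on an existing path (here mode=0644 on ceph-networks.yaml) and fails if the file is absent. A minimal sketch of that semantics (ours, not the module source):

    import os, stat, sys

    path = "/var/lib/edpm-config/firewall/ceph-networks.yaml"
    if not os.path.exists(path):
        sys.exit(f"{path} is absent")  # state=file never creates the file
    current = stat.S_IMODE(os.stat(path).st_mode)
    if current != 0o644:
        os.chmod(path, 0o644)  # the only change this task may make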
Dec 05 01:20:33 compute-0 sshd-session[242449]: Connection closed by 192.168.122.30 port 57634
Dec 05 01:20:33 compute-0 sshd-session[242446]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:20:33 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 05 01:20:33 compute-0 systemd[1]: session-46.scope: Consumed 2.775s CPU time.
Dec 05 01:20:33 compute-0 systemd-logind[792]: Session 46 logged out. Waiting for processes to exit.
Dec 05 01:20:33 compute-0 systemd-logind[792]: Removed session 46.
Dec 05 01:20:33 compute-0 ceph-mon[192914]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:33 compute-0 podman[242876]: 2025-12-05 01:20:33.714818816 +0000 UTC m=+0.125117578 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
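[Editor's note] This exporter container mounts /var/run/openvswitch and /var/lib/openvswitch/ovn into /run/{openvswitch,ovn}, which explains the earlier "no control socket files found for ovn-northd": ovn-northd does not run on a compute node, so its control socket never appears. The exporter's PID lookup amounts to globbing for that socket, roughly:

    import glob

    # ovn-northd would create /run/ovn/ovn-northd.<pid>.ctl; on a compute
    # node the glob is empty, matching the appctl.go:144 errors above.
    sockets = glob.glob("/run/ovn/ovn-northd.*.ctl")
    print(sockets or "no control socket files found for ovn-northd")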
Dec 05 01:20:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 05 01:20:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 05 01:20:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 05 01:20:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 05 01:20:35 compute-0 ceph-mon[192914]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:35 compute-0 ceph-mon[192914]: 9.b scrub starts
Dec 05 01:20:35 compute-0 ceph-mon[192914]: 9.b scrub ok
Dec 05 01:20:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Dec 05 01:20:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Dec 05 01:20:36 compute-0 ceph-mon[192914]: 9.1 scrub starts
Dec 05 01:20:36 compute-0 ceph-mon[192914]: 9.1 scrub ok
Dec 05 01:20:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 05 01:20:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 05 01:20:37 compute-0 ceph-mon[192914]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:37 compute-0 ceph-mon[192914]: 9.d deep-scrub starts
Dec 05 01:20:37 compute-0 ceph-mon[192914]: 9.d deep-scrub ok
Dec 05 01:20:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:38 compute-0 ceph-mon[192914]: 9.1d scrub starts
Dec 05 01:20:38 compute-0 ceph-mon[192914]: 9.1d scrub ok
Dec 05 01:20:38 compute-0 ceph-mon[192914]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:38 compute-0 sshd-session[242896]: Accepted publickey for zuul from 192.168.122.30 port 58452 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:20:38 compute-0 systemd-logind[792]: New session 47 of user zuul.
Dec 05 01:20:38 compute-0 systemd[1]: Started Session 47 of User zuul.
Dec 05 01:20:38 compute-0 sshd-session[242896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:20:38 compute-0 podman[242898]: 2025-12-05 01:20:38.703538314 +0000 UTC m=+0.104744394 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
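[Editor's note] The --collector.systemd.unit-include value logged above is a regular expression (escaped once for the shell, hence the doubled backslash), which node_exporter applies anchored to full unit names. Testing it with Python's re shows which units this node will scrape:

    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    units = ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "sshd.service"]
    print([u for u in units if pattern.fullmatch(u)])
    # ['edpm_nova_compute.service', 'ovs-vswitchd.service', 'virtqemud.service']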
Dec 05 01:20:40 compute-0 python3.9[243073]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:20:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 05 01:20:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 05 01:20:41 compute-0 ceph-mon[192914]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:41 compute-0 ceph-mon[192914]: 9.1b scrub starts
Dec 05 01:20:41 compute-0 ceph-mon[192914]: 9.1b scrub ok
Dec 05 01:20:41 compute-0 sudo[243227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bczcobxaobujvyvzuzxdmnttbzdscvfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897640.839714-33-261889265228695/AnsiballZ_file.py'
Dec 05 01:20:41 compute-0 sudo[243227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:41 compute-0 python3.9[243229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:41 compute-0 sudo[243227]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 05 01:20:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.544 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.545 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:20:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:20:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
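[Editor's note] The block above is one complete polling cycle: every pollster is registered against a single-worker ThreadPoolExecutor, each runs its local_instances discovery, and because no VMs exist on this node discovery returns an empty list, so every pollster logs both "Skip ... no resources found this cycle" and "Finished processing pollster". The control flow reduces to something like this sketch (under those assumptions; not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no VMs on this compute node this cycle

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
        print(f"Finished processing pollster [{name}]")

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes"]
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] threads, as logged
        for p in pollsters:
            pool.submit(run_pollster, p)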
Dec 05 01:20:42 compute-0 sudo[243403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouwqrsgrccnhicenudwixnfospuhkjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897642.119222-41-84845698760853/AnsiballZ_stat.py'
Dec 05 01:20:42 compute-0 sudo[243403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:43 compute-0 python3.9[243405]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:43 compute-0 sudo[243403]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:43 compute-0 ceph-mon[192914]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:43 compute-0 ceph-mon[192914]: 9.9 scrub starts
Dec 05 01:20:43 compute-0 ceph-mon[192914]: 9.9 scrub ok
Dec 05 01:20:43 compute-0 sudo[243481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdqzuyrqmwxqepkxvocpfidwjfhxiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897642.119222-41-84845698760853/AnsiballZ_file.py'
Dec 05 01:20:43 compute-0 sudo[243481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:43 compute-0 python3.9[243483]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.6w_i1pns recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:43 compute-0 sudo[243481]: pam_unix(sudo:session): session closed for user root
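[Editor's note] /root/.config/containers/auth.json managed above is the standard containers-auth.json(5) credential store: a map of registry host to base64("user:password"). A minimal sketch of producing one, with hypothetical credentials:

    import base64, json

    creds = base64.b64encode(b"zuul:example-token").decode()  # hypothetical
    auth = {"auths": {"quay.io": {"auth": creds}}}
    print(json.dumps(auth, indent=2))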
Dec 05 01:20:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:44 compute-0 sudo[243633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwycnxobgkokfzyxewnozwmjiaippyio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897644.2802384-61-221803879117662/AnsiballZ_stat.py'
Dec 05 01:20:44 compute-0 sudo[243633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:45 compute-0 python3.9[243635]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:45 compute-0 sudo[243633]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:45 compute-0 ceph-mon[192914]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:45 compute-0 sudo[243711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxsmtcnpigtjsgyksngzkxxbssmdkrmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897644.2802384-61-221803879117662/AnsiballZ_file.py'
Dec 05 01:20:45 compute-0 sudo[243711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:45 compute-0 python3.9[243713]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=._2p1y6gf recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:45 compute-0 sudo[243711]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:20:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:46 compute-0 sudo[243863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeuhspuvstlidqhomhoejhkuobznkzbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897646.1116953-74-199218125357830/AnsiballZ_file.py'
Dec 05 01:20:46 compute-0 sudo[243863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:46 compute-0 python3.9[243865]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:20:46 compute-0 sudo[243863]: pam_unix(sudo:session): session closed for user root
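The ansible.builtin.file task above creates /var/local/libexec and recursively applies the SELinux type container_file_t so that containers can access the scripts placed there. A hedged shell equivalent:

    mkdir -p /var/local/libexec
    chcon -R -t container_file_t /var/local/libexec    # setype=container_file_t, recurse=True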
Dec 05 01:20:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:47 compute-0 ceph-mon[192914]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:47 compute-0 sudo[244015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzewqlgkjxmbgfpsdaoclkcxkfmfcke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897647.1820185-82-250143335638052/AnsiballZ_stat.py'
Dec 05 01:20:47 compute-0 sudo[244015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:47 compute-0 python3.9[244017]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:47 compute-0 sudo[244015]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.3 deep-scrub starts
Dec 05 01:20:48 compute-0 sudo[244093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjruyhhbhlyfzqcdbbmzqcwvdsjpirja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897647.1820185-82-250143335638052/AnsiballZ_file.py'
Dec 05 01:20:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.3 deep-scrub ok
Dec 05 01:20:48 compute-0 sudo[244093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:48 compute-0 python3.9[244095]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:20:48 compute-0 sudo[244093]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:49 compute-0 ceph-mon[192914]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:49 compute-0 ceph-mon[192914]: 9.3 deep-scrub starts
Dec 05 01:20:49 compute-0 ceph-mon[192914]: 9.3 deep-scrub ok
Dec 05 01:20:49 compute-0 sudo[244245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgdfplydwavbwypenbedyhnreybmjpyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897648.8861592-82-276557864061127/AnsiballZ_stat.py'
Dec 05 01:20:49 compute-0 sudo[244245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:49 compute-0 python3.9[244248]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:49 compute-0 sudo[244245]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:50 compute-0 sudo[244324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchcvilcdldyxgrssfysbjcphrizahfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897648.8861592-82-276557864061127/AnsiballZ_file.py'
Dec 05 01:20:50 compute-0 sudo[244324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:50 compute-0 python3.9[244326]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:20:50 compute-0 sudo[244324]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:51 compute-0 sudo[244476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofckcjolpaeptcihwzpcjfrxkjqgtfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897650.7237287-105-136069569345665/AnsiballZ_file.py'
Dec 05 01:20:51 compute-0 sudo[244476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:51 compute-0 ceph-mon[192914]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:51 compute-0 python3.9[244478]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:51 compute-0 sudo[244476]: pam_unix(sudo:session): session closed for user root
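Note mode=420 in the preceding file task: Ansible records the mode as a decimal integer when it is passed unquoted, and 420 decimal is 0644 octal (rw-r--r--). Quick check:

    printf '%o\n' 420    # prints 644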
Dec 05 01:20:51 compute-0 rsyslogd[188644]: imjournal: 1573 messages lost due to rate-limiting (20000 allowed within 600 seconds)
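The imjournal message above means rsyslog silently dropped 1573 journal entries after more than 20000 arrived within 600 seconds. Those limits are imjournal's Ratelimit.Interval and Ratelimit.Burst module parameters; a hedged sketch of raising the burst on the line where the module is loaded in /etc/rsyslog.conf:

    # module(load="imjournal" ... Ratelimit.Interval="600" Ratelimit.Burst="50000")
    systemctl restart rsyslog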
Dec 05 01:20:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Dec 05 01:20:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Dec 05 01:20:52 compute-0 sudo[244628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddvasfplrugestnmiktyrgkyhwiokspj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897651.8248148-113-6890015879993/AnsiballZ_stat.py'
Dec 05 01:20:52 compute-0 sudo[244628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:52 compute-0 python3.9[244630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:52 compute-0 sudo[244628]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:53 compute-0 sudo[244706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgqxpnnhmionuyusuexkeaxfuvmpuszq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897651.8248148-113-6890015879993/AnsiballZ_file.py'
Dec 05 01:20:53 compute-0 sudo[244706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:53 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 05 01:20:53 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 05 01:20:53 compute-0 python3.9[244708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:53 compute-0 sudo[244706]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:53 compute-0 ceph-mon[192914]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:53 compute-0 ceph-mon[192914]: 9.11 deep-scrub starts
Dec 05 01:20:53 compute-0 ceph-mon[192914]: 9.11 deep-scrub ok
Dec 05 01:20:54 compute-0 sudo[244858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeymxwxidtdhovahmertahnemjdfvaeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897653.6336076-125-4748279547064/AnsiballZ_stat.py'
Dec 05 01:20:54 compute-0 sudo[244858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:54 compute-0 python3.9[244860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:54 compute-0 sudo[244858]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:54 compute-0 ceph-mon[192914]: 9.5 scrub starts
Dec 05 01:20:54 compute-0 ceph-mon[192914]: 9.5 scrub ok
Dec 05 01:20:54 compute-0 sudo[244936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwtamjtpzvitfjzoocutwqrmfpknhxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897653.6336076-125-4748279547064/AnsiballZ_file.py'
Dec 05 01:20:54 compute-0 sudo[244936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:55 compute-0 python3.9[244938]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:55 compute-0 sudo[244936]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:55 compute-0 ceph-mon[192914]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:56 compute-0 sudo[245116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnphqbcdobxcmuoyjzemkorgdhiqqjnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897655.351074-137-141458717763422/AnsiballZ_systemd.py'
Dec 05 01:20:56 compute-0 sudo[245116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:56 compute-0 podman[245067]: 2025-12-05 01:20:56.238876377 +0000 UTC m=+0.114412046 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:20:56 compute-0 podman[245063]: 2025-12-05 01:20:56.265760065 +0000 UTC m=+0.145398380 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 01:20:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:56 compute-0 podman[245070]: 2025-12-05 01:20:56.313276154 +0000 UTC m=+0.182116714 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
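The health_status=healthy events above come from podman's periodic healthcheck timers running each container's configured test command (the healthcheck script is bind-mounted at /openstack inside the container). The same check can be triggered by hand:

    podman healthcheck run ceilometer_agent_compute && echo healthy    # exit 0 means healthy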
Dec 05 01:20:56 compute-0 python3.9[245134]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:20:56 compute-0 systemd[1]: Reloading.
Dec 05 01:20:56 compute-0 systemd-rc-local-generator[245178]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:20:56 compute-0 systemd-sysv-generator[245183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:20:57 compute-0 sudo[245116]: pam_unix(sudo:session): session closed for user root
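The ansible.builtin.systemd task above (daemon_reload=True, enabled=True, state=started) corresponds roughly to the following systemctl sequence, which is also what triggers the generator chatter during Reloading:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service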
Dec 05 01:20:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:20:57 compute-0 ceph-mon[192914]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:57 compute-0 sudo[245360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsptewndsmtfvbedewcvrjcbjxblpsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897657.3585694-145-254436645444978/AnsiballZ_stat.py'
Dec 05 01:20:57 compute-0 podman[245317]: 2025-12-05 01:20:57.973510564 +0000 UTC m=+0.124388947 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:20:57 compute-0 sudo[245360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:58 compute-0 python3.9[245365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:20:58 compute-0 sudo[245360]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:58 compute-0 sudo[245391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:20:58 compute-0 sudo[245391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:58 compute-0 sudo[245391]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:58 compute-0 sudo[245428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:20:58 compute-0 sudo[245428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:58 compute-0 sudo[245428]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:58 compute-0 sudo[245499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njkuiyqehvytnimqxuexfwosjptqtsjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897657.3585694-145-254436645444978/AnsiballZ_file.py'
Dec 05 01:20:58 compute-0 sudo[245499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:58 compute-0 sudo[245484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:20:58 compute-0 sudo[245484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:58 compute-0 sudo[245484]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:58 compute-0 sudo[245519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:20:58 compute-0 sudo[245519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:58 compute-0 python3.9[245515]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:20:58 compute-0 sudo[245499]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:59 compute-0 ceph-mon[192914]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:20:59 compute-0 sudo[245519]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d01fcc6d-d310-4ca7-b745-d968aa92e7e6 does not exist
Dec 05 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 846cb35c-cfa8-4d1b-9425-58baddde12a9 does not exist
Dec 05 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d78811b8-b6d4-4a3c-b535-dbbd18dc1832 does not exist
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
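The burst of mon_command dispatches above is cephadm's periodic refresh querying the cluster through the mgr session; the same queries can be reproduced with the ceph CLI:

    ceph config generate-minimal-conf
    ceph auth get client.admin
    ceph osd tree destroyed --format json    # {"prefix": "osd tree", "states": ["destroyed"]}
    ceph auth get client.bootstrap-osd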
Dec 05 01:20:59 compute-0 sudo[245690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:20:59 compute-0 sudo[245690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:59 compute-0 sudo[245690]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:59 compute-0 podman[158197]: time="2025-12-05T01:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
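The two GETs above are libpod REST API calls arriving over the podman socket (the podman_exporter container bind-mounts /run/podman/podman.sock). A hedged reproduction with curl, assuming the same socket path:

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'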
Dec 05 01:20:59 compute-0 sudo[245759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tloprecvjoqoelugirrpmarnapytubhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897659.2388477-157-239389665934235/AnsiballZ_stat.py'
Dec 05 01:20:59 compute-0 sudo[245759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:20:59 compute-0 sudo[245742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:20:59 compute-0 sudo[245742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:59 compute-0 sudo[245742]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:59 compute-0 sudo[245775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:20:59 compute-0 sudo[245775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:20:59 compute-0 sudo[245775]: pam_unix(sudo:session): session closed for user root
Dec 05 01:20:59 compute-0 python3.9[245770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:00 compute-0 sudo[245759]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:00 compute-0 sudo[245801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:21:00 compute-0 sudo[245801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
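For readability, the cephadm call above decomposes as follows: '-' makes ceph-volume read its config JSON from stdin, and the three positional arguments are pre-created LVs (hence --no-auto, and --no-systemd because cephadm manages the OSD units itself):

    cephadm --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group \
        --image quay.io/ceph/ceph@sha256:1b9158... --timeout 895 \
        ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- \
        lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd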
Dec 05 01:21:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:00 compute-0 sudo[245925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vslbkaemzcfobccggqmncodttsehambu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897659.2388477-157-239389665934235/AnsiballZ_file.py'
Dec 05 01:21:00 compute-0 sudo[245925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:21:00 compute-0 python3.9[245927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:00 compute-0 sudo[245925]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.62583485 +0000 UTC m=+0.079555783 container create 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.601862334 +0000 UTC m=+0.055583357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:00 compute-0 systemd[1]: Started libpod-conmon-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope.
Dec 05 01:21:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.769260703 +0000 UTC m=+0.222981646 container init 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.779681687 +0000 UTC m=+0.233402660 container start 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.786392156 +0000 UTC m=+0.240113099 container attach 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:21:00 compute-0 compassionate_jennings[245977]: 167 167
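The '167 167' printed by the short-lived compassionate_jennings container is most likely cephadm probing the image for the ceph uid and gid (167:167 is the ceph user and group in upstream Ceph images). A hedged sketch of such a probe (the exact path cephadm stats is an assumption):

    podman run --rm quay.io/ceph/ceph@sha256:1b9158... stat -c '%u %g' /var/lib/ceph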
Dec 05 01:21:00 compute-0 systemd[1]: libpod-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope: Deactivated successfully.
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.795441731 +0000 UTC m=+0.249162704 container died 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-868785e3d73f9132ed6b15913403ad22a096fa6ad32de2e13f80acabda5dcd68-merged.mount: Deactivated successfully.
Dec 05 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.881262341 +0000 UTC m=+0.334983284 container remove 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:21:00 compute-0 systemd[1]: libpod-conmon-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope: Deactivated successfully.
Dec 05 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.151389015 +0000 UTC m=+0.089948666 container create fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.123103688 +0000 UTC m=+0.061663339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:01 compute-0 systemd[1]: Started libpod-conmon-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope.
Dec 05 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.329457145 +0000 UTC m=+0.268016886 container init fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.349178251 +0000 UTC m=+0.287737892 container start fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.355053907 +0000 UTC m=+0.293613558 container attach fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:21:01 compute-0 sudo[246148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlnkhywdbkjcedrqxptiifgtkyftoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897660.8706694-169-203491069988379/AnsiballZ_systemd.py'
Dec 05 01:21:01 compute-0 sudo[246148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:21:01 compute-0 ceph-mon[192914]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:01 compute-0 python3.9[246150]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:21:01 compute-0 systemd[1]: Reloading.
Dec 05 01:21:01 compute-0 systemd-sysv-generator[246191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:21:01 compute-0 systemd-rc-local-generator[246187]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:21:02 compute-0 podman[246152]: 2025-12-05 01:21:02.000282435 +0000 UTC m=+0.198313121 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Dec 05 01:21:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:02 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 01:21:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 01:21:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 01:21:02 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 01:21:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:02 compute-0 sudo[246148]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:02 compute-0 ceph-mon[192914]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:02 compute-0 nervous_hermann[246116]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:21:02 compute-0 nervous_hermann[246116]: --> relative data size: 1.0
Dec 05 01:21:02 compute-0 nervous_hermann[246116]: --> All data devices are unavailable
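'All data devices are unavailable' is ceph-volume's verdict that the three LVs are already consumed (for example, already prepared as OSDs on an earlier pass), so this batch run creates nothing. Consistent with that, cephadm follows up below by inventorying what already exists:

    cephadm ... ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json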
Dec 05 01:21:02 compute-0 systemd[1]: libpod-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Deactivated successfully.
Dec 05 01:21:02 compute-0 systemd[1]: libpod-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Consumed 1.181s CPU time.
Dec 05 01:21:02 compute-0 podman[246060]: 2025-12-05 01:21:02.640572184 +0000 UTC m=+1.579131825 container died fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469-merged.mount: Deactivated successfully.
Dec 05 01:21:02 compute-0 podman[246060]: 2025-12-05 01:21:02.746670175 +0000 UTC m=+1.685229846 container remove fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:21:02 compute-0 systemd[1]: libpod-conmon-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Deactivated successfully.
Dec 05 01:21:02 compute-0 sudo[245801]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:02 compute-0 sudo[246321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:21:02 compute-0 sudo[246321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:02 compute-0 sudo[246321]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:03 compute-0 sudo[246346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:21:03 compute-0 sudo[246346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:03 compute-0 sudo[246346]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:03 compute-0 sudo[246394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:21:03 compute-0 sudo[246394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:03 compute-0 sudo[246394]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 05 01:21:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 05 01:21:03 compute-0 sudo[246436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:21:03 compute-0 sudo[246436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:03 compute-0 python3.9[246494]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:21:03 compute-0 network[246536]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:21:03 compute-0 network[246537]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:21:03 compute-0 network[246542]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:21:03 compute-0 podman[246555]: 2025-12-05 01:21:03.841130307 +0000 UTC m=+0.077198587 container create 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:21:03 compute-0 podman[246555]: 2025-12-05 01:21:03.806678066 +0000 UTC m=+0.042746396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:04 compute-0 ceph-mon[192914]: 9.16 scrub starts
Dec 05 01:21:04 compute-0 ceph-mon[192914]: 9.16 scrub ok
Dec 05 01:21:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:04 compute-0 systemd[1]: Started libpod-conmon-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope.
Dec 05 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.654220877 +0000 UTC m=+0.890289167 container init 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.675431415 +0000 UTC m=+0.911499675 container start 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.680077465 +0000 UTC m=+0.916145795 container attach 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:21:04 compute-0 zen_khayyam[246573]: 167 167
Dec 05 01:21:04 compute-0 systemd[1]: libpod-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope: Deactivated successfully.
Dec 05 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.692585448 +0000 UTC m=+0.928653698 container died 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c067d9d830b76cde5d90373a2fb48473aee060a614c8ff2e79c22f013b32dc-merged.mount: Deactivated successfully.
Dec 05 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.759218806 +0000 UTC m=+0.995287056 container remove 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:21:04 compute-0 podman[246574]: 2025-12-05 01:21:04.775588598 +0000 UTC m=+0.161543825 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal)
Dec 05 01:21:04 compute-0 systemd[1]: libpod-conmon-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope: Deactivated successfully.
Dec 05 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.023023533 +0000 UTC m=+0.103973492 container create 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:04.988335015 +0000 UTC m=+0.069285064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:05 compute-0 systemd[1]: Started libpod-conmon-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope.
Dec 05 01:21:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.190288018 +0000 UTC m=+0.271238057 container init 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:21:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 05 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.214338096 +0000 UTC m=+0.295288085 container start 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.220596602 +0000 UTC m=+0.301546641 container attach 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:21:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 05 01:21:05 compute-0 ceph-mon[192914]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:06 compute-0 vigilant_elion[246645]: {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     "0": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "devices": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "/dev/loop3"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             ],
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_name": "ceph_lv0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_size": "21470642176",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "name": "ceph_lv0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "tags": {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_name": "ceph",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.crush_device_class": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.encrypted": "0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_id": "0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.vdo": "0"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             },
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "vg_name": "ceph_vg0"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         }
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     ],
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     "1": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "devices": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "/dev/loop4"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             ],
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_name": "ceph_lv1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_size": "21470642176",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "name": "ceph_lv1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "tags": {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_name": "ceph",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.crush_device_class": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.encrypted": "0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_id": "1",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.vdo": "0"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             },
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "vg_name": "ceph_vg1"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         }
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     ],
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     "2": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "devices": [
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "/dev/loop5"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             ],
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_name": "ceph_lv2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_size": "21470642176",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "name": "ceph_lv2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "tags": {
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.cluster_name": "ceph",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.crush_device_class": "",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.encrypted": "0",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osd_id": "2",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:                 "ceph.vdo": "0"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             },
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "type": "block",
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:             "vg_name": "ceph_vg2"
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:         }
Dec 05 01:21:06 compute-0 vigilant_elion[246645]:     ]
Dec 05 01:21:06 compute-0 vigilant_elion[246645]: }
Dec 05 01:21:06 compute-0 systemd[1]: libpod-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope: Deactivated successfully.
Dec 05 01:21:06 compute-0 podman[246623]: 2025-12-05 01:21:06.135229285 +0000 UTC m=+1.216179264 container died 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46-merged.mount: Deactivated successfully.
Dec 05 01:21:06 compute-0 podman[246623]: 2025-12-05 01:21:06.266787294 +0000 UTC m=+1.347737253 container remove 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:21:06 compute-0 ceph-mon[192914]: 9.1c scrub starts
Dec 05 01:21:06 compute-0 ceph-mon[192914]: 9.1c scrub ok
Dec 05 01:21:06 compute-0 systemd[1]: libpod-conmon-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope: Deactivated successfully.
Dec 05 01:21:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:06 compute-0 sudo[246436]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:06 compute-0 sudo[246703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:21:06 compute-0 sudo[246703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:06 compute-0 sudo[246703]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:06 compute-0 sudo[246731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:21:06 compute-0 sudo[246731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:06 compute-0 sudo[246731]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:06 compute-0 sudo[246760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:21:06 compute-0 sudo[246760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:06 compute-0 sudo[246760]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:06 compute-0 sudo[246789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:21:06 compute-0 sudo[246789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Dec 05 01:21:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Dec 05 01:21:07 compute-0 ceph-mon[192914]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.485980111 +0000 UTC m=+0.087492907 container create 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.449799891 +0000 UTC m=+0.051312737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:07 compute-0 systemd[1]: Started libpod-conmon-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope.
Dec 05 01:21:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.620025839 +0000 UTC m=+0.221538705 container init 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.636026691 +0000 UTC m=+0.237539477 container start 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.644354715 +0000 UTC m=+0.245867561 container attach 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:21:07 compute-0 romantic_mclean[246887]: 167 167
Dec 05 01:21:07 compute-0 systemd[1]: libpod-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope: Deactivated successfully.
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.650114398 +0000 UTC m=+0.251627184 container died 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c168ee61dcd1ffdc481cf498599225c61813993f64b977d1c2c58c4cdb958711-merged.mount: Deactivated successfully.
Dec 05 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.727272253 +0000 UTC m=+0.328785019 container remove 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:21:07 compute-0 systemd[1]: libpod-conmon-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope: Deactivated successfully.
Dec 05 01:21:07 compute-0 podman[246921]: 2025-12-05 01:21:07.986259743 +0000 UTC m=+0.069839669 container create 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:07.958319106 +0000 UTC m=+0.041899062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:21:08 compute-0 systemd[1]: Started libpod-conmon-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope.
Dec 05 01:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.149874076 +0000 UTC m=+0.233454022 container init 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.173539753 +0000 UTC m=+0.257119699 container start 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.1805366 +0000 UTC m=+0.264116586 container attach 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:21:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:08 compute-0 ceph-mon[192914]: 9.1e deep-scrub starts
Dec 05 01:21:08 compute-0 ceph-mon[192914]: 9.1e deep-scrub ok
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]: {
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_id": 0,
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "type": "bluestore"
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     },
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_id": 1,
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "type": "bluestore"
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     },
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_id": 2,
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:         "type": "bluestore"
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]:     }
Dec 05 01:21:09 compute-0 admiring_meninsky[246941]: }
Dec 05 01:21:09 compute-0 systemd[1]: libpod-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Deactivated successfully.
Dec 05 01:21:09 compute-0 podman[246921]: 2025-12-05 01:21:09.3391363 +0000 UTC m=+1.422716306 container died 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:21:09 compute-0 systemd[1]: libpod-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Consumed 1.162s CPU time.
Dec 05 01:21:09 compute-0 ceph-mon[192914]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73-merged.mount: Deactivated successfully.
Dec 05 01:21:09 compute-0 podman[246921]: 2025-12-05 01:21:09.456712814 +0000 UTC m=+1.540292750 container remove 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:21:09 compute-0 systemd[1]: libpod-conmon-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Deactivated successfully.
Dec 05 01:21:09 compute-0 sudo[246789]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:21:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:21:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:21:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:21:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d02dab09-89e8-4b15-8878-b7bb2756d48d does not exist
Dec 05 01:21:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fcc66374-e570-4807-ac04-720a205552f6 does not exist
Dec 05 01:21:09 compute-0 podman[247024]: 2025-12-05 01:21:09.523936299 +0000 UTC m=+0.146819399 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:21:09 compute-0 sudo[247104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:21:09 compute-0 sudo[247104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:09 compute-0 sudo[247104]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:09 compute-0 sudo[247141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:21:09 compute-0 sudo[247141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:21:09 compute-0 sudo[247141]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:09 compute-0 sudo[247227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kremtqeqcgqlgwdwfuyopjiguuxabrsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897669.4152937-195-173297189760433/AnsiballZ_stat.py'
Dec 05 01:21:09 compute-0 sudo[247227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:10 compute-0 python3.9[247229]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:10 compute-0 sudo[247227]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:21:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:21:10 compute-0 sudo[247305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggaqobztwvvlaugoefkaavrnzxtgcmjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897669.4152937-195-173297189760433/AnsiballZ_file.py'
Dec 05 01:21:10 compute-0 sudo[247305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:10 compute-0 python3.9[247307]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:10 compute-0 sudo[247305]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:11 compute-0 sshd-session[188743]: Received disconnect from 38.102.83.179 port 45342:11: disconnected by user
Dec 05 01:21:11 compute-0 sshd-session[188743]: Disconnected from user zuul 38.102.83.179 port 45342
Dec 05 01:21:11 compute-0 sshd-session[188740]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:21:11 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 05 01:21:11 compute-0 systemd[1]: session-25.scope: Consumed 2min 46.666s CPU time.
Dec 05 01:21:11 compute-0 systemd-logind[792]: Session 25 logged out. Waiting for processes to exit.
Dec 05 01:21:11 compute-0 systemd-logind[792]: Removed session 25.
Dec 05 01:21:11 compute-0 ceph-mon[192914]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:11 compute-0 sudo[247457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zozmodlfakaddvkhbinhnmnbjtryfkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897671.2041514-208-170588776363840/AnsiballZ_file.py'
Dec 05 01:21:11 compute-0 sudo[247457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:11 compute-0 python3.9[247459]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:11 compute-0 sudo[247457]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:12 compute-0 ceph-mon[192914]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:12 compute-0 sudo[247609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbanpgeuynkiuumdqphswzodubozrqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897672.253557-216-278990633266037/AnsiballZ_stat.py'
Dec 05 01:21:12 compute-0 sudo[247609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:13 compute-0 python3.9[247611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:13 compute-0 sudo[247609]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:13 compute-0 sudo[247687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hllnhzoslewblaraooqarblussuebzmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897672.253557-216-278990633266037/AnsiballZ_file.py'
Dec 05 01:21:13 compute-0 sudo[247687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:13 compute-0 python3.9[247689]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:13 compute-0 sudo[247687]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:15 compute-0 sudo[247839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdcndoblxapepylcormtbumoqefynps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897674.2137833-231-197944050325291/AnsiballZ_timezone.py'
Dec 05 01:21:15 compute-0 sudo[247839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:15 compute-0 python3.9[247841]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 05 01:21:15 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 01:21:15 compute-0 ceph-mon[192914]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:15 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 01:21:15 compute-0 sudo[247839]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:21:16
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes']
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:21:16 compute-0 sudo[247995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajasnamkibzpebsqnfmccbxrmegcqqsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897675.9253418-240-112108639383592/AnsiballZ_file.py'
Dec 05 01:21:16 compute-0 sudo[247995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:16 compute-0 python3.9[247997]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:16 compute-0 sudo[247995]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:17 compute-0 ceph-mon[192914]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:17 compute-0 sudo[248147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhxfuynskikqjzjdxshfztyefkxlqlze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897676.995069-248-83085210602244/AnsiballZ_stat.py'
Dec 05 01:21:17 compute-0 sudo[248147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:17 compute-0 python3.9[248149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:17 compute-0 sudo[248147]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:18 compute-0 sudo[248225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqgvtrqntbkqjrxgfrqixustufdrqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897676.995069-248-83085210602244/AnsiballZ_file.py'
Dec 05 01:21:18 compute-0 sudo[248225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:18 compute-0 python3.9[248227]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:18 compute-0 sudo[248225]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:19 compute-0 sudo[248377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awxfnrnxetdurpaudkktobparugujdoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897678.7891793-260-98451997982707/AnsiballZ_stat.py'
Dec 05 01:21:19 compute-0 sudo[248377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:19 compute-0 ceph-mon[192914]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:19 compute-0 python3.9[248379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:19 compute-0 sudo[248377]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:20 compute-0 sudo[248456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtexfqxbkprrqijquearoqpczcankaiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897678.7891793-260-98451997982707/AnsiballZ_file.py'
Dec 05 01:21:20 compute-0 sudo[248456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:20 compute-0 python3.9[248458]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.52juk_mk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:20 compute-0 sudo[248456]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:21 compute-0 sudo[248608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcllwjsgduwgioekmgwqcmhnunmdwsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897680.6122673-272-252640830036092/AnsiballZ_stat.py'
Dec 05 01:21:21 compute-0 sudo[248608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:21 compute-0 python3.9[248610]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:21 compute-0 sudo[248608]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:21 compute-0 ceph-mon[192914]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:21 compute-0 sudo[248686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgrckydaagyndfezudrjyvonlvtugctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897680.6122673-272-252640830036092/AnsiballZ_file.py'
Dec 05 01:21:21 compute-0 sudo[248686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:22 compute-0 python3.9[248688]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:22 compute-0 sudo[248686]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:23 compute-0 sudo[248838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxxgaewjxewrwfxflawfjfwrqaapazhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897682.4127693-285-125352474556510/AnsiballZ_command.py'
Dec 05 01:21:23 compute-0 sudo[248838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:23 compute-0 python3.9[248840]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:21:23 compute-0 sudo[248838]: pam_unix(sudo:session): session closed for user root
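The command task above snapshots the live ruleset in nft's JSON mode (-j), which emits a top-level "nftables" array. A small sketch of capturing and inspecting that output (assumes nft is on PATH, as on this host):

    import json
    import subprocess

    # Same data the ansible.legacy.command task captures above.
    out = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        check=True, capture_output=True, text=True,
    ).stdout
    ruleset = json.loads(out)
    # Each element wraps one object kind: "table", "chain", "rule", ...
    tables = [e["table"]["name"] for e in ruleset.get("nftables", []) if "table" in e]
    print(tables)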
Dec 05 01:21:23 compute-0 ceph-mon[192914]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:24 compute-0 sudo[248991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akroumoudmsmpewydemfidwgpxugwqut ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897683.6322465-293-255851327008475/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:21:24 compute-0 sudo[248991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:24 compute-0 python3[248993]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:21:24 compute-0 sudo[248991]: pam_unix(sudo:session): session closed for user root
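The edpm_nftables_from_files module is invoked with src=/var/lib/edpm-config/firewall, the directory the earlier tasks populated (sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). Its name suggests it aggregates those YAML snippets into one rule set; a rough sketch of that aggregation, assuming each file holds a list of rule mappings (an assumption, not the module's verified behavior):

    import glob
    import yaml  # PyYAML

    rules = []
    for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
        with open(path) as f:
            loaded = yaml.safe_load(f) or []
        rules.extend(loaded)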
Dec 05 01:21:25 compute-0 ceph-mon[192914]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:25 compute-0 sudo[249143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxojtrnvmkgntpilqlqfcitrieadvfue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897684.9547732-301-139870064471511/AnsiballZ_stat.py'
Dec 05 01:21:25 compute-0 sudo[249143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:25 compute-0 python3.9[249145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:25 compute-0 sudo[249143]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
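The pg targets in the autoscaler lines above reproduce as capacity_ratio * bias * 300; 300 is plausibly 3 OSDs * mon_target_pg_per_osd (default 100), though that decomposition is an assumption. The factor itself is checkable against the logged values:

    # Values copied from the pg_autoscaler lines above:
    # (capacity ratio, bias, logged pg target)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }
    for name, (ratio, bias, target) in pools.items():
        assert abs(ratio * bias * 300 - target) < 1e-12, name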
Dec 05 01:21:26 compute-0 sudo[249221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenjglixgnsjubhhkflwcploosryclbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897684.9547732-301-139870064471511/AnsiballZ_file.py'
Dec 05 01:21:26 compute-0 sudo[249221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:26 compute-0 python3.9[249223]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:26 compute-0 sudo[249221]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:26 compute-0 podman[249245]: 2025-12-05 01:21:26.705016867 +0000 UTC m=+0.104938369 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 05 01:21:26 compute-0 podman[249249]: 2025-12-05 01:21:26.740150387 +0000 UTC m=+0.136359405 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:21:26 compute-0 podman[249250]: 2025-12-05 01:21:26.747560646 +0000 UTC m=+0.140271875 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:21:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:27 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:21:27 compute-0 sudo[249440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsdndhobhxgqofevjcjkzddhjthklesd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897686.7359653-313-95446282295306/AnsiballZ_stat.py'
Dec 05 01:21:27 compute-0 sudo[249440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:27 compute-0 ceph-mon[192914]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:27 compute-0 python3.9[249442]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:27 compute-0 sudo[249440]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:27 compute-0 sudo[249518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxtfxpjrbdhkzzetszdwzlkdaycyjneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897686.7359653-313-95446282295306/AnsiballZ_file.py'
Dec 05 01:21:27 compute-0 sudo[249518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:28 compute-0 python3.9[249520]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:28 compute-0 sudo[249518]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:28 compute-0 podman[249572]: 2025-12-05 01:21:28.730145514 +0000 UTC m=+0.138608238 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 01:21:29 compute-0 sudo[249688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxgkmoetapzhqinpowjzipkgkurichb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897688.5209494-325-81342861585610/AnsiballZ_stat.py'
Dec 05 01:21:29 compute-0 sudo[249688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:29 compute-0 python3.9[249690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:29 compute-0 sudo[249688]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:29 compute-0 ceph-mon[192914]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:29 compute-0 podman[158197]: time="2025-12-05T01:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6826 "" "Go-http-client/1.1"
Dec 05 01:21:29 compute-0 sudo[249766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnggecvrttzzrifmzfozgpyivubjosbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897688.5209494-325-81342861585610/AnsiballZ_file.py'
Dec 05 01:21:29 compute-0 sudo[249766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:30 compute-0 python3.9[249768]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:30 compute-0 sudo[249766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:30 compute-0 sudo[249918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxhwldnjvreosfjqdtstklnvcglhpdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897690.384974-337-54967062378153/AnsiballZ_stat.py'
Dec 05 01:21:30 compute-0 sudo[249918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:31 compute-0 python3.9[249920]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:31 compute-0 sudo[249918]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:21:31 compute-0 ceph-mon[192914]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:31 compute-0 sudo[249996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzceguazrxsuqxcxmxizfbgbdgmkuhpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897690.384974-337-54967062378153/AnsiballZ_file.py'
Dec 05 01:21:31 compute-0 sudo[249996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:31 compute-0 python3.9[249998]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:31 compute-0 sudo[249996]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:32 compute-0 ceph-mon[192914]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:32 compute-0 podman[250098]: 2025-12-05 01:21:32.702655696 +0000 UTC m=+0.109360853 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Dec 05 01:21:32 compute-0 sudo[250165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufkvbhhzmmyuiikgyfnjerpywcyuskbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897692.200979-349-34067223260363/AnsiballZ_stat.py'
Dec 05 01:21:32 compute-0 sudo[250165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:33 compute-0 python3.9[250167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:33 compute-0 sudo[250165]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:33 compute-0 sudo[250243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjyjfpdxnjfqwguwnouirnkotclmhwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897692.200979-349-34067223260363/AnsiballZ_file.py'
Dec 05 01:21:33 compute-0 sudo[250243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:33 compute-0 python3.9[250245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:33 compute-0 sudo[250243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:34 compute-0 sudo[250395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slezcjeapfhcxprmlcouzinzijhrcihc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897694.0782945-362-167940151765351/AnsiballZ_command.py'
Dec 05 01:21:34 compute-0 sudo[250395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:34 compute-0 python3.9[250397]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:21:34 compute-0 sudo[250395]: pam_unix(sudo:session): session closed for user root
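The pipeline above concatenates the five generated files in load order and feeds them to nft -c, which parses and checks the combined ruleset without committing it; ordering matters because the chains file must define what the rules and jumps reference. A Python equivalent of that shell pipeline:

    import subprocess
    from pathlib import Path

    parts = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    combined = "\n".join(Path(p).read_text() for p in parts)
    # -c: check only, do not apply.
    subprocess.run(["nft", "-c", "-f", "-"], input=combined, text=True, check=True)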
Dec 05 01:21:35 compute-0 ceph-mon[192914]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:35 compute-0 podman[250477]: 2025-12-05 01:21:35.749375341 +0000 UTC m=+0.157260484 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git)
Dec 05 01:21:36 compute-0 sudo[250569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwknbyiwghapsiwehwsfxwthbqxsigbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897695.222-370-188779963985856/AnsiballZ_blockinfile.py'
Dec 05 01:21:36 compute-0 sudo[250569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:36 compute-0 python3.9[250571]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:36 compute-0 sudo[250569]: pam_unix(sudo:session): session closed for user root
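Given marker=# {mark} ANSIBLE MANAGED BLOCK with BEGIN/END markers and validate=nft -c -f %s, the block written into /etc/sysconfig/nftables.conf by the task above comes out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note the flush and update-jump files validated earlier are not persisted here; only the chains, rules, and jumps are loaded at boot.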
Dec 05 01:21:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:37 compute-0 sudo[250721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxowntdugwlfeupaglahealyamjpacz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897696.6394494-379-106630633509258/AnsiballZ_file.py'
Dec 05 01:21:37 compute-0 sudo[250721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:37 compute-0 ceph-mon[192914]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:37 compute-0 python3.9[250723]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:37 compute-0 sudo[250721]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:38 compute-0 sudo[250873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvhosfpayvphqpgmiylgtfllrhlzwhah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897697.719872-379-241353880210635/AnsiballZ_file.py'
Dec 05 01:21:38 compute-0 sudo[250873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:38 compute-0 python3.9[250875]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:38 compute-0 sudo[250873]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:39 compute-0 ceph-mon[192914]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:39 compute-0 sudo[251040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpxbwkotyhbhijyyzugqolvaizvgoanv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897698.886144-394-57760882468789/AnsiballZ_mount.py'
Dec 05 01:21:39 compute-0 sudo[251040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:39 compute-0 podman[250999]: 2025-12-05 01:21:39.719596707 +0000 UTC m=+0.126729752 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:21:39 compute-0 python3.9[251049]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 01:21:39 compute-0 sudo[251040]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:41 compute-0 sudo[251200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcakaphqficgzcmnvhbemixrqsuksrwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897700.455729-394-189788955830285/AnsiballZ_mount.py'
Dec 05 01:21:41 compute-0 sudo[251200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:41 compute-0 python3.9[251202]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 05 01:21:41 compute-0 sudo[251200]: pam_unix(sudo:session): session closed for user root
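ansible.posix.mount with state=mounted and boot=True both mounts the filesystem and persists it. With src=none, fstype=hugetlbfs, the pagesize options, and dump=0/passno=0 as invoked above, the resulting /etc/fstab entries would read (a sketch of the expected lines, not copied from the host):

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0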
Dec 05 01:21:41 compute-0 ceph-mon[192914]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:41 compute-0 sshd-session[242916]: Connection closed by 192.168.122.30 port 58452
Dec 05 01:21:41 compute-0 sshd-session[242896]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:21:41 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec 05 01:21:41 compute-0 systemd[1]: session-47.scope: Consumed 51.898s CPU time.
Dec 05 01:21:41 compute-0 systemd-logind[792]: Session 47 logged out. Waiting for processes to exit.
Dec 05 01:21:41 compute-0 systemd-logind[792]: Removed session 47.
Dec 05 01:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:43 compute-0 ceph-mon[192914]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:45 compute-0 ceph-mon[192914]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:45 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:21:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:47 compute-0 sshd-session[251229]: Accepted publickey for zuul from 192.168.122.30 port 34226 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:21:47 compute-0 systemd-logind[792]: New session 48 of user zuul.
Dec 05 01:21:47 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 05 01:21:47 compute-0 sshd-session[251229]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:21:47 compute-0 ceph-mon[192914]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:48 compute-0 sudo[251382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavsxitpphosmnusfgnhflbaxhcwczuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897707.4496312-16-12102570231317/AnsiballZ_tempfile.py'
Dec 05 01:21:48 compute-0 sudo[251382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:48 compute-0 python3.9[251384]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 05 01:21:48 compute-0 sudo[251382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:49 compute-0 ceph-mon[192914]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:49 compute-0 sudo[251535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrltiqywdpwtgjhrflxjgckojzkrqyiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897708.8986652-28-232162756898369/AnsiballZ_stat.py'
Dec 05 01:21:49 compute-0 sudo[251535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:49 compute-0 python3.9[251537]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:21:49 compute-0 sudo[251535]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:50 compute-0 sudo[251689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbiqwghkzbzrfbkeuygbnxutfnsgixst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897710.1055226-36-122028143953190/AnsiballZ_slurp.py'
Dec 05 01:21:50 compute-0 sudo[251689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:51 compute-0 python3.9[251691]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 05 01:21:51 compute-0 sudo[251689]: pam_unix(sudo:session): session closed for user root
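ansible.builtin.slurp returns the file body base64-encoded so it survives the JSON transport back to the controller. Decoding it controller-side looks roughly like this ("result" is a hypothetical task-result dict, named here for illustration):

    import base64

    # slurp returns {"content": <base64>, "encoding": "base64", "source": ...}
    result = {"content": "", "encoding": "base64"}  # placeholder shape
    known_hosts = base64.b64decode(result["content"]).decode("utf-8")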
Dec 05 01:21:51 compute-0 ceph-mon[192914]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:51 compute-0 sudo[251841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydebhziosntmzypwayzdkyzrkshyksls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897711.3883753-44-120664731327028/AnsiballZ_stat.py'
Dec 05 01:21:51 compute-0 sudo[251841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:52 compute-0 python3.9[251843]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.xo5an9og follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:21:52 compute-0 sudo[251841]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:53 compute-0 sudo[251966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqxxqhhqrzmbhlvirdsumczcghkqnkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897711.3883753-44-120664731327028/AnsiballZ_copy.py'
Dec 05 01:21:53 compute-0 sudo[251966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:53 compute-0 python3.9[251968]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.xo5an9og mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897711.3883753-44-120664731327028/.source.xo5an9og _original_basename=.bkge3syv follow=False checksum=33c8d724534b96f4d35998d48c2d1395ac713ad6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:53 compute-0 sudo[251966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:53 compute-0 ceph-mon[192914]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:54 compute-0 sudo[252118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqzqjcoerryqjpiwodypslvdvsqahkir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897713.6132572-59-261965251105083/AnsiballZ_setup.py'
Dec 05 01:21:54 compute-0 sudo[252118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:54 compute-0 python3.9[252120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:21:54 compute-0 sudo[252118]: pam_unix(sudo:session): session closed for user root
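The setup call above gathers only the SSH host-key fact subsets ('!all', '!min' suppress everything else). Roughly what those subsets read from disk: the type and base64 fields of each public key file (a sketch, not the fact module's code):

    from pathlib import Path

    keys = {}
    for pub in Path("/etc/ssh").glob("ssh_host_*_key.pub"):
        ktype, kdata = pub.read_text().split()[:2]
        keys[ktype] = kdata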
Dec 05 01:21:55 compute-0 ceph-mon[192914]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:55 compute-0 sudo[252270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhmxspctorgmwjwmmxkqunpwwtvnanby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897715.206971-68-167837754739992/AnsiballZ_blockinfile.py'
Dec 05 01:21:55 compute-0 sudo[252270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:56 compute-0 python3.9[252272]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD34iQxSDvRWxXWiq324tvvnkHz60HCvPTP/DU7o5oImJ7L5PeQTe9tPl2QVsPDuWSCrwTEupWDG8h+dMSTlGmE2dOPB66Zq0d9sww65ZtOq0JsaxhPfTB3aJe6aQDcYq9WQ/1T/lNE0Do7wQL88mneNtNMuLZD9Irm2WwDI38II50hBLyhLkuA6ik5m8wn++kFZPdu0pcYz24ameu4wB8DSKH8UAT3GBfc11AP8MuI6xtpcOT5Dr88jHtVEYH8eW4XWrKQeyZddDcJui/f6NqC4NrPSF4YgDRQ1z6/33N2E9EycvbOgdOt9pq1jpYaWkMHl2KeaAbNoAdSuXTGDhvCzv18a5QdOMVV7965nJMnpteZZjrhzpHSFkbnMvAaoktDOMhKkfPYUY6HhVdkVM7FntS5oT76c92NL3HNHDuV7Oh57/0epCuWK6LT+2z9SlP7VUPaUa2c/nZDSTeZO/gJmuyeJ9Iu8XtE1KvGRpHt6zVpKl1uyEoc+M5SO7YG+r8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIIWlZK7FF2zVpeujHX1SXvuy5F4vd69JtXI65jfCGUb
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3QjvzM+uHT65E6nwIhM59XNE6tJ4oKmErztLJ1wZJkltdzzAyZYA6BiT1RzCPoMNPk9MeYIRcQ8NtPcaWiPtU=
                                              create=True mode=0644 path=/tmp/ansible.xo5an9og state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:56 compute-0 sudo[252270]: pam_unix(sudo:session): session closed for user root
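[editor's note] The blockinfile task above wraps the three logged host keys in marker comments inside /tmp/ansible.xo5an9og; the later command task installs that file over /etc/ssh/ssh_known_hosts and a file task removes the temp copy. A minimal Python sketch of the marker logic, assuming only the parameters shown in the invocation (marker "# {mark} ANSIBLE MANAGED BLOCK", BEGIN/END, create=True) — an illustrative simplification, not the ansible.builtin.blockinfile source:

    # Sketch of a blockinfile-style managed block; paths/markers from the log,
    # everything else is illustrative.
    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"

    def set_block(path, block):
        """Replace (or append) the marker-delimited block in `path`."""
        try:
            lines = open(path).read().splitlines()
        except FileNotFoundError:
            lines = []  # create=True in the logged invocation
        if BEGIN in lines and END in lines:
            head = lines[:lines.index(BEGIN)]
            tail = lines[lines.index(END) + 1:]
        else:  # no existing block: append at end of file (insertafter default)
            head, tail = lines, []
        content = head + [BEGIN] + block.splitlines() + [END] + tail
        open(path, "w").write("\n".join(content) + "\n")

    set_block("/tmp/ansible.xo5an9og",
              "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAA...")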
Dec 05 01:21:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:56 compute-0 ceph-mon[192914]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:21:57 compute-0 sudo[252466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvlveabrbbgqkcwhhjcafwbxsirzwnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897716.4303987-76-165304538901506/AnsiballZ_command.py'
Dec 05 01:21:57 compute-0 sudo[252466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:57 compute-0 podman[252396]: 2025-12-05 01:21:57.174370331 +0000 UTC m=+0.130283793 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:21:57 compute-0 podman[252397]: 2025-12-05 01:21:57.180653718 +0000 UTC m=+0.134960645 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:21:57 compute-0 podman[252398]: 2025-12-05 01:21:57.216470098 +0000 UTC m=+0.164280682 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
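[editor's note] Each health_status record above is podman running a container healthcheck; the config_data label shows the edpm_ansible wiring: a healthcheck dict with a test command plus a host directory bind-mounted read-only at /openstack inside the container. A hypothetical sketch of turning such a dict into podman CLI arguments — --health-cmd, --net and -v are real podman flags, but this mapping is a guess for illustration, not edpm code:

    # Hypothetical helper: config_data (from the log labels) -> "podman run" args.
    def podman_args(name, cfg):
        args = ["podman", "run", "--name", name, "--net", cfg.get("net", "bridge")]
        for vol in cfg.get("volumes", []):
            args += ["-v", vol]
        hc = cfg.get("healthcheck")
        if hc:
            args += ["--health-cmd", hc["test"]]
            # matches the logged '<mount>:/openstack:ro,z' volume entries
            args += ["-v", hc["mount"] + ":/openstack:ro,z"]
        return args + [cfg["image"]]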
Dec 05 01:21:57 compute-0 python3.9[252479]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.xo5an9og' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:21:57 compute-0 sudo[252466]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:58 compute-0 sudo[252637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgarnsfijybugiwyidxporwjtgsvblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897717.6747384-84-63542099765179/AnsiballZ_file.py'
Dec 05 01:21:58 compute-0 sudo[252637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:21:58 compute-0 python3.9[252639]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.xo5an9og state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:21:58 compute-0 sudo[252637]: pam_unix(sudo:session): session closed for user root
Dec 05 01:21:59 compute-0 sshd-session[251232]: Connection closed by 192.168.122.30 port 34226
Dec 05 01:21:59 compute-0 sshd-session[251229]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:21:59 compute-0 systemd-logind[792]: Session 48 logged out. Waiting for processes to exit.
Dec 05 01:21:59 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 05 01:21:59 compute-0 systemd[1]: session-48.scope: Consumed 9.117s CPU time.
Dec 05 01:21:59 compute-0 systemd-logind[792]: Removed session 48.
Dec 05 01:21:59 compute-0 podman[252664]: 2025-12-05 01:21:59.333373941 +0000 UTC m=+0.164585480 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:21:59 compute-0 ceph-mon[192914]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:21:59 compute-0 podman[158197]: time="2025-12-05T01:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6833 "" "Go-http-client/1.1"
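[editor's note] The two GET lines are the podman API service (podman[158197]) answering libpod REST calls over its unix socket — presumably from prometheus-podman-exporter, whose config above sets CONTAINER_HOST=unix:///run/podman/podman.sock. A stdlib-only sketch of the same containers/json query, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket (small illustrative helper)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint/version as the logged request line
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))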
Dec 05 01:22:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:01 compute-0 ceph-mon[192914]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:22:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:03 compute-0 ceph-mon[192914]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:03 compute-0 podman[252683]: 2025-12-05 01:22:03.748523689 +0000 UTC m=+0.151524272 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler, managed_by=edpm_ansible, name=ubi9)
Dec 05 01:22:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:04 compute-0 sshd-session[252702]: Accepted publickey for zuul from 192.168.122.30 port 39828 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:22:04 compute-0 systemd-logind[792]: New session 49 of user zuul.
Dec 05 01:22:04 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 05 01:22:04 compute-0 sshd-session[252702]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:22:05 compute-0 ceph-mon[192914]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:05 compute-0 python3.9[252855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:22:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:06 compute-0 podman[252907]: 2025-12-05 01:22:06.737558045 +0000 UTC m=+0.142482577 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:22:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.176682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727176744, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1669, "num_deletes": 251, "total_data_size": 2324004, "memory_usage": 2366936, "flush_reason": "Manual Compaction"}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727193200, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1368293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7297, "largest_seqno": 8965, "table_properties": {"data_size": 1362814, "index_size": 2426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16506, "raw_average_key_size": 20, "raw_value_size": 1349632, "raw_average_value_size": 1714, "num_data_blocks": 114, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897576, "oldest_key_time": 1764897576, "file_creation_time": 1764897727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16612 microseconds, and 9526 cpu microseconds.
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.193295) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1368293 bytes OK
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.193322) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195869) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195940) EVENT_LOG_v1 {"time_micros": 1764897727195931, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195965) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2316510, prev total WAL file size 2316510, number of live WAL files 2.
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.197472) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1336KB)], [20(6969KB)]
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727197565, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8505379, "oldest_snapshot_seqno": -1}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3362 keys, 6764616 bytes, temperature: kUnknown
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727268499, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6764616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6738999, "index_size": 16100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80442, "raw_average_key_size": 23, "raw_value_size": 6675084, "raw_average_value_size": 1985, "num_data_blocks": 714, "num_entries": 3362, "num_filter_entries": 3362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764897727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.268736) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6764616 bytes
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.271295) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3806, records dropped: 444 output_compression: NoCompression
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.271319) EVENT_LOG_v1 {"time_micros": 1764897727271307, "job": 6, "event": "compaction_finished", "compaction_time_micros": 71002, "compaction_time_cpu_micros": 35154, "output_level": 6, "num_output_files": 1, "total_output_size": 6764616, "num_input_records": 3806, "num_output_records": 3362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727271718, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727273152, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.197215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
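[editor's note] The compaction summary above can be reproduced from the EVENT_LOG records: JOB 6 read L0 table #22 (1368293 bytes) plus the old L6 table (input_data_size 8505379 total) and wrote one 6764616-byte L6 table in 71002 microseconds. A quick arithmetic check of the logged ratios:

    # Recomputing the logged compaction ratios from the EVENT_LOG numbers.
    l0_in = 1368293                    # flush table #22, L0 input
    total_in = 8505379                 # compaction_started input_data_size
    out = 6764616                      # total_output_size, table #23
    secs = 71002 / 1e6                 # compaction_time_micros

    print(round(out / l0_in, 1))                 # 4.9  -> "write-amplify(4.9)"
    print(round((total_in + out) / l0_in, 1))    # 11.2 -> "read-write-amplify(11.2)"
    print(round(total_in / secs / 1e6, 1))       # 119.8 -> "MB/sec: 119.8 rd"
    print(round(out / secs / 1e6, 1))            # 95.3  -> "95.3 wr"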
Dec 05 01:22:07 compute-0 sudo[253030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvtmiqspqzxovvlkeeeyrptlgwdepez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897726.552422-32-56554545856557/AnsiballZ_systemd.py'
Dec 05 01:22:07 compute-0 sudo[253030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:07 compute-0 ceph-mon[192914]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:07 compute-0 python3.9[253032]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 05 01:22:07 compute-0 sudo[253030]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:08 compute-0 sudo[253184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fliicohsbppmaxwyemvxytnkicrxhfpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897728.1613953-40-163055785984431/AnsiballZ_systemd.py'
Dec 05 01:22:08 compute-0 sudo[253184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:09 compute-0 python3.9[253186]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:22:09 compute-0 sudo[253184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:09 compute-0 ceph-mon[192914]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:09 compute-0 sudo[253264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:09 compute-0 sudo[253264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:09 compute-0 sudo[253264]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:10 compute-0 sudo[253313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:22:10 compute-0 sudo[253313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:10 compute-0 sudo[253313]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:10 compute-0 podman[253295]: 2025-12-05 01:22:10.061336589 +0000 UTC m=+0.135775248 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
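[editor's note] node_exporter above is started with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching systemd units are scraped. A quick illustration of which unit names that pattern admits — the regex is copied from the logged command line, the sample unit names are made up, and fullmatch approximates node_exporter's anchored matching:

    import re

    pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:   # illustrative names
        print(unit, bool(pat.fullmatch(unit)))
    # -> the first three match; sshd.service does not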
Dec 05 01:22:10 compute-0 sudo[253358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:10 compute-0 sudo[253358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:10 compute-0 sudo[253358]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:10 compute-0 sudo[253413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:22:10 compute-0 sudo[253413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:10 compute-0 sudo[253462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbtdxgiiocbpobanrvyoremlxsdadpxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897729.5056596-49-253104734115481/AnsiballZ_command.py'
Dec 05 01:22:10 compute-0 sudo[253462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:10 compute-0 python3.9[253465]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:22:10 compute-0 sudo[253462]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:10 compute-0 sudo[253413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47d0fcca-d11a-4321-a1b7-e154a77eeb34 does not exist
Dec 05 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 28de8e22-77a8-45c5-bed4-161e65fd60f2 does not exist
Dec 05 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 668affb1-948d-4c65-a7c4-40c5b48f8125 does not exist
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:22:11 compute-0 sudo[253573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:11 compute-0 sudo[253573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:11 compute-0 sudo[253573]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:11 compute-0 sudo[253598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:22:11 compute-0 sudo[253598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:11 compute-0 sudo[253598]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:11 compute-0 sudo[253646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:11 compute-0 sudo[253646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:11 compute-0 sudo[253646]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:11 compute-0 sudo[253702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:22:11 compute-0 sudo[253744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkgmyqwtspxkqgiahgqovwmthuyikwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897730.7877536-57-96605758489443/AnsiballZ_stat.py'
Dec 05 01:22:11 compute-0 ceph-mon[192914]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:22:11 compute-0 sudo[253702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:11 compute-0 sudo[253744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:11 compute-0 python3.9[253748]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:22:11 compute-0 sudo[253744]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:11 compute-0 podman[253810]: 2025-12-05 01:22:11.992236629 +0000 UTC m=+0.071615790 container create dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:11.961310057 +0000 UTC m=+0.040689238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:12 compute-0 systemd[1]: Started libpod-conmon-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope.
Dec 05 01:22:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.120222197 +0000 UTC m=+0.199601408 container init dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.131853295 +0000 UTC m=+0.211232436 container start dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.137569086 +0000 UTC m=+0.216948317 container attach dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:22:12 compute-0 quirky_liskov[253872]: 167 167
Dec 05 01:22:12 compute-0 systemd[1]: libpod-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope: Deactivated successfully.
Dec 05 01:22:12 compute-0 conmon[253872]: conmon dd60e40133abcdeccfc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope/container/memory.events
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.143425511 +0000 UTC m=+0.222804652 container died dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:22:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1a295b33b11b86339b2ff2f7f456a577562b16dbbe1ca2fbc683ec65f6a593-merged.mount: Deactivated successfully.
Dec 05 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.198525404 +0000 UTC m=+0.277904545 container remove dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:22:12 compute-0 systemd[1]: libpod-conmon-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope: Deactivated successfully.
Dec 05 01:22:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.450126867 +0000 UTC m=+0.085707347 container create 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.406880207 +0000 UTC m=+0.042460697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:12 compute-0 systemd[1]: Started libpod-conmon-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope.
Dec 05 01:22:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.605227789 +0000 UTC m=+0.240808269 container init 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.630721037 +0000 UTC m=+0.266301507 container start 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.636693936 +0000 UTC m=+0.272274416 container attach 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:12 compute-0 sudo[253995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqrmilvhsocjzwttedlgqyjtcacuuazj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897731.9758348-66-66540254790329/AnsiballZ_file.py'
Dec 05 01:22:12 compute-0 sudo[253995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:12 compute-0 python3.9[253997]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:12 compute-0 sudo[253995]: pam_unix(sudo:session): session closed for user root
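[editor's note] The firewall tasks in this run follow a flag-file pattern: reload the chain definitions (nft -f /etc/nftables/edpm-chains.nft), stat /etc/nftables/edpm-rules.nft.changed to see whether rules were regenerated, then remove the flag (state=absent). A condensed sketch of that pattern, assuming only the paths from the log:

    import os
    import subprocess

    CHAINS = "/etc/nftables/edpm-chains.nft"
    CHANGED = "/etc/nftables/edpm-rules.nft.changed"   # flag file from the log

    # Always (re)load the chain definitions, as the logged command task does.
    subprocess.run(["nft", "-f", CHAINS], check=True)

    # If rules were regenerated a marker file exists; clear it once handled
    # (the logged file task uses state=absent).
    if os.path.exists(CHANGED):
        os.remove(CHANGED)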
Dec 05 01:22:13 compute-0 sshd-session[252705]: Connection closed by 192.168.122.30 port 39828
Dec 05 01:22:13 compute-0 sshd-session[252702]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:22:13 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 05 01:22:13 compute-0 systemd[1]: session-49.scope: Consumed 6.673s CPU time.
Dec 05 01:22:13 compute-0 systemd-logind[792]: Session 49 logged out. Waiting for processes to exit.
Dec 05 01:22:13 compute-0 systemd-logind[792]: Removed session 49.
Dec 05 01:22:13 compute-0 ceph-mon[192914]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:13 compute-0 jolly_nash[253963]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:22:13 compute-0 jolly_nash[253963]: --> relative data size: 1.0
Dec 05 01:22:13 compute-0 jolly_nash[253963]: --> All data devices are unavailable
Dec 05 01:22:13 compute-0 systemd[1]: libpod-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Deactivated successfully.
Dec 05 01:22:13 compute-0 systemd[1]: libpod-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Consumed 1.210s CPU time.
Dec 05 01:22:13 compute-0 podman[253914]: 2025-12-05 01:22:13.885680674 +0000 UTC m=+1.521261204 container died 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed-merged.mount: Deactivated successfully.
Dec 05 01:22:13 compute-0 podman[253914]: 2025-12-05 01:22:13.977022999 +0000 UTC m=+1.612603489 container remove 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:22:13 compute-0 systemd[1]: libpod-conmon-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Deactivated successfully.
Dec 05 01:22:14 compute-0 sudo[253702]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:14 compute-0 sudo[254057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:14 compute-0 sudo[254057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:14 compute-0 sudo[254057]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:14 compute-0 sudo[254082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:22:14 compute-0 sudo[254082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:14 compute-0 sudo[254082]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:14 compute-0 sudo[254107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:14 compute-0 sudo[254107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:14 compute-0 sudo[254107]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:14 compute-0 sudo[254132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:22:14 compute-0 sudo[254132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.035455395 +0000 UTC m=+0.082738263 container create 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.004724149 +0000 UTC m=+0.052007017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:15 compute-0 systemd[1]: Started libpod-conmon-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope.
Dec 05 01:22:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.199469699 +0000 UTC m=+0.246752597 container init 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.218978169 +0000 UTC m=+0.266261027 container start 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.226073799 +0000 UTC m=+0.273356657 container attach 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 01:22:15 compute-0 cranky_kepler[254212]: 167 167
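(The bare `167 167` printed by cranky_kepler, and again by competent_montalcini at 01:22:18, is consistent with cephadm's uid/gid probe: before running the real ceph-volume command it starts a short-lived container just to read the numeric owner of a Ceph data directory inside the image; the `ceph` user is uid/gid 167 in upstream Ceph images. A minimal sketch of such a probe, assuming podman and the image digest from the log; the exact path and command cephadm uses are assumptions.)

```python
# Hypothetical reproduction of the uid/gid probe suggested by the
# "167 167" output above; the exact command cephadm runs is an assumption.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.split()
uid, gid = map(int, out)  # expected: 167 167 for upstream Ceph images
print(uid, gid)
```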
Dec 05 01:22:15 compute-0 systemd[1]: libpod-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope: Deactivated successfully.
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.23216717 +0000 UTC m=+0.279450028 container died 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d8ad037f440980b1ce19a8ec44826361b6b8855acee7c1abd138348b90b5b13-merged.mount: Deactivated successfully.
Dec 05 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.316171918 +0000 UTC m=+0.363454786 container remove 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:15 compute-0 systemd[1]: libpod-conmon-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope: Deactivated successfully.
Dec 05 01:22:15 compute-0 ceph-mon[192914]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.608251991 +0000 UTC m=+0.105682790 container create e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.572407281 +0000 UTC m=+0.069838160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:15 compute-0 systemd[1]: Started libpod-conmon-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope.
Dec 05 01:22:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.780758504 +0000 UTC m=+0.278189333 container init e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.798207886 +0000 UTC m=+0.295638655 container start e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.802777175 +0000 UTC m=+0.300208034 container attach e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:22:16
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'backups']
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
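(This balancer round reads as: mode upmap, at most 5% of PGs may be misplaced at once (`max misplaced 0.050000`), up to 10 upmap changes prepared per round (the `/10`), and zero were needed since the listed pools are already balanced. A loose sketch of the round bookkeeping these lines imply; names and structure are illustrative, not the mgr module's actual code.)

```python
# Loose sketch of the per-round guard implied by the balancer log lines
# above; illustrative only, not the ceph-mgr balancer module's code.
MAX_MISPLACED = 0.05     # "max misplaced 0.050000"
MAX_OPTIMIZATIONS = 10   # the "/10" in "prepared 0/10 changes"

def run_round(pools, misplaced_ratio, propose_upmap):
    changes = []
    if misplaced_ratio >= MAX_MISPLACED:
        return changes               # too much data in flight; skip round
    for pool in pools:
        if len(changes) >= MAX_OPTIMIZATIONS:
            break
        changes.extend(propose_upmap(pool))
    return changes                   # empty here: nothing to rebalance

print(f"prepared {len(run_round([], 0.0, lambda p: []))}/10 changes")
```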
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]: {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     "0": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "devices": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "/dev/loop3"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             ],
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_name": "ceph_lv0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_size": "21470642176",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "name": "ceph_lv0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "tags": {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_name": "ceph",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.crush_device_class": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.encrypted": "0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_id": "0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.vdo": "0"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             },
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "vg_name": "ceph_vg0"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         }
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     ],
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     "1": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "devices": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "/dev/loop4"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             ],
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_name": "ceph_lv1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_size": "21470642176",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "name": "ceph_lv1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "tags": {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_name": "ceph",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.crush_device_class": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.encrypted": "0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_id": "1",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.vdo": "0"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             },
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "vg_name": "ceph_vg1"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         }
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     ],
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     "2": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "devices": [
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "/dev/loop5"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             ],
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_name": "ceph_lv2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_size": "21470642176",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "name": "ceph_lv2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "tags": {
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.cluster_name": "ceph",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.crush_device_class": "",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.encrypted": "0",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osd_id": "2",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:                 "ceph.vdo": "0"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             },
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "type": "block",
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:             "vg_name": "ceph_vg2"
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:         }
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]:     ]
Dec 05 01:22:16 compute-0 peaceful_zhukovsky[254251]: }
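(The JSON emitted by peaceful_zhukovsky is the payload of the `ceph-volume lvm list --format json` run requested at 01:22:14: one entry per OSD id, each carrying the LV path, the backing loop device, and the `ceph.*` LV tags. The tags already naming osd_id 0-2 are consistent with the earlier `lvm batch` report declaring all three data devices unavailable, since they already host OSDs. A small parsing sketch, assuming the JSON above has been captured into a string `raw`.)

```python
# Parse `ceph-volume lvm list --format json` output (as printed above)
# into an osd_id -> device summary; `raw` stands in for the captured JSON.
import json

def summarize(raw: str):
    listing = json.loads(raw)
    rows = []
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            rows.append({
                "osd_id": int(osd_id),
                "osd_fsid": tags["ceph.osd_fsid"],
                "lv_path": lv["lv_path"],          # e.g. /dev/ceph_vg0/ceph_lv0
                "backing": lv["devices"],          # e.g. ["/dev/loop3"]
                "size_bytes": int(lv["lv_size"]),  # 21470642176 ~ 20 GiB
            })
    return rows
```

(Three such LVs at roughly 20 GiB each also line up with the 60 GiB total the pgmap lines keep reporting.)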
Dec 05 01:22:16 compute-0 systemd[1]: libpod-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope: Deactivated successfully.
Dec 05 01:22:16 compute-0 conmon[254251]: conmon e326a4b2487385ee7167 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope/container/memory.events
Dec 05 01:22:16 compute-0 podman[254235]: 2025-12-05 01:22:16.62384724 +0000 UTC m=+1.121278049 container died e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3-merged.mount: Deactivated successfully.
Dec 05 01:22:16 compute-0 podman[254235]: 2025-12-05 01:22:16.713819396 +0000 UTC m=+1.211250205 container remove e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:22:16 compute-0 systemd[1]: libpod-conmon-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope: Deactivated successfully.
Dec 05 01:22:16 compute-0 sudo[254132]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:16 compute-0 sudo[254271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:16 compute-0 sudo[254271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:16 compute-0 sudo[254271]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:17 compute-0 sudo[254296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:22:17 compute-0 sudo[254296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:17 compute-0 sudo[254296]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:17 compute-0 sudo[254321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:17 compute-0 sudo[254321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:17 compute-0 sudo[254321]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:17 compute-0 sudo[254346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:22:17 compute-0 sudo[254346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:17 compute-0 ceph-mon[192914]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.839827287 +0000 UTC m=+0.079969575 container create 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.806797386 +0000 UTC m=+0.046939674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:17 compute-0 systemd[1]: Started libpod-conmon-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope.
Dec 05 01:22:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.986574724 +0000 UTC m=+0.226717072 container init 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.004145119 +0000 UTC m=+0.244287397 container start 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.011024013 +0000 UTC m=+0.251166361 container attach 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:22:18 compute-0 competent_montalcini[254427]: 167 167
Dec 05 01:22:18 compute-0 systemd[1]: libpod-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope: Deactivated successfully.
Dec 05 01:22:18 compute-0 conmon[254427]: conmon 898a8e595deaeb7520a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope/container/memory.events
Dec 05 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.016489247 +0000 UTC m=+0.256631535 container died 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a344dce82905bbfc6e2934a2f0fd6eaf989577f887695f057ddc79ccf3a0a9d3-merged.mount: Deactivated successfully.
Dec 05 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.093607581 +0000 UTC m=+0.333749879 container remove 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:22:18 compute-0 systemd[1]: libpod-conmon-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope: Deactivated successfully.
Dec 05 01:22:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.356610655 +0000 UTC m=+0.091648144 container create e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.323949455 +0000 UTC m=+0.058987004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:22:18 compute-0 systemd[1]: Started libpod-conmon-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope.
Dec 05 01:22:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.60260702 +0000 UTC m=+0.337644559 container init e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:22:18 compute-0 ceph-mon[192914]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.641151676 +0000 UTC m=+0.376189155 container start e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.64907386 +0000 UTC m=+0.384111369 container attach e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:22:18 compute-0 sshd-session[254469]: Accepted publickey for zuul from 192.168.122.30 port 58448 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:22:18 compute-0 systemd-logind[792]: New session 50 of user zuul.
Dec 05 01:22:18 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 05 01:22:18 compute-0 sshd-session[254469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]: {
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_id": 0,
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "type": "bluestore"
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     },
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_id": 1,
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "type": "bluestore"
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     },
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_id": 2,
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:         "type": "bluestore"
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]:     }
Dec 05 01:22:19 compute-0 upbeat_matsumoto[254466]: }
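(The follow-up `ceph-volume raw list --format json` run logged at 01:22:17 reports the same three OSDs, but keyed by osd_uuid rather than osd_id, with devices given as device-mapper paths (`/dev/mapper/ceph_vgN-ceph_lvN`) and `"type": "bluestore"`. A sketch cross-checking the two listings, reusing `summarize()` from the previous sketch; `lvm_json`/`raw_json` stand in for the two captured payloads.)

```python
# Cross-check `raw list` (keyed by osd_uuid) against `lvm list` (keyed by
# osd_id); both payloads describe the same three BlueStore OSDs.
import json

def cross_check(lvm_json: str, raw_json: str):
    by_fsid = {r["osd_fsid"]: r for r in summarize(lvm_json)}
    for osd_uuid, entry in json.loads(raw_json).items():
        lvm = by_fsid[osd_uuid]                  # same OSD seen both ways
        assert entry["osd_id"] == lvm["osd_id"]
        assert entry["type"] == "bluestore"
        # /dev/mapper/ceph_vg0-ceph_lv0 and /dev/ceph_vg0/ceph_lv0 name the
        # same LV through different device nodes.
        print(osd_uuid, entry["osd_id"], entry["device"], lvm["backing"])
```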
Dec 05 01:22:19 compute-0 systemd[1]: libpod-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Deactivated successfully.
Dec 05 01:22:19 compute-0 podman[254450]: 2025-12-05 01:22:19.860981081 +0000 UTC m=+1.596018590 container died e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:22:19 compute-0 systemd[1]: libpod-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Consumed 1.219s CPU time.
Dec 05 01:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416-merged.mount: Deactivated successfully.
Dec 05 01:22:19 compute-0 podman[254450]: 2025-12-05 01:22:19.986456178 +0000 UTC m=+1.721493637 container remove e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:22:20 compute-0 systemd[1]: libpod-conmon-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Deactivated successfully.
Dec 05 01:22:20 compute-0 sudo[254346]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:22:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:22:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69757484-275e-43bc-9abb-04d954c800f9 does not exist
Dec 05 01:22:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1571835d-f16c-4ffd-937b-4e85d124500c does not exist
Dec 05 01:22:20 compute-0 sudo[254666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:22:20 compute-0 sudo[254666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:20 compute-0 sudo[254666]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:20 compute-0 python3.9[254665]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:22:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:20 compute-0 sudo[254691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:22:20 compute-0 sudo[254691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:22:20 compute-0 sudo[254691]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:22:21 compute-0 ceph-mon[192914]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:21 compute-0 sudo[254869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mumgyufxbbpmftvihbydpopwmosapvko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897741.0165122-34-169400900258755/AnsiballZ_setup.py'
Dec 05 01:22:21 compute-0 sudo[254869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:21 compute-0 python3.9[254871]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:22:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:22 compute-0 sudo[254869]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:22 compute-0 sudo[254953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdcaymvjpxibxrretfovnmedstkxtca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897741.0165122-34-169400900258755/AnsiballZ_dnf.py'
Dec 05 01:22:22 compute-0 sudo[254953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:23 compute-0 python3.9[254955]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 05 01:22:23 compute-0 ceph-mon[192914]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:24 compute-0 sudo[254953]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:25 compute-0 ceph-mon[192914]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:25 compute-0 python3.9[255106]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
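The pg_autoscaler targets above can be reproduced from the logged numbers alone: each pool's raw pg target is its share of raw space times its bias times a cluster-wide PG budget, which here works out to 300 (plausibly 3 OSDs times the default mon_target_pg_per_osd of 100, though that split is an inference; only the product is visible in the log). A quick check against four of the pools:

    # (usage_ratio, bias, logged_pg_target) copied from the autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    }
    PG_BUDGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

    for name, (ratio, bias, target) in pools.items():
        assert abs(ratio * bias * PG_BUDGET - target) < 1e-12, name

The "quantized to" values then come from rounding toward a power of two and respecting per-pool minimums, with damping so pg_num only changes when the target is far from the current value; those rules live in the autoscaler module, not in this log.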
Dec 05 01:22:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Cumulative writes: 2026 writes, 8997 keys, 2026 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                            Cumulative WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2026 writes, 8997 keys, 2026 commit groups, 1.0 writes per commit group, ingest: 10.88 MB, 0.02 MB/s
                                            Interval WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    131.8      0.06              0.03         3    0.021       0      0       0.0       0.0
                                              L6      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    110.8     98.2      0.14              0.07         2    0.068    7115    734       0.0       0.0
                                             Sum      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     76.0    108.7      0.20              0.10         5    0.039    7115    734       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     77.9    111.3      0.19              0.10         4    0.048    7115    734       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    110.8     98.2      0.14              0.07         2    0.068    7115    734       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    142.4      0.06              0.03         2    0.028       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.008, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 573.61 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000101 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(35,486.89 KB,0.154376%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
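The throughput figures in the dump follow directly from the 600-second stats interval; recomputing the two quoted rates:

    interval_s = 600.0           # "Uptime(secs): 600.0 total, 600.0 interval"
    writes, syncs = 2026, 2026   # "Interval WAL: 2026 writes, 2026 syncs"
    ingest_mb = 10.88            # "Interval writes: ... ingest: 10.88 MB"

    print(f"{ingest_mb / interval_s:.2f} MB/s")    # 0.02 MB/s, as logged
    print(f"{writes / syncs:.2f} writes per sync") # 1.00, as logged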
Dec 05 01:22:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:27 compute-0 python3.9[255257]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:22:27 compute-0 ceph-mon[192914]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:27 compute-0 podman[255314]: 2025-12-05 01:22:27.704076361 +0000 UTC m=+0.115663892 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:22:27 compute-0 podman[255310]: 2025-12-05 01:22:27.713676051 +0000 UTC m=+0.121392203 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:22:27 compute-0 podman[255316]: 2025-12-05 01:22:27.766427418 +0000 UTC m=+0.167754960 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:22:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:28 compute-0 python3.9[255472]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:22:29 compute-0 python3.9[255622]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:22:29 compute-0 ceph-mon[192914]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:29 compute-0 podman[255649]: 2025-12-05 01:22:29.724019882 +0000 UTC m=+0.139059911 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:22:29 compute-0 podman[158197]: time="2025-12-05T01:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6833 "" "Go-http-client/1.1"
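The two GET lines are the podman system service's access log for its libpod REST API, served on the socket mounted into podman_exporter above (unix:///run/podman/podman.sock). A minimal sketch of the same container listing from the Python standard library; the Unix-socket adapter is hand-rolled because http.client only speaks TCP natively, and reading the socket needs the same root access the exporter has:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection routed over a Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")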
Dec 05 01:22:30 compute-0 sshd-session[254475]: Connection closed by 192.168.122.30 port 58448
Dec 05 01:22:30 compute-0 sshd-session[254469]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:22:30 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 05 01:22:30 compute-0 systemd[1]: session-50.scope: Consumed 8.967s CPU time.
Dec 05 01:22:30 compute-0 systemd-logind[792]: Session 50 logged out. Waiting for processes to exit.
Dec 05 01:22:30 compute-0 systemd-logind[792]: Removed session 50.
Dec 05 01:22:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:22:31 compute-0 openstack_network_exporter[160350]: 
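These exporter errors repeat on every scrape and are expected on a compute node: openstack_network_exporter drives appctl-style calls through daemon control sockets, and neither ovn-northd nor a standalone ovsdb-server runs here, while the dpif-netdev/pmd-* commands only apply to a userspace (DPDK) datapath that this node likely does not use. A sketch of the kind of pre-check that would silence the PID lookups, assuming the conventional control-socket naming (<daemon>.<pid>.ctl under /var/run/openvswitch or /var/run/ovn):

    import glob

    def has_ctl_socket(daemon, run_dirs=("/var/run/openvswitch", "/var/run/ovn")):
        # Daemons create control sockets named <daemon>.<pid>.ctl in their run dir.
        return any(glob.glob(f"{d}/{daemon}.*.ctl") for d in run_dirs)

    for daemon in ("ovn-northd", "ovsdb-server"):
        if not has_ctl_socket(daemon):
            print(f"skipping {daemon}: no control socket found")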
Dec 05 01:22:31 compute-0 ceph-mon[192914]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:33 compute-0 ceph-mon[192914]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:34 compute-0 podman[255668]: 2025-12-05 01:22:34.680424013 +0000 UTC m=+0.096766538 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release-0.7.12=, name=ubi9, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:22:35 compute-0 ceph-mon[192914]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:36 compute-0 sshd-session[255687]: Accepted publickey for zuul from 192.168.122.30 port 41034 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:22:36 compute-0 systemd-logind[792]: New session 51 of user zuul.
Dec 05 01:22:36 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 05 01:22:36 compute-0 sshd-session[255687]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:22:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:37 compute-0 ceph-mon[192914]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:37 compute-0 podman[255814]: 2025-12-05 01:22:37.644819037 +0000 UTC m=+0.170259018 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter)
Dec 05 01:22:37 compute-0 python3.9[255853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:22:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:39 compute-0 ceph-mon[192914]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:40 compute-0 sudo[256030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlonlnocuguenzegdskzxacmmxxkgde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897759.5652452-50-148891188685016/AnsiballZ_file.py'
Dec 05 01:22:40 compute-0 podman[255991]: 2025-12-05 01:22:40.344182615 +0000 UTC m=+0.120471736 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:22:40 compute-0 sudo[256030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:40 compute-0 python3.9[256040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:40 compute-0 sudo[256030]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:41 compute-0 sudo[256190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmutcjlftpxzffjbylsewpylpzcfsyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897760.811745-50-149916716104794/AnsiballZ_file.py'
Dec 05 01:22:41 compute-0 sudo[256190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:41 compute-0 ceph-mon[192914]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:41 compute-0 python3.9[256192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:41 compute-0 sudo[256190]: pam_unix(sudo:session): session closed for user root
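Both file tasks above are idempotent "ensure this directory exists with these attributes" calls against the same path. Stripped of the module plumbing, the equivalent plain Python is roughly as follows (the setype is shown via chcon here; the module applies the SELinux context through its own bindings):

    import os
    import shutil
    import subprocess

    path = "/var/lib/openstack/certs/ovn/default"
    os.makedirs(path, exist_ok=True)                # state=directory, no-op if present
    shutil.chown(path, user="root", group="root")   # owner=root group=root
    os.chmod(path, 0o755)                           # mode=0755
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)  # setype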
Dec 05 01:22:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.544 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.545 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
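
The ceilometer_agent_compute entries above form one complete polling cycle: each pollster first runs its [local_instances] discovery, the discovery returns an empty resource list because this hypervisor hosts no instances yet, so every pollster is skipped before its task is marked finished. A minimal Python sketch of that control flow (illustrative only; the real logic lives in ceilometer/polling/manager.py and the helper names below are invented):

    # Sketch of the discover-then-poll loop seen above; not the actual
    # ceilometer code, just the shape of _internal_pollster_run.
    def discover_local_instances():
        # Would ask the hypervisor (via libvirt/nova) for local instances.
        return []  # compute-0 hosts no instances this cycle

    def run_polling_task(pollster_names):
        for name in pollster_names:
            resources = discover_local_instances()
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            # samples = pollster.get_samples(resources); publish(samples)
        for name in pollster_names:
            print(f"Finished processing pollster [{name}].")

    run_polling_task(["disk.device.read.bytes", "network.incoming.packets"])
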
Dec 05 01:22:42 compute-0 ceph-mon[192914]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:42 compute-0 sudo[256343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqbwiqgbupuyncqklgqeuolycolonzct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897762.0597355-65-228324282895253/AnsiballZ_stat.py'
Dec 05 01:22:42 compute-0 sudo[256343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:42 compute-0 python3.9[256345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:42 compute-0 sudo[256343]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:43 compute-0 sudo[256421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpqlgnlgdanljbzekhftopuweoqwhymm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897762.0597355-65-228324282895253/AnsiballZ_file.py'
Dec 05 01:22:43 compute-0 sudo[256421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:43 compute-0 python3.9[256423]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:43 compute-0 sudo[256421]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:44 compute-0 sudo[256573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uczjkunqxybhbnkukdcuagmdmpnrjtod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897763.868494-65-138004696861630/AnsiballZ_stat.py'
Dec 05 01:22:44 compute-0 sudo[256573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:44 compute-0 python3.9[256575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:44 compute-0 sudo[256573]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:45 compute-0 sudo[256651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wimdubtehycbdfrdjyxjcdguhjrfajtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897763.868494-65-138004696861630/AnsiballZ_file.py'
Dec 05 01:22:45 compute-0 sudo[256651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:45 compute-0 python3.9[256653]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:45 compute-0 sudo[256651]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:45 compute-0 ceph-mon[192914]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
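
The recurring pgmap lines from ceph-mon and ceph-mgr condense cluster state into one line: 321 placement groups, all active+clean, 456 KiB of logical data, 148 MiB raw used out of 60 GiB total. A small parser for these journal lines (the regex is fitted to the exact format shown here, not to any stable Ceph interface):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    print(m.group("ver"), m.group("states"), m.group("avail"))
    # -> 394 321 active+clean 60 GiB
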
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:22:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:46 compute-0 sudo[256803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaltkqcvutwdtjvwqaofdpowymzauxpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897765.8275497-65-51578100992030/AnsiballZ_stat.py'
Dec 05 01:22:46 compute-0 sudo[256803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:46 compute-0 python3.9[256805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:46 compute-0 sudo[256803]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:47 compute-0 sudo[256881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjwpsmtdzfmyywfivnyftpivhcztdla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897765.8275497-65-51578100992030/AnsiballZ_file.py'
Dec 05 01:22:47 compute-0 sudo[256881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:47 compute-0 python3.9[256883]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:47 compute-0 sudo[256881]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:47 compute-0 ceph-mon[192914]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:48 compute-0 sudo[257033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jytblunmbbbrgfjnmnmizdiqkgfhfjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897767.632599-100-30349675548325/AnsiballZ_file.py'
Dec 05 01:22:48 compute-0 sudo[257033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:48 compute-0 python3.9[257035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:48 compute-0 sudo[257033]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:49 compute-0 sudo[257185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvdgihdfvjtujebdwtjjquvjpglouulc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897768.6573956-100-213432157261647/AnsiballZ_file.py'
Dec 05 01:22:49 compute-0 sudo[257185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:49 compute-0 ceph-mon[192914]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:49 compute-0 python3.9[257187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:49 compute-0 sudo[257185]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:50 compute-0 sudo[257338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymsgovifhbwebxhowbjynkyelmqhfzpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897769.8100526-115-272364028158867/AnsiballZ_stat.py'
Dec 05 01:22:50 compute-0 sudo[257338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:50 compute-0 python3.9[257340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:50 compute-0 sudo[257338]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:51 compute-0 sudo[257416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqykmfkxbpyhgwomrruanzeremplsyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897769.8100526-115-272364028158867/AnsiballZ_file.py'
Dec 05 01:22:51 compute-0 sudo[257416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:51 compute-0 python3.9[257418]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:51 compute-0 sudo[257416]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:51 compute-0 ceph-mon[192914]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:51 compute-0 sudo[257568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcowhhgaykknnihviixxzfjflasruqek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897771.53844-115-189365934496115/AnsiballZ_stat.py'
Dec 05 01:22:51 compute-0 sudo[257568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:22:52 compute-0 python3.9[257570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:52 compute-0 sudo[257568]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:52 compute-0 sudo[257646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdlqbcpuvtmelqlhvarvrpgjmbgpwxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897771.53844-115-189365934496115/AnsiballZ_file.py'
Dec 05 01:22:52 compute-0 sudo[257646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:52 compute-0 python3.9[257648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:52 compute-0 sudo[257646]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:53 compute-0 ceph-mon[192914]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:53 compute-0 sudo[257798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpkovoaigphyhjspbohnjqhexphthsjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897773.2069097-115-40448478281157/AnsiballZ_stat.py'
Dec 05 01:22:53 compute-0 sudo[257798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:53 compute-0 python3.9[257800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:54 compute-0 sudo[257798]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:54 compute-0 sudo[257876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarvgxfdaeoynizxdzootrlypegtmxzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897773.2069097-115-40448478281157/AnsiballZ_file.py'
Dec 05 01:22:54 compute-0 sudo[257876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:54 compute-0 python3.9[257878]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:54 compute-0 sudo[257876]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:55 compute-0 ceph-mon[192914]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:55 compute-0 sudo[258028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksedoknzriofvptwkzvqklejhqoypbbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897775.0932763-150-208587718329269/AnsiballZ_file.py'
Dec 05 01:22:55 compute-0 sudo[258028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:55 compute-0 python3.9[258030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:55 compute-0 sudo[258028]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:56 compute-0 sudo[258180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwlyjusqlcozeqxbvykyglyftkpcwft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897776.1073933-150-210684776180521/AnsiballZ_file.py'
Dec 05 01:22:56 compute-0 sudo[258180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:56 compute-0 python3.9[258182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:22:56 compute-0 sudo[258180]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
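
The periodic mon.compute-0 _set_new_cache_sizes entries show the monitor autotuning a roughly 1 GiB memory target, split across what are most likely the incremental-osdmap, full-osdmap, and key-value caches; the three allocations should sum to just under cache_size. Checking the arithmetic from the line above:

    # Figures copied from the _set_new_cache_sizes line above; the meaning
    # of the three allocations is an interpretation, not a documented API.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232   # incremental / full osdmap caches
    kv_alloc = 322961408                 # key-value store cache

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)     # 1019215872 838859 (~0.08% slack)
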
Dec 05 01:22:57 compute-0 ceph-mon[192914]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:57 compute-0 sudo[258332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otjtypnwqbxopqoolqwntpgrsbwzxsnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897777.1337445-165-142445557368831/AnsiballZ_stat.py'
Dec 05 01:22:57 compute-0 sudo[258332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:57 compute-0 python3.9[258334]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:22:57 compute-0 sudo[258332]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:58 compute-0 podman[258406]: 2025-12-05 01:22:58.708826467 +0000 UTC m=+0.106019504 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:22:58 compute-0 podman[258405]: 2025-12-05 01:22:58.720139511 +0000 UTC m=+0.123942072 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:22:58 compute-0 podman[258407]: 2025-12-05 01:22:58.778307106 +0000 UTC m=+0.166007230 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
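
Each podman health_status entry above is a periodic healthcheck: podman executes the configured test command inside the container (the '/openstack/healthcheck ...' entries in config_data) and records the status plus a failing streak. The same check can be triggered by hand; a sketch driving the podman CLI (container name taken from the log):

    import subprocess

    def health(container):
        # `podman healthcheck run` exits 0 when the configured test passes;
        # the inspect format string reads the recorded health state.
        run = subprocess.run(["podman", "healthcheck", "run", container])
        state = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            capture_output=True, text=True)
        return run.returncode, state.stdout.strip()

    print(health("ovn_controller"))   # e.g. (0, 'healthy')
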
Dec 05 01:22:58 compute-0 sudo[258520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvxlevhlilbjwqsdpewvqspffjcherb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897777.1337445-165-142445557368831/AnsiballZ_copy.py'
Dec 05 01:22:58 compute-0 sudo[258520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:58 compute-0 python3.9[258522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897777.1337445-165-142445557368831/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=52f8f3f3b93de18584ffd2f1bf0edd763c8f4107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:22:59 compute-0 sudo[258520]: pam_unix(sudo:session): session closed for user root
Dec 05 01:22:59 compute-0 ceph-mon[192914]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:22:59 compute-0 podman[158197]: time="2025-12-05T01:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
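
The podman[158197] request lines are the libpod REST API answering over /run/podman/podman.sock; the Go-http-client user agent is the prometheus-podman-exporter polling it. The same endpoint can be queried from Python with only the standard library (socket path and API version copied from the log; reading the socket normally requires root):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Just enough HTTP-over-UNIX-socket support for the libpod API.
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())))  # e.g. 200 and a count
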
Dec 05 01:22:59 compute-0 sudo[258672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iedmwbgjcxwabdmtchpwpxbxousmmdft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897779.2800305-165-278497685336608/AnsiballZ_stat.py'
Dec 05 01:22:59 compute-0 sudo[258672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:22:59 compute-0 podman[258674]: 2025-12-05 01:22:59.971743456 +0000 UTC m=+0.128478817 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 01:23:00 compute-0 python3.9[258675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:00 compute-0 sudo[258672]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:00 compute-0 sudo[258815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgliudkdyopqrihhuwpaqwikpwzvcmrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897779.2800305-165-278497685336608/AnsiballZ_copy.py'
Dec 05 01:23:00 compute-0 sudo[258815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:01 compute-0 python3.9[258817]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897779.2800305-165-278497685336608/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=464076ef88dcc89aa3cbba91e13b4b726d71f651 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:01 compute-0 sudo[258815]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
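
These exporter errors are expected on a compute node: ovn-northd runs on the control plane, so no local control socket exists for it; the OVS database server's control socket is likewise absent from the rundir the exporter scans; and the dpif-netdev/* appctl calls only answer for a userspace (DPDK) datapath, which this host does not appear to use. A quick check of which control sockets actually exist (the rundir paths are the conventional defaults, an assumption for this host):

    import glob

    # Conventional rundirs for OVS/OVN control sockets; empty results line
    # up with the "no control socket files found" errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")
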
Dec 05 01:23:01 compute-0 ceph-mon[192914]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:01 compute-0 sudo[258967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexuqokvvpjtqnaqmxsxgpeqtuuhtbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897781.2572145-165-53604373244575/AnsiballZ_stat.py'
Dec 05 01:23:01 compute-0 sudo[258967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:02 compute-0 python3.9[258969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:02 compute-0 sudo[258967]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:02 compute-0 sudo[259090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txotokkyqflgryuihegvftbppfelljmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897781.2572145-165-53604373244575/AnsiballZ_copy.py'
Dec 05 01:23:02 compute-0 sudo[259090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:02 compute-0 python3.9[259092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897781.2572145-165-53604373244575/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f1718879b0aeb9c0646235a3fdcd720acf7caa59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:02 compute-0 sudo[259090]: pam_unix(sudo:session): session closed for user root
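
The sudo/python3.9 sequences above are Ansible's check-then-write deployment: ansible.legacy.stat hashes the target with SHA-1, the controller compares that against the source checksum, a differing file is shipped with ansible.legacy.copy (as for the neutron-metadata key just above), and an unchanged one only gets its owner/group/mode re-asserted via ansible.legacy.file. The same idempotent pattern in plain Python (a sketch, not the module source):

    import hashlib, os, shutil

    def sha1sum(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def deploy(src, dest, mode=0o600):
        # Ship the file only when missing or its SHA-1 differs, mirroring
        # the stat -> copy sequence above; mode is re-asserted every run.
        changed = not os.path.exists(dest) or sha1sum(dest) != sha1sum(src)
        if changed:
            shutil.copyfile(src, dest)
        os.chmod(dest, mode)
        return changed

    # deploy("compute-0.ctlplane.example.com-tls.key",
    #        "/var/lib/openstack/certs/neutron-metadata/default/tls.key")
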
Dec 05 01:23:03 compute-0 ceph-mon[192914]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:03 compute-0 sudo[259242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfelmjjkasabzcqnkrunxfzseofffwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897783.2531831-209-53740078214161/AnsiballZ_file.py'
Dec 05 01:23:03 compute-0 sudo[259242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:04 compute-0 python3.9[259244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:04 compute-0 sudo[259242]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:04 compute-0 sudo[259410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsamxuuyotmnhqykikdyhxajbuvcmoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897784.293585-209-39106938086113/AnsiballZ_file.py'
Dec 05 01:23:04 compute-0 sudo[259410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:04 compute-0 podman[259368]: 2025-12-05 01:23:04.875677083 +0000 UTC m=+0.119119078 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, container_name=kepler, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec 05 01:23:05 compute-0 python3.9[259415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:05 compute-0 sudo[259410]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:05 compute-0 ceph-mon[192914]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:05 compute-0 sudo[259567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzphzygfrcgmmeimxzflamttowofinbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897785.4127336-224-223884502030509/AnsiballZ_stat.py'
Dec 05 01:23:05 compute-0 sudo[259567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:06 compute-0 python3.9[259569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:06 compute-0 sudo[259567]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:06 compute-0 ceph-mon[192914]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:06 compute-0 sudo[259645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkujnzpnbcbsqfsbixdixxciyvpyiaza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897785.4127336-224-223884502030509/AnsiballZ_file.py'
Dec 05 01:23:06 compute-0 sudo[259645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:06 compute-0 python3.9[259647]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:06 compute-0 sudo[259645]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:07 compute-0 sudo[259797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycysvjksweqlruvacmoczwcamhlchokc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897787.0941818-224-189441456455251/AnsiballZ_stat.py'
Dec 05 01:23:07 compute-0 sudo[259797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:07 compute-0 python3.9[259799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:07 compute-0 sudo[259797]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:08 compute-0 sudo[259892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtzxfdazavdwrhitvnivrwduszmevyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897787.0941818-224-189441456455251/AnsiballZ_file.py'
Dec 05 01:23:08 compute-0 sudo[259892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:08 compute-0 podman[259849]: 2025-12-05 01:23:08.347309668 +0000 UTC m=+0.135712879 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:23:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.442468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788442581, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 251, "total_data_size": 926766, "memory_usage": 939560, "flush_reason": "Manual Compaction"}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788454868, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 918662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8966, "largest_seqno": 9686, "table_properties": {"data_size": 914936, "index_size": 1570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8048, "raw_average_key_size": 18, "raw_value_size": 907452, "raw_average_value_size": 2090, "num_data_blocks": 73, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897727, "oldest_key_time": 1764897727, "file_creation_time": 1764897788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12542 microseconds, and 6372 cpu microseconds.
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.455017) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 918662 bytes OK
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.455042) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457309) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457331) EVENT_LOG_v1 {"time_micros": 1764897788457324, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457355) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 923052, prev total WAL file size 923052, number of live WAL files 2.
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.458617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(897KB)], [23(6606KB)]
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788458716, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7683278, "oldest_snapshot_seqno": -1}
Dec 05 01:23:08 compute-0 python3.9[259898]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3282 keys, 6082342 bytes, temperature: kUnknown
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788522735, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6082342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6058392, "index_size": 14625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79565, "raw_average_key_size": 24, "raw_value_size": 5996976, "raw_average_value_size": 1827, "num_data_blocks": 639, "num_entries": 3282, "num_filter_entries": 3282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764897788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.523604) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6082342 bytes
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.526690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 94.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.0) write-amplify(6.6) OK, records in: 3796, records dropped: 514 output_compression: NoCompression
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.526734) EVENT_LOG_v1 {"time_micros": 1764897788526714, "job": 8, "event": "compaction_finished", "compaction_time_micros": 64109, "compaction_time_cpu_micros": 31640, "output_level": 6, "num_output_files": 1, "total_output_size": 6082342, "num_input_records": 3796, "num_output_records": 3282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788528047, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788530986, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.458271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
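
(Annotation: the rocksdb burst above (memtable flush, job 7, then the manual compaction, job 8) embeds machine-readable EVENT_LOG_v1 records: each one is a single-line JSON object after the marker. A small sketch that recovers them from journal text, which is enough to chart flush and compaction timings; the compaction_finished event above, for instance, reports compaction_time_micros 64109 for 6082342 output bytes:

    import json, re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        # Yields dicts like {"event": "flush_started", ...} from journal lines.
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # e.g. [e["compaction_time_micros"] for e in rocksdb_events(journal)
    #       if e.get("event") == "compaction_finished"]

End annotation.)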
Dec 05 01:23:08 compute-0 sudo[259892]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:09 compute-0 sudo[260048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwptkrjsubjtikhzmwrmjzlvvtbfwae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897788.7929444-224-139133030812978/AnsiballZ_stat.py'
Dec 05 01:23:09 compute-0 sudo[260048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:09 compute-0 ceph-mon[192914]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:09 compute-0 python3.9[260050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:09 compute-0 sudo[260048]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:10 compute-0 sudo[260126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fifbfepxkdiobveasezcxlqonxefguqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897788.7929444-224-139133030812978/AnsiballZ_file.py'
Dec 05 01:23:10 compute-0 sudo[260126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:10 compute-0 python3.9[260128]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:10 compute-0 sudo[260126]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:10 compute-0 podman[260153]: 2025-12-05 01:23:10.719564752 +0000 UTC m=+0.128768145 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
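
(Annotation: given the node_exporter config above (host networking, port 9100, TLS via web.config.file, most collectors disabled except systemd with the edpm/ovs/virt unit filter), a quick scrape check looks like the sketch below. The CA bundle path is a placeholder, not taken from this log; point it at whatever CA signed the telemetry certs on this node:

    import ssl, urllib.request

    CA_BUNDLE = "/path/to/telemetry-ca-bundle.pem"  # placeholder path
    ctx = ssl.create_default_context(cafile=CA_BUNDLE)
    url = "https://compute-0.ctlplane.example.com:9100/metrics"
    with urllib.request.urlopen(url, context=ctx) as resp:
        series = [l for l in resp.read().decode().splitlines()
                  if l.startswith("node_")]
    print(len(series), "node_* samples exposed")

End annotation.)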
Dec 05 01:23:11 compute-0 sudo[260302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuajjfzshmqzwdeyjelzzkimoqtayvdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897790.7732966-259-207420311246423/AnsiballZ_file.py'
Dec 05 01:23:11 compute-0 sudo[260302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:11 compute-0 ceph-mon[192914]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:11 compute-0 python3.9[260304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:11 compute-0 sudo[260302]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:12 compute-0 sudo[260454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrzgcnjqzrwfnyglrswejaaqopkwmpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897791.8264103-259-238501026378454/AnsiballZ_file.py'
Dec 05 01:23:12 compute-0 sudo[260454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:12 compute-0 python3.9[260456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:12 compute-0 sudo[260454]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:13 compute-0 ceph-mon[192914]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:13 compute-0 sudo[260606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iihcmznuhithmcoxtofaqhggdceltmjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897792.9422653-274-98450141594907/AnsiballZ_stat.py'
Dec 05 01:23:13 compute-0 sudo[260606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:13 compute-0 python3.9[260608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:13 compute-0 sudo[260606]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:14 compute-0 sudo[260684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuiowttdyagunbghhqrxrlroyjzaajen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897792.9422653-274-98450141594907/AnsiballZ_file.py'
Dec 05 01:23:14 compute-0 sudo[260684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:14 compute-0 python3.9[260686]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:14 compute-0 sudo[260684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:15 compute-0 sudo[260836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrbjthxxaquexjwodujvlpkvndwkptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897794.73546-274-142128809167816/AnsiballZ_stat.py'
Dec 05 01:23:15 compute-0 sudo[260836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:15 compute-0 python3.9[260838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:15 compute-0 ceph-mon[192914]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:15 compute-0 sudo[260836]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:15 compute-0 sudo[260914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osstdhbehhcjicmaddcrymszsjslefxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897794.73546-274-142128809167816/AnsiballZ_file.py'
Dec 05 01:23:15 compute-0 sudo[260914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:23:16
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control']
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
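
(Annotation: "prepared 0/10 changes" means this upmap balancer pass evaluated its per-pass budget of 10 moves across the pools listed and found nothing to relocate, i.e. PG placement is already within the 0.05 max-misplaced budget. Assuming admin keyring access, the same information is available on demand; key names in the JSON vary slightly between Ceph releases:

    import json, subprocess

    # "ceph balancer status" is a standard mgr command; --format json keeps
    # the output parsable. Keys below are typical but release-dependent.
    raw = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(raw)
    print(status.get("mode"), status.get("active"),
          status.get("optimize_result"))

End annotation.)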
Dec 05 01:23:16 compute-0 python3.9[260916]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:16 compute-0 sudo[260914]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:23:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:17 compute-0 sudo[261066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orxvjjxinamuksbgljphfpldzzeuluxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897796.4785094-274-63321677425433/AnsiballZ_stat.py'
Dec 05 01:23:17 compute-0 sudo[261066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:17 compute-0 python3.9[261068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:17 compute-0 sudo[261066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:17 compute-0 ceph-mon[192914]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:17 compute-0 sudo[261144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyzjimirsbqujiefiyyndmjomrmxdvhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897796.4785094-274-63321677425433/AnsiballZ_file.py'
Dec 05 01:23:17 compute-0 sudo[261144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:18 compute-0 python3.9[261146]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:18 compute-0 sudo[261144]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:19 compute-0 sudo[261296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkkcigmbempckxnwyusewzdaduvipcjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897798.9412773-325-11336512496307/AnsiballZ_file.py'
Dec 05 01:23:19 compute-0 sudo[261296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:19 compute-0 ceph-mon[192914]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:19 compute-0 python3.9[261299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:19 compute-0 sudo[261296]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:20 compute-0 sudo[261399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:20 compute-0 sudo[261399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:20 compute-0 sudo[261399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:20 compute-0 sudo[261448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:23:20 compute-0 sudo[261448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:20 compute-0 sudo[261448]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:20 compute-0 sudo[261497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhzydnihpgemiwdqkpajuohnxkqdaktm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897800.0854194-333-178699962604550/AnsiballZ_stat.py'
Dec 05 01:23:20 compute-0 sudo[261497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:20 compute-0 sudo[261501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:20 compute-0 sudo[261501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:20 compute-0 sudo[261501]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:20 compute-0 sudo[261527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:23:20 compute-0 sudo[261527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:20 compute-0 python3.9[261502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:21 compute-0 sudo[261497]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:21 compute-0 ceph-mon[192914]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:21 compute-0 sudo[261652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yricjcktggcaphitomicjebqiswxcdao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897800.0854194-333-178699962604550/AnsiballZ_file.py'
Dec 05 01:23:21 compute-0 sudo[261652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:21 compute-0 sudo[261527]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0393788-1c88-474b-836f-a898dfc8ed71 does not exist
Dec 05 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6e3c338-3276-4072-bdbb-f728d98eee8e does not exist
Dec 05 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 66534c34-b662-49f4-8e91-91ee001b37b8 does not exist
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:23:21 compute-0 python3.9[261658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:21 compute-0 sudo[261652]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:21 compute-0 sudo[261659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:21 compute-0 sudo[261659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:21 compute-0 sudo[261659]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:21 compute-0 sudo[261687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:23:21 compute-0 sudo[261687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:21 compute-0 sudo[261687]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:22 compute-0 sudo[261733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:22 compute-0 sudo[261733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:22 compute-0 sudo[261733]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:22 compute-0 sudo[261771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:23:22 compute-0 sudo[261771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:23:22 compute-0 sudo[261943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gskcbywgqdqnvaituhxmuowvztshqivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897802.098725-346-114562710412046/AnsiballZ_file.py'
Dec 05 01:23:22 compute-0 sudo[261943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.649753571 +0000 UTC m=+0.082813270 container create d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.620066567 +0000 UTC m=+0.053126346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:22 compute-0 systemd[1]: Started libpod-conmon-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope.
Dec 05 01:23:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:22 compute-0 python3.9[261955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.804664991 +0000 UTC m=+0.237724760 container init d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.819145014 +0000 UTC m=+0.252204693 container start d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:23:22 compute-0 sudo[261943]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.824261076 +0000 UTC m=+0.257320755 container attach d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:23:22 compute-0 quizzical_driscoll[261962]: 167 167
Dec 05 01:23:22 compute-0 systemd[1]: libpod-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope: Deactivated successfully.
Dec 05 01:23:22 compute-0 conmon[261962]: conmon d9a8e2834d2d85e6617f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope/container/memory.events
Dec 05 01:23:22 compute-0 podman[261967]: 2025-12-05 01:23:22.911697793 +0000 UTC m=+0.057447296 container died d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-081c40a12d670955e9fe6f2fb8220ea55e29aed5b0186d7d30d2c2c63bfcfb9a-merged.mount: Deactivated successfully.
Dec 05 01:23:22 compute-0 podman[261967]: 2025-12-05 01:23:22.987800156 +0000 UTC m=+0.133549599 container remove d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:23:22 compute-0 systemd[1]: libpod-conmon-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope: Deactivated successfully.
Dec 05 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.254784397 +0000 UTC m=+0.082027248 container create aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.217513762 +0000 UTC m=+0.044756654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:23 compute-0 systemd[1]: Started libpod-conmon-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope.
Dec 05 01:23:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.422781701 +0000 UTC m=+0.250024532 container init aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec 05 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.450108399 +0000 UTC m=+0.277351200 container start aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.454608393 +0000 UTC m=+0.281851204 container attach aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:23:23 compute-0 ceph-mon[192914]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:23 compute-0 sudo[262158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumdzbvlajmumreygdontrqyzvvxyvvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897803.0780754-354-22151287282668/AnsiballZ_stat.py'
Dec 05 01:23:23 compute-0 sudo[262158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:23 compute-0 python3.9[262160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:23 compute-0 sudo[262158]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:24 compute-0 sudo[262246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szlaafyvervdmthiupkfhzupsbzxkrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897803.0780754-354-22151287282668/AnsiballZ_file.py'
Dec 05 01:23:24 compute-0 sudo[262246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:24 compute-0 python3.9[262249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:24 compute-0 sudo[262246]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:24 compute-0 pensive_liskov[262103]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:23:24 compute-0 pensive_liskov[262103]: --> relative data size: 1.0
Dec 05 01:23:24 compute-0 pensive_liskov[262103]: --> All data devices are unavailable
Dec 05 01:23:24 compute-0 systemd[1]: libpod-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Deactivated successfully.
Dec 05 01:23:24 compute-0 systemd[1]: libpod-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Consumed 1.243s CPU time.
Dec 05 01:23:24 compute-0 podman[262056]: 2025-12-05 01:23:24.782843656 +0000 UTC m=+1.610086507 container died aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 05 01:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324-merged.mount: Deactivated successfully.
Dec 05 01:23:24 compute-0 podman[262056]: 2025-12-05 01:23:24.876094235 +0000 UTC m=+1.703337086 container remove aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:23:24 compute-0 systemd[1]: libpod-conmon-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Deactivated successfully.
Dec 05 01:23:24 compute-0 sudo[261771]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:25 compute-0 sudo[262314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:25 compute-0 sudo[262314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:25 compute-0 sudo[262314]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:25 compute-0 sudo[262372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:23:25 compute-0 sudo[262372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:25 compute-0 sudo[262372]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:25 compute-0 sudo[262426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:25 compute-0 sudo[262426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:25 compute-0 sudo[262426]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:25 compute-0 sudo[262474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:23:25 compute-0 sudo[262474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:25 compute-0 sudo[262526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxjhcarowzgfvxbcglhboqoygwpgufvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897804.9697242-367-238958053940576/AnsiballZ_file.py'
Dec 05 01:23:25 compute-0 sudo[262526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:25 compute-0 ceph-mon[192914]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:25 compute-0 python3.9[262528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:25 compute-0 sudo[262526]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.056632668 +0000 UTC m=+0.090065252 container create 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.022372717 +0000 UTC m=+0.055805341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:26 compute-0 systemd[1]: Started libpod-conmon-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope.
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
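The pg_autoscaler pass above exposes its arithmetic directly: every logged "pg target" equals the pool's fraction of used space times its bias times 300. A minimal sketch reproducing that calculation, assuming the factor 300 decomposes as the default 100 PGs per OSD across this host's 3 OSDs (the decomposition is an inference; the products themselves match the logged values exactly):

    # Reproduce the pg_autoscaler numbers logged above.
    # Assumption: factor 300 = target_pg_per_osd (100, the Ceph default) * 3 OSDs.
    def pg_target(usage_ratio: float, bias: float,
                  n_osds: int = 3, target_pg_per_osd: int = 100) -> float:
        return usage_ratio * bias * target_pg_per_osd * n_osds

    def quantize(target: float, pg_num_min: int = 1) -> int:
        # Round up to a power of two, never below the pool's pg_num_min
        # (16 for cephfs.cephfs.meta, hence its "quantized to 16").
        n = max(pg_num_min, 1)
        while n < target:
            n *= 2
        return n

    # Checks against the log lines above:
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
    assert quantize(pg_target(5.087256625643029e-07, 4.0), pg_num_min=16) == 16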
Dec 05 01:23:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.195433561 +0000 UTC m=+0.228866185 container init 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.213347688 +0000 UTC m=+0.246780262 container start 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.221183546 +0000 UTC m=+0.254616130 container attach 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:23:26 compute-0 interesting_lalande[262650]: 167 167
Dec 05 01:23:26 compute-0 systemd[1]: libpod-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope: Deactivated successfully.
Dec 05 01:23:26 compute-0 conmon[262650]: conmon 0cf8b23ca31867352974 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope/container/memory.events
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.228342825 +0000 UTC m=+0.261775409 container died 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5f42265189c78743d83baaa4192545635f7779fa1cb87d9fed17d31ff3996a-merged.mount: Deactivated successfully.
Dec 05 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.309512688 +0000 UTC m=+0.342945262 container remove 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:23:26 compute-0 systemd[1]: libpod-conmon-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope: Deactivated successfully.
Dec 05 01:23:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:26 compute-0 sudo[262763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwxkwtxtmixwotlyylhzaeakpwybmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897806.030414-375-111248857391653/AnsiballZ_stat.py'
Dec 05 01:23:26 compute-0 sudo[262763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.597483372 +0000 UTC m=+0.077891803 container create 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:23:26 compute-0 ceph-mon[192914]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.573415304 +0000 UTC m=+0.053823775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:26 compute-0 systemd[1]: Started libpod-conmon-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope.
Dec 05 01:23:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.761063363 +0000 UTC m=+0.241471824 container init 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:23:26 compute-0 python3.9[262768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.793158124 +0000 UTC m=+0.273566585 container start 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.799424738 +0000 UTC m=+0.279833229 container attach 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:23:26 compute-0 sudo[262763]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:27 compute-0 sudo[262898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpinvpnfbtkbqhnmonueipyvvtuakwiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897806.030414-375-111248857391653/AnsiballZ_copy.py'
Dec 05 01:23:27 compute-0 sudo[262898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:27 compute-0 gallant_chaum[262773]: {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     "0": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "devices": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "/dev/loop3"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             ],
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_name": "ceph_lv0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_size": "21470642176",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "name": "ceph_lv0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "tags": {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_name": "ceph",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.crush_device_class": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.encrypted": "0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_id": "0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.vdo": "0"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             },
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "vg_name": "ceph_vg0"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         }
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     ],
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     "1": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "devices": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "/dev/loop4"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             ],
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_name": "ceph_lv1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_size": "21470642176",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "name": "ceph_lv1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "tags": {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_name": "ceph",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.crush_device_class": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.encrypted": "0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_id": "1",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.vdo": "0"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             },
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "vg_name": "ceph_vg1"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         }
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     ],
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     "2": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "devices": [
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "/dev/loop5"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             ],
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_name": "ceph_lv2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_size": "21470642176",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "name": "ceph_lv2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "tags": {
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.cluster_name": "ceph",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.crush_device_class": "",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.encrypted": "0",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osd_id": "2",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:                 "ceph.vdo": "0"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             },
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "type": "block",
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:             "vg_name": "ceph_vg2"
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:         }
Dec 05 01:23:27 compute-0 gallant_chaum[262773]:     ]
Dec 05 01:23:27 compute-0 gallant_chaum[262773]: }
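The JSON block above is the reply to the cephadm `ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json` call logged at 01:23:25: a map of OSD id to its logical volumes, with the cluster and OSD fsids repeated in lv_tags. A minimal sketch of reducing such output to an OSD-to-device map, assuming the journald container-name prefix has already been stripped (the helper name is illustrative):

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        # {osd_id: lv_path} for the "block" LV of each BlueStore OSD.
        mapping = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    mapping[int(osd_id)] = lv["lv_path"]
        return mapping

    # For the listing above this yields:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
    #  2: '/dev/ceph_vg2/ceph_lv2'}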
Dec 05 01:23:27 compute-0 python3.9[262900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897806.030414-375-111248857391653/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
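The copy task above records the expected SHA-1 of the nova CA bundle (aad3215deeeb1eba7754fd1a27527afcf2bb5051), which is how Ansible decides whether the destination already matches the source. A minimal sketch of the same verification, assuming the file is readable at the logged path:

    import hashlib

    def sha1_of(path: str) -> str:
        # Stream the file so large bundles do not load into memory at once.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "aad3215deeeb1eba7754fd1a27527afcf2bb5051"
    assert sha1_of("/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem") == expected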
Dec 05 01:23:27 compute-0 systemd[1]: libpod-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope: Deactivated successfully.
Dec 05 01:23:27 compute-0 podman[262750]: 2025-12-05 01:23:27.642372678 +0000 UTC m=+1.122781139 container died 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:23:27 compute-0 sudo[262898]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb-merged.mount: Deactivated successfully.
Dec 05 01:23:27 compute-0 podman[262750]: 2025-12-05 01:23:27.757761032 +0000 UTC m=+1.238169463 container remove 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:23:27 compute-0 systemd[1]: libpod-conmon-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope: Deactivated successfully.
Dec 05 01:23:27 compute-0 sudo[262474]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:27 compute-0 sudo[262941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:27 compute-0 sudo[262941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:27 compute-0 sudo[262941]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:28 compute-0 sudo[262987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:23:28 compute-0 sudo[262987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:28 compute-0 sudo[262987]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:28 compute-0 sudo[263041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:28 compute-0 sudo[263041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:28 compute-0 sudo[263041]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:28 compute-0 sudo[263091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:23:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:28 compute-0 sudo[263091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:28 compute-0 sudo[263168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stwndjkncjfudhutgrgvokluqxhlngmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897808.0215318-391-39711538938804/AnsiballZ_file.py'
Dec 05 01:23:28 compute-0 sudo[263168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:28 compute-0 python3.9[263177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:28 compute-0 sudo[263168]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:28 compute-0 podman[263207]: 2025-12-05 01:23:28.970647922 +0000 UTC m=+0.084515607 container create ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:23:29 compute-0 systemd[1]: Started libpod-conmon-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope.
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:28.937544633 +0000 UTC m=+0.051412378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.097784872 +0000 UTC m=+0.211652577 container init ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.112208892 +0000 UTC m=+0.226076557 container start ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:23:29 compute-0 infallible_aryabhata[263258]: 167 167
Dec 05 01:23:29 compute-0 systemd[1]: libpod-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope: Deactivated successfully.
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.119510545 +0000 UTC m=+0.233378250 container attach ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.119860655 +0000 UTC m=+0.233728330 container died ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:23:29 compute-0 podman[263245]: 2025-12-05 01:23:29.130622473 +0000 UTC m=+0.108446071 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 01:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5388b753c3d0ebddf0dfb472a1d55fec451bfd3207f17e41d74855f0a3ef8303-merged.mount: Deactivated successfully.
Dec 05 01:23:29 compute-0 podman[263252]: 2025-12-05 01:23:29.168743682 +0000 UTC m=+0.125763303 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.176135497 +0000 UTC m=+0.290003152 container remove ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:23:29 compute-0 podman[263248]: 2025-12-05 01:23:29.181774353 +0000 UTC m=+0.141538200 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:23:29 compute-0 systemd[1]: libpod-conmon-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope: Deactivated successfully.
Dec 05 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.365216226 +0000 UTC m=+0.061775166 container create a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:23:29 compute-0 systemd[1]: Started libpod-conmon-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope.
Dec 05 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.347061842 +0000 UTC m=+0.043620792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:23:29 compute-0 ceph-mon[192914]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.484495507 +0000 UTC m=+0.181054457 container init a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.496060018 +0000 UTC m=+0.192618958 container start a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.501322594 +0000 UTC m=+0.197881534 container attach a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:23:29 compute-0 sudo[263481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfrhmjycodmnibvddceamqlxepzmcraz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897809.1252224-399-152724320691118/AnsiballZ_stat.py'
Dec 05 01:23:29 compute-0 sudo[263481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:29 compute-0 podman[158197]: time="2025-12-05T01:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34393 "" "Go-http-client/1.1"
Dec 05 01:23:29 compute-0 python3.9[263483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7244 "" "Go-http-client/1.1"
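The two `@ - -` access-log lines come from the podman system service answering libpod REST calls; the podman_exporter config logged at 01:23:29 points CONTAINER_HOST at unix:///run/podman/podman.sock. A minimal sketch of issuing the same containers/json query over that socket with only the standard library (the socket path is taken from that config and may differ on other hosts):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection that dials a unix socket instead of TCP.
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read().decode())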
Dec 05 01:23:29 compute-0 sudo[263481]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:30 compute-0 sudo[263577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqixzbycdankaflibdwkvuyyrkqqksls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897809.1252224-399-152724320691118/AnsiballZ_file.py'
Dec 05 01:23:30 compute-0 sudo[263577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:30 compute-0 podman[263534]: 2025-12-05 01:23:30.315406034 +0000 UTC m=+0.125056853 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:23:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:30 compute-0 python3.9[263585]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:30 compute-0 sudo[263577]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]: {
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_id": 0,
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "type": "bluestore"
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     },
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_id": 1,
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "type": "bluestore"
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     },
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_id": 2,
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:         "type": "bluestore"
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]:     }
Dec 05 01:23:30 compute-0 unruffled_bardeen[263450]: }
Dec 05 01:23:30 compute-0 systemd[1]: libpod-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Deactivated successfully.
Dec 05 01:23:30 compute-0 systemd[1]: libpod-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Consumed 1.075s CPU time.
Dec 05 01:23:30 compute-0 podman[263408]: 2025-12-05 01:23:30.591060136 +0000 UTC m=+1.287619086 container died a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04-merged.mount: Deactivated successfully.
Dec 05 01:23:30 compute-0 podman[263408]: 2025-12-05 01:23:30.67370693 +0000 UTC m=+1.370265870 container remove a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:23:30 compute-0 systemd[1]: libpod-conmon-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Deactivated successfully.
Dec 05 01:23:30 compute-0 sudo[263091]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:23:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:23:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c3fc684-eb87-466e-a1d3-32d9715143a1 does not exist
Dec 05 01:23:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 25616021-5693-4488-ba3a-567f689a08d8 does not exist
Dec 05 01:23:30 compute-0 sudo[263666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:23:30 compute-0 sudo[263666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:30 compute-0 sudo[263666]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:31 compute-0 sudo[263722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:23:31 compute-0 sudo[263722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:23:31 compute-0 sudo[263722]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:31 compute-0 sudo[263820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwyuldmatbvfzbqahwkjuwvlqsssrgaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897810.78755-412-63750725201530/AnsiballZ_file.py'
Dec 05 01:23:31 compute-0 sudo[263820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:23:31 compute-0 python3.9[263822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:31 compute-0 ceph-mon[192914]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:23:31 compute-0 sudo[263820]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:32 compute-0 sudo[263972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmbtnrczlkehovaxnyrigznfdvkbnuyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897811.7205164-420-248497788820134/AnsiballZ_stat.py'
Dec 05 01:23:32 compute-0 sudo[263972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:32 compute-0 python3.9[263974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:32 compute-0 sudo[263972]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:32 compute-0 sudo[264050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymxausvfwgtzfjksibrmvjbwbzgfamk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897811.7205164-420-248497788820134/AnsiballZ_file.py'
Dec 05 01:23:32 compute-0 sudo[264050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:33 compute-0 python3.9[264052]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:33 compute-0 sudo[264050]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:33 compute-0 ceph-mon[192914]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:33 compute-0 sudo[264202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqxluhrukpmzzazmjptrgaplwuuhhvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897813.4781258-433-269153357486540/AnsiballZ_file.py'
Dec 05 01:23:33 compute-0 sudo[264202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:34 compute-0 python3.9[264204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:34 compute-0 sudo[264202]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:35 compute-0 sudo[264371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esobzuwhwjigmvbuykampacbiuqoogfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897814.5606225-441-63491421155376/AnsiballZ_stat.py'
Dec 05 01:23:35 compute-0 podman[264328]: 2025-12-05 01:23:35.142679082 +0000 UTC m=+0.108598966 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:23:35 compute-0 sudo[264371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:35 compute-0 python3.9[264376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:35 compute-0 sudo[264371]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:35 compute-0 ceph-mon[192914]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:36 compute-0 sudo[264497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimvteoctzcvjnbfzeghsaopaidfsxzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897814.5606225-441-63491421155376/AnsiballZ_copy.py'
Dec 05 01:23:36 compute-0 sudo[264497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:36 compute-0 python3.9[264499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897814.5606225-441-63491421155376/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:36 compute-0 sudo[264497]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:37 compute-0 sudo[264649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyubveyhdnsjczaofrknahobsbxqqcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897816.708431-457-261202960432418/AnsiballZ_file.py'
Dec 05 01:23:37 compute-0 sudo[264649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:37 compute-0 python3.9[264651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:37 compute-0 ceph-mon[192914]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:37 compute-0 sudo[264649]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:38 compute-0 sudo[264801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfpdljpjzoyhbxakgyvcihualozqwou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897817.8290977-465-174681337786078/AnsiballZ_stat.py'
Dec 05 01:23:38 compute-0 sudo[264801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:38 compute-0 podman[264803]: 2025-12-05 01:23:38.565135041 +0000 UTC m=+0.134909306 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7)
Dec 05 01:23:38 compute-0 python3.9[264804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:38 compute-0 sudo[264801]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:39 compute-0 sudo[264901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaoaglknvgxpdqiwfkfvrsqblbaqafep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897817.8290977-465-174681337786078/AnsiballZ_file.py'
Dec 05 01:23:39 compute-0 sudo[264901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:39 compute-0 python3.9[264903]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:39 compute-0 sudo[264901]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:39 compute-0 ceph-mon[192914]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:40 compute-0 sudo[265053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctireokkjnjljregsxapybqeqcotcmgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897819.7022023-478-34231755153877/AnsiballZ_file.py'
Dec 05 01:23:40 compute-0 sudo[265053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:40 compute-0 python3.9[265055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:23:40 compute-0 sudo[265053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:41 compute-0 ceph-mon[192914]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:41 compute-0 sudo[265218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpyzuyrzlffolyzgaewmvnlnrfuriygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897820.79878-486-36230523337332/AnsiballZ_stat.py'
Dec 05 01:23:41 compute-0 sudo[265218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:41 compute-0 podman[265179]: 2025-12-05 01:23:41.59025981 +0000 UTC m=+0.109721537 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:23:41 compute-0 python3.9[265224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:41 compute-0 sudo[265218]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:42 compute-0 sudo[265305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbcdlksdiiyzchfdhzcqqivqcbyclirq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897820.79878-486-36230523337332/AnsiballZ_file.py'
Dec 05 01:23:42 compute-0 sudo[265305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:42 compute-0 python3.9[265307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:42 compute-0 sudo[265305]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:42 compute-0 sshd-session[255690]: Connection closed by 192.168.122.30 port 41034
Dec 05 01:23:42 compute-0 sshd-session[255687]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:23:42 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 05 01:23:42 compute-0 systemd[1]: session-51.scope: Consumed 58.018s CPU time.
Dec 05 01:23:42 compute-0 systemd-logind[792]: Session 51 logged out. Waiting for processes to exit.
Dec 05 01:23:42 compute-0 systemd-logind[792]: Removed session 51.
Dec 05 01:23:43 compute-0 ceph-mon[192914]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:45 compute-0 ceph-mon[192914]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:23:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:46 compute-0 ceph-mon[192914]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:49 compute-0 sshd-session[265332]: Accepted publickey for zuul from 192.168.122.30 port 53926 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:23:49 compute-0 systemd-logind[792]: New session 52 of user zuul.
Dec 05 01:23:49 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 05 01:23:49 compute-0 sshd-session[265332]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:23:49 compute-0 ceph-mon[192914]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:50 compute-0 sudo[265486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcnuoiswwgbpjnghqlicrtlibmpcqour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897829.3582823-22-23854067828541/AnsiballZ_file.py'
Dec 05 01:23:50 compute-0 sudo[265486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:50 compute-0 python3.9[265488]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:50 compute-0 sudo[265486]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:51 compute-0 sudo[265638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audwyumtmdykavgiqwozepykgevzuskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897830.7374527-34-141551198185031/AnsiballZ_stat.py'
Dec 05 01:23:51 compute-0 sudo[265638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:51 compute-0 ceph-mon[192914]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:51 compute-0 python3.9[265640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:51 compute-0 sudo[265638]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:52 compute-0 sudo[265761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyghvayrohcdlqupkjmcedzhpqdtjqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897830.7374527-34-141551198185031/AnsiballZ_copy.py'
Dec 05 01:23:52 compute-0 sudo[265761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:52 compute-0 python3.9[265763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897830.7374527-34-141551198185031/.source.conf _original_basename=ceph.conf follow=False checksum=2f9fd2109b8acc302f3e55353e83658c9c265fc5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:52 compute-0 sudo[265761]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:53 compute-0 ceph-mon[192914]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:53 compute-0 sudo[265913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mscgrsyqmgyrvqvtybcrcycrxuqjloxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897832.9908364-34-60207153475353/AnsiballZ_stat.py'
Dec 05 01:23:53 compute-0 sudo[265913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:53 compute-0 python3.9[265915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:23:53 compute-0 sudo[265913]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:54 compute-0 sudo[266036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcogawaijhxyzmueofzqnzlpnrdqarwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897832.9908364-34-60207153475353/AnsiballZ_copy.py'
Dec 05 01:23:54 compute-0 sudo[266036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:23:54 compute-0 python3.9[266038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897832.9908364-34-60207153475353/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=1ccf2af1c4d9cd0d8c5f12e3a57b95f6f703bc49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:23:54 compute-0 sudo[266036]: pam_unix(sudo:session): session closed for user root
Dec 05 01:23:55 compute-0 sshd-session[265335]: Connection closed by 192.168.122.30 port 53926
Dec 05 01:23:55 compute-0 sshd-session[265332]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:23:55 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 05 01:23:55 compute-0 systemd-logind[792]: Session 52 logged out. Waiting for processes to exit.
Dec 05 01:23:55 compute-0 systemd[1]: session-52.scope: Consumed 4.993s CPU time.
Dec 05 01:23:55 compute-0 systemd-logind[792]: Removed session 52.
Dec 05 01:23:55 compute-0 ceph-mon[192914]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:23:57 compute-0 ceph-mon[192914]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:59 compute-0 ceph-mon[192914]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:23:59 compute-0 podman[266064]: 2025-12-05 01:23:59.717182144 +0000 UTC m=+0.116177546 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:23:59 compute-0 podman[266063]: 2025-12-05 01:23:59.730134313 +0000 UTC m=+0.139476092 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:23:59 compute-0 podman[158197]: time="2025-12-05T01:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:23:59 compute-0 podman[266065]: 2025-12-05 01:23:59.754814459 +0000 UTC m=+0.143113114 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6829 "" "Go-http-client/1.1"
Dec 05 01:24:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:00 compute-0 podman[266129]: 2025-12-05 01:24:00.699449142 +0000 UTC m=+0.110424486 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 01:24:00 compute-0 sshd-session[266148]: Accepted publickey for zuul from 192.168.122.30 port 48796 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:24:00 compute-0 systemd-logind[792]: New session 53 of user zuul.
Dec 05 01:24:00 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec 05 01:24:00 compute-0 sshd-session[266148]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:24:01 compute-0 ceph-mon[192914]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:02 compute-0 python3.9[266301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:24:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:03 compute-0 ceph-mon[192914]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:03 compute-0 sudo[266455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkqiqluxuxughzjkuykygfqmlubxgnqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897843.0432029-34-190768834303795/AnsiballZ_file.py'
Dec 05 01:24:03 compute-0 sudo[266455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:03 compute-0 python3.9[266457]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:24:03 compute-0 sudo[266455]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:04 compute-0 ceph-mon[192914]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:04 compute-0 sudo[266607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfuqjkzfjxciyicyvecejrlnzxlqqtxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897844.212412-34-116638561133244/AnsiballZ_file.py'
Dec 05 01:24:04 compute-0 sudo[266607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:04 compute-0 python3.9[266609]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:24:04 compute-0 sudo[266607]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:05 compute-0 podman[266715]: 2025-12-05 01:24:05.72772015 +0000 UTC m=+0.127446209 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, container_name=kepler, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec 05 01:24:06 compute-0 python3.9[266777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:24:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:07 compute-0 sudo[266927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asfujmosjztwunlkughggryfobsezdot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897846.3266065-57-247595937952655/AnsiballZ_seboolean.py'
Dec 05 01:24:07 compute-0 sudo[266927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:07 compute-0 python3.9[266929]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 05 01:24:07 compute-0 ceph-mon[192914]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:08 compute-0 sudo[266927]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:09 compute-0 sudo[267091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezojfnthmulrkdeotipedkjbtncwouba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897848.6848576-67-28094678615993/AnsiballZ_setup.py'
Dec 05 01:24:09 compute-0 sudo[267091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:09 compute-0 podman[267053]: 2025-12-05 01:24:09.329502348 +0000 UTC m=+0.164390515 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:24:09 compute-0 ceph-mon[192914]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:09 compute-0 python3.9[267101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:24:10 compute-0 sudo[267091]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:10 compute-0 sudo[267183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arudidzavbleyzqrnsbhhletjpmrqgzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897848.6848576-67-28094678615993/AnsiballZ_dnf.py'
Dec 05 01:24:10 compute-0 sudo[267183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:10 compute-0 python3.9[267185]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
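Stripped of module defaults, the dnf invocation above is a one-package install; the ansible.legacy.setup call just before it (filter=['ansible_pkg_mgr']) is the package-manager detection that routed the task to the dnf backend. A minimal equivalent task (name illustrative):

    - name: Install openvswitch          # illustrative name
      ansible.builtin.dnf:
        name: openvswitch
        state: present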
Dec 05 01:24:11 compute-0 ceph-mon[192914]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:12 compute-0 sudo[267183]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:12 compute-0 podman[267242]: 2025-12-05 01:24:12.681279325 +0000 UTC m=+0.095851232 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
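The node_exporter health-check record carries a config_data payload in the same format; re-rendered as YAML (layout mine, values verbatim from the log), it is dominated by collector tuning:

    node_exporter:
      image: quay.io/prometheus/node-exporter:v1.5.0
      restart: always
      recreate: true
      user: root
      privileged: true
      net: host
      ports:
        - "9100:9100"
      command:
        - --web.config.file=/etc/node_exporter/node_exporter.yaml
        - --web.disable-exporter-metrics
        - --collector.systemd
        - '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service'
        - --no-collector.dmi
        - --no-collector.entropy
        - --no-collector.thermal_zone
        - --no-collector.time
        - --no-collector.timex
        - --no-collector.uname
        - --no-collector.stat
        - --no-collector.hwmon
        - --no-collector.os
        - --no-collector.selinux
        - --no-collector.textfile
        - --no-collector.powersupplyclass
        - --no-collector.pressure
        - --no-collector.rapl
      environment:
        OS_ENDPOINT_TYPE: internal
      healthcheck:
        test: /openstack/healthcheck node_exporter
        mount: /var/lib/openstack/healthchecks/node_exporter
      volumes:
        - /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z
        - /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z
        - /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw
        - /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z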
Dec 05 01:24:13 compute-0 sudo[267358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyopbfbmwcngejthicwuxzjjcsaadqfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897852.5032933-79-167343325280423/AnsiballZ_systemd.py'
Dec 05 01:24:13 compute-0 sudo[267358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:13 compute-0 ceph-mon[192914]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:13 compute-0 python3.9[267360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
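The systemd invocation maps to an enable-and-start task (parameters from the log; name illustrative):

    - name: Enable and start Open vSwitch     # illustrative name
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started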
Dec 05 01:24:13 compute-0 sudo[267358]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:14 compute-0 sudo[267513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnhvcjbhzjxnbuwrbtnrjdyxdzobixb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897854.1850345-87-173675070869886/AnsiballZ_edpm_nftables_snippet.py'
Dec 05 01:24:14 compute-0 sudo[267513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:15 compute-0 python3[267515]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
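As with the other modules, the edpm_nftables_snippet call above reconstructs to a task of this shape; dest and state are from the log, and the content block is the rule YAML shown verbatim in the invocation, so only the first rule is repeated here:

    - name: Add OVN tunnel firewall rules            # illustrative name
      osp.edpm.edpm_nftables_snippet:
        dest: /var/lib/edpm-config/firewall/ovn.yaml
        state: present
        content: |
          - rule_name: 118 neutron vxlan networks
            rule:
              proto: udp
              dport: 4789
          # rules 119-121 (geneve, plus raw-table NOTRACK appends) exactly as logged above

Rules 120 and 121 append NOTRACK jumps to the raw table's OUTPUT and PREROUTING chains so Geneve traffic on UDP 6081 bypasses connection tracking, which is why rule 119 matches state UNTRACKED.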
Dec 05 01:24:15 compute-0 sudo[267513]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:15 compute-0 ceph-mon[192914]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:24:16
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control']
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:24:16 compute-0 sudo[267665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yygbfsqgphtosuxsyfxgylqisrhhuang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897855.6007073-96-51489218543789/AnsiballZ_file.py'
Dec 05 01:24:16 compute-0 sudo[267665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:24:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:16 compute-0 python3.9[267667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
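The directory those firewall snippets live in is created with a plain file task (path, ownership, and mode as logged):

    - name: Create the EDPM firewall snippet directory   # illustrative name
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: "0750"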
Dec 05 01:24:16 compute-0 sudo[267665]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:17 compute-0 sudo[267817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrwlqbifhlckaplbfzlhavzwmglmoemu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897856.6717286-104-39060902085663/AnsiballZ_stat.py'
Dec 05 01:24:17 compute-0 sudo[267817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:17 compute-0 ceph-mon[192914]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:17 compute-0 python3.9[267819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:17 compute-0 sudo[267817]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:18 compute-0 sudo[267895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzodhyybomxdszyeyrkmzbdmutjhfwes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897856.6717286-104-39060902085663/AnsiballZ_file.py'
Dec 05 01:24:18 compute-0 sudo[267895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:18 compute-0 python3.9[267897]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
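The stat/file pair around edpm-nftables-base.yaml is the remote half of a template deploy: _original_basename=base-rules.yaml.j2 names the source template, and the trailing file call only fixes up attributes. A plausible controller-side task looks like this (force is inferred from the logged file call and may instead be the action plugin's own behavior):

    - name: Deploy the base nftables rule set      # illustrative name
      ansible.builtin.template:
        src: base-rules.yaml.j2
        dest: /var/lib/edpm-config/firewall/edpm-nftables-base.yaml
        mode: "0644"
        force: false          # inferred from the logged file call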
Dec 05 01:24:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:18 compute-0 sudo[267895]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:19 compute-0 sudo[268047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsienvsuepqaecmkxtgfgcfqsgezrrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897858.7335942-116-44084053885153/AnsiballZ_stat.py'
Dec 05 01:24:19 compute-0 sudo[268047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:19 compute-0 python3.9[268049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:19 compute-0 ceph-mon[192914]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:19 compute-0 sudo[268047]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:20 compute-0 sudo[268126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctmtkihurjjetwaijnfsaotvlkjmyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897858.7335942-116-44084053885153/AnsiballZ_file.py'
Dec 05 01:24:20 compute-0 sudo[268126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:20 compute-0 python3.9[268128]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g5v6a4tq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:20 compute-0 sudo[268126]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:21 compute-0 sudo[268278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abdjwxgeunublvbpexfcxbenwnsrvxcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897860.8398569-128-174307751482561/AnsiballZ_stat.py'
Dec 05 01:24:21 compute-0 sudo[268278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:21 compute-0 ceph-mon[192914]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:21 compute-0 python3.9[268280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:21 compute-0 sudo[268278]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:22 compute-0 sudo[268356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpjakcgmrhlcmckfbmcsbcjfloyucog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897860.8398569-128-174307751482561/AnsiballZ_file.py'
Dec 05 01:24:22 compute-0 sudo[268356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:22 compute-0 python3.9[268358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
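The same stat-then-file pattern installs /etc/nftables/iptables.nft, this time readable by root only; since _original_basename carries no .j2 suffix, this was plausibly a copy rather than a template (an assumption):

    - name: Install /etc/nftables/iptables.nft        # illustrative name
      ansible.builtin.copy:   # copy vs. template is an assumption; see lead-in
        src: iptables.nft
        dest: /etc/nftables/iptables.nft
        owner: root
        group: root
        mode: "0600"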
Dec 05 01:24:22 compute-0 sudo[268356]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5455 writes, 783 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s
                                            Interval WAL: 5455 writes, 783 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 05 01:24:23 compute-0 sudo[268508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqutjmkajyqhrtipqjezvbqsrdkcxoph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897862.680693-141-99102625060311/AnsiballZ_command.py'
Dec 05 01:24:23 compute-0 sudo[268508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:23 compute-0 ceph-mon[192914]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:23 compute-0 python3.9[268510]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:24:23 compute-0 sudo[268508]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:24 compute-0 ceph-mon[192914]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:24 compute-0 sudo[268661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toaywvbcjokzrvexcqxmfmpgtmxylirs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897864.1175995-149-273499020043791/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:24:24 compute-0 sudo[268661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:25 compute-0 python3[268663]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:24:25 compute-0 sudo[268661]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:25 compute-0 sudo[268813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpunuyislpdskwshywcfzmcgbkrijhpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897865.347942-157-37560458992273/AnsiballZ_stat.py'
Dec 05 01:24:25 compute-0 sudo[268813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:26 compute-0 python3.9[268815]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:24:26 compute-0 sudo[268813]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:24:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:26 compute-0 sudo[268891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvvzbnqzsvijsmqsxlmbrnhjiujfhnpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897865.347942-157-37560458992273/AnsiballZ_file.py'
Dec 05 01:24:26 compute-0 sudo[268891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:26 compute-0 python3.9[268893]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:26 compute-0 sudo[268891]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:27 compute-0 ceph-mon[192914]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:27 compute-0 sudo[269043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdmualkkqapywuoipjuztkquwrsouzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897867.1034508-169-141753908942258/AnsiballZ_stat.py'
Dec 05 01:24:27 compute-0 sudo[269043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:27 compute-0 python3.9[269045]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:27 compute-0 sudo[269043]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:28 compute-0 sudo[269121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frggfkvdtlrwfuysqaebjoevcgsdbhqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897867.1034508-169-141753908942258/AnsiballZ_file.py'
Dec 05 01:24:28 compute-0 sudo[269121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:28 compute-0 python3.9[269123]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:28 compute-0 sudo[269121]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Cumulative writes: 6827 writes, 28K keys, 6827 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 6827 writes, 1147 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 6827 writes, 28K keys, 6827 commit groups, 1.0 writes per commit group, ingest: 19.51 MB, 0.03 MB/s
                                            Interval WAL: 6827 writes, 1147 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 05 01:24:29 compute-0 ceph-mon[192914]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:29 compute-0 sudo[269273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqgmwxhzclhqqodseqbvxsywwihkhmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897868.9355185-181-72729532823376/AnsiballZ_stat.py'
Dec 05 01:24:29 compute-0 sudo[269273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:29 compute-0 python3.9[269275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:29 compute-0 podman[158197]: time="2025-12-05T01:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:24:29 compute-0 sudo[269273]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
Dec 05 01:24:30 compute-0 sudo[269384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewwwdfplhroyxvsmpizlekhecqwqpjpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897868.9355185-181-72729532823376/AnsiballZ_file.py'
Dec 05 01:24:30 compute-0 sudo[269384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:30 compute-0 podman[269330]: 2025-12-05 01:24:30.239328797 +0000 UTC m=+0.101023676 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:24:30 compute-0 podman[269326]: 2025-12-05 01:24:30.259202078 +0000 UTC m=+0.118993684 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec 05 01:24:30 compute-0 podman[269333]: 2025-12-05 01:24:30.305958156 +0000 UTC m=+0.154983063 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:24:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:30 compute-0 python3.9[269406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:30 compute-0 sudo[269384]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 sudo[269519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:31 compute-0 sudo[269519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:31 compute-0 sudo[269519]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 sudo[269629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljifkrxrpxqwslffdlalhwyowokpjwhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897870.7364724-193-235665037792560/AnsiballZ_stat.py'
Dec 05 01:24:31 compute-0 sudo[269629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:31 compute-0 podman[269566]: 2025-12-05 01:24:31.302262455 +0000 UTC m=+0.137895349 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 01:24:31 compute-0 sudo[269581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:24:31 compute-0 sudo[269581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:31 compute-0 sudo[269581]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 sudo[269640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:31 compute-0 sudo[269640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:31 compute-0 sudo[269640]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:24:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:24:31 compute-0 python3.9[269637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:31 compute-0 sudo[269665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 01:24:31 compute-0 sudo[269665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:31 compute-0 sudo[269629]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 ceph-mon[192914]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:31 compute-0 sudo[269665]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:24:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:24:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:31 compute-0 sudo[269784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cakhyieuudkhhqttwniglsrxlvcawfqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897870.7364724-193-235665037792560/AnsiballZ_file.py'
Dec 05 01:24:31 compute-0 sudo[269784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:32 compute-0 sudo[269786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:32 compute-0 sudo[269786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:32 compute-0 sudo[269786]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:32 compute-0 python3.9[269787]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:32 compute-0 sudo[269784]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:32 compute-0 sudo[269812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:24:32 compute-0 sudo[269812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:32 compute-0 sudo[269812]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:32 compute-0 sudo[269857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:32 compute-0 sudo[269857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:32 compute-0 sudo[269857]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:32 compute-0 sudo[269889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:24:32 compute-0 sudo[269889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:32 compute-0 ceph-mon[192914]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:32 compute-0 sudo[270053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsitteyhewkfnywxiwjxropqvfhvzyiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897872.3781242-205-227602196840889/AnsiballZ_stat.py'
Dec 05 01:24:32 compute-0 sudo[270053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:33 compute-0 sudo[269889]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fc2fce19-d4c6-4095-a713-ef78f60e1453 does not exist
Dec 05 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 525d6ea9-0132-4da9-96b4-e1c760c0d5a9 does not exist
Dec 05 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 28956989-4073-4862-9d95-29fd9d056ba8 does not exist
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:24:33 compute-0 python3.9[270057]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:33 compute-0 sudo[270068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:33 compute-0 sudo[270068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:33 compute-0 sudo[270068]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:33 compute-0 sudo[270053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:33 compute-0 sudo[270095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:24:33 compute-0 sudo[270095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:33 compute-0 sudo[270095]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:33 compute-0 sudo[270143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:33 compute-0 sudo[270143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:33 compute-0 sudo[270143]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:33 compute-0 sudo[270191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:24:33 compute-0 sudo[270191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:33 compute-0 sudo[270243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igprjhcszyugvlpkvwgcrfyvdwhmymsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897872.3781242-205-227602196840889/AnsiballZ_file.py'
Dec 05 01:24:33 compute-0 sudo[270243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:24:33 compute-0 python3.9[270245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:33 compute-0 sudo[270243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.233305382 +0000 UTC m=+0.091229664 container create 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.194521495 +0000 UTC m=+0.052445847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:34 compute-0 systemd[1]: Started libpod-conmon-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope.
Dec 05 01:24:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.401855561 +0000 UTC m=+0.259779853 container init 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:24:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.429395336 +0000 UTC m=+0.287319618 container start 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.437449619 +0000 UTC m=+0.295373971 container attach 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:24:34 compute-0 stupefied_babbage[270362]: 167 167
Dec 05 01:24:34 compute-0 systemd[1]: libpod-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope: Deactivated successfully.
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.443074265 +0000 UTC m=+0.300998547 container died 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:24:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5073fe0cac13245b6930e8052a7f88b56a234751d21893a01f0640f521910ad2-merged.mount: Deactivated successfully.
Dec 05 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.524234078 +0000 UTC m=+0.382158340 container remove 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:24:34 compute-0 systemd[1]: libpod-conmon-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope: Deactivated successfully.
Dec 05 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.777980883 +0000 UTC m=+0.082994395 container create 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:24:34 compute-0 sudo[270487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycxozgyjuwwpexbezhcsfwdwnlmmgktz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897874.2513509-218-198321804919434/AnsiballZ_command.py'
Dec 05 01:24:34 compute-0 sudo[270487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.739370151 +0000 UTC m=+0.044383743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:34 compute-0 systemd[1]: Started libpod-conmon-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope.
Dec 05 01:24:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:34 compute-0 ceph-mon[192914]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.923700438 +0000 UTC m=+0.228713970 container init 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.937956414 +0000 UTC m=+0.242969956 container start 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.944745762 +0000 UTC m=+0.249759334 container attach 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:24:35 compute-0 python3.9[270489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:24:35 compute-0 sudo[270487]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Cumulative writes: 5557 writes, 24K keys, 5557 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                            Cumulative WAL: 5557 writes, 841 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 5557 writes, 24K keys, 5557 commit groups, 1.0 writes per commit group, ingest: 18.47 MB, 0.03 MB/s
                                            Interval WAL: 5557 writes, 841 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 600.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
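
The tables above are RocksDB's periodic statistics dump, one block per column family, as logged by the ceph-mon embedded key-value store. The "Block cache entry stats(count,size,portion)" lines are compact but regular enough to parse; a minimal Python sketch (the input line is copied from the dump above; the function name is hypothetical):

    import re

    # Matches entries like DataBlock(3,1.42 KB,0.000120534%) inside a
    # "Block cache entry stats(count,size,portion): ..." line.
    ENTRY = re.compile(r'(\w+)\((\d+),([\d.]+ [KMG]?B),([\d.eE+-]+)%\)')

    def cache_entry_stats(line):
        """Return {category: (count, size, portion_pct)} for one stats line."""
        return {name: (int(count), size, float(portion))
                for name, count, size, portion in ENTRY.findall(line)}

    line = ('Block cache entry stats(count,size,portion): '
            'DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) '
            'IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)')
    print(cache_entry_stats(line))
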
Dec 05 01:24:36 compute-0 elastic_lamarr[270492]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:24:36 compute-0 elastic_lamarr[270492]: --> relative data size: 1.0
Dec 05 01:24:36 compute-0 elastic_lamarr[270492]: --> All data devices are unavailable
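
The three ceph-volume lines above summarize drive selection: no physical disks were passed, three LVM volumes were, and all three were rejected as unavailable, which is the expected outcome when the logical volumes already carry OSD metadata (their ceph.* tags appear in the lvm list output further down). A rough availability check along the same lines using lsblk, as an illustrative heuristic only, not ceph-volume's actual logic:

    import json
    import subprocess

    def unavailable(dev):
        """A block device that already carries a filesystem or LVM
        signature (e.g. LVM2_member), or has child devices, is taken."""
        out = subprocess.run(
            ["lsblk", "--json", "-o", "NAME,TYPE,FSTYPE", dev],
            capture_output=True, text=True, check=True,
        ).stdout
        node = json.loads(out)["blockdevices"][0]
        return bool(node.get("fstype")) or bool(node.get("children"))

    # The loop devices backing ceph_vg0..2 in this deployment:
    for dev in ("/dev/loop3", "/dev/loop4", "/dev/loop5"):
        print(dev, "unavailable" if unavailable(dev) else "available")
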
Dec 05 01:24:36 compute-0 systemd[1]: libpod-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Deactivated successfully.
Dec 05 01:24:36 compute-0 systemd[1]: libpod-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Consumed 1.238s CPU time.
Dec 05 01:24:36 compute-0 podman[270447]: 2025-12-05 01:24:36.245596584 +0000 UTC m=+1.550610116 container died 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:24:36 compute-0 sudo[270689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmnnbqojxcjyjujwajtzsjdvuxhsycz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897875.3647864-226-222417934508013/AnsiballZ_blockinfile.py'
Dec 05 01:24:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1-merged.mount: Deactivated successfully.
Dec 05 01:24:36 compute-0 podman[270644]: 2025-12-05 01:24:36.289847792 +0000 UTC m=+0.133838836 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, release=1214.1726694543, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
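
The health_status=healthy field in the kepler event above is produced by the container's configured healthcheck ('/openstack/healthcheck kepler' per its config_data). The same check can also be triggered on demand; a one-off sketch (container name taken from the log line):

    import subprocess

    # Run the configured healthcheck once; podman exits 0 when the
    # container's health command reports healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "kepler"])
    print("healthy" if result.returncode == 0 else "unhealthy")
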
Dec 05 01:24:36 compute-0 sudo[270689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:36 compute-0 podman[270447]: 2025-12-05 01:24:36.334595375 +0000 UTC m=+1.639608897 container remove 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:24:36 compute-0 systemd[1]: libpod-conmon-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Deactivated successfully.
Dec 05 01:24:36 compute-0 sudo[270191]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:36 compute-0 sudo[270707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:36 compute-0 python3.9[270703]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
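
The blockinfile invocation above writes the four edpm include lines into /etc/sysconfig/nftables.conf between ANSIBLE MANAGED BLOCK markers, and its validate=nft -c -f %s argument means the edit is kept only if nft can parse the resulting file. A minimal standalone sketch of that same check-before-commit step (not the module's implementation):

    import subprocess
    import sys

    def nft_check(path):
        """Dry-run parse a ruleset with `nft -c -f`; True if it is valid."""
        result = subprocess.run(["nft", "-c", "-f", path],
                                capture_output=True, text=True)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        sys.exit(0 if nft_check("/etc/sysconfig/nftables.conf") else 1)
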
Dec 05 01:24:36 compute-0 sudo[270707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:36 compute-0 sudo[270707]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:36 compute-0 sudo[270689]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:36 compute-0 sudo[270732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:24:36 compute-0 sudo[270732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:36 compute-0 sudo[270732]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:36 compute-0 sudo[270781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:36 compute-0 sudo[270781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:36 compute-0 sudo[270781]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:36 compute-0 sudo[270806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:24:36 compute-0 sudo[270806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.332826207 +0000 UTC m=+0.087089199 container create bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:24:37 compute-0 sudo[271004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olepkmcmljxjxxymzrgvvsrenfovkiap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897876.8184114-235-98580473328962/AnsiballZ_command.py'
Dec 05 01:24:37 compute-0 sudo[271004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.2947628 +0000 UTC m=+0.049025802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:37 compute-0 systemd[1]: Started libpod-conmon-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope.
Dec 05 01:24:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.474261023 +0000 UTC m=+0.228524055 container init bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:24:37 compute-0 ceph-mon[192914]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.49467726 +0000 UTC m=+0.248940252 container start bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:24:37 compute-0 great_brown[271009]: 167 167
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.507450244 +0000 UTC m=+0.261713296 container attach bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:24:37 compute-0 systemd[1]: libpod-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope: Deactivated successfully.
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.511596059 +0000 UTC m=+0.265859041 container died bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:24:37 compute-0 python3.9[271006]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:24:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 01:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-337e26872be0656da58bb823a3d0d73cde3739f107584a4143e12b13b5a10e82-merged.mount: Deactivated successfully.
Dec 05 01:24:37 compute-0 sudo[271004]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.594530412 +0000 UTC m=+0.348793404 container remove bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:24:37 compute-0 systemd[1]: libpod-conmon-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope: Deactivated successfully.
Dec 05 01:24:37 compute-0 podman[271056]: 2025-12-05 01:24:37.846090445 +0000 UTC m=+0.077537413 container create f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:24:37 compute-0 podman[271056]: 2025-12-05 01:24:37.818431107 +0000 UTC m=+0.049878105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:37 compute-0 systemd[1]: Started libpod-conmon-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope.
Dec 05 01:24:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
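
The four kernel lines above flag that these xfs mounts were created without the bigtime feature, so their inode timestamps top out at 0x7fffffff seconds after the Unix epoch. That limit works out as expected:

    from datetime import datetime, timezone

    # 0x7fffffff == 2147483647, the largest signed 32-bit epoch value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
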
Dec 05 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.030234477 +0000 UTC m=+0.261681475 container init f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.063802329 +0000 UTC m=+0.295249297 container start f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.0699584 +0000 UTC m=+0.301405408 container attach f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:24:38 compute-0 sudo[271202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyrpnnirlebypimtqrdrmvoseytcixzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897877.8609867-243-20784962804112/AnsiballZ_stat.py'
Dec 05 01:24:38 compute-0 sudo[271202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:38 compute-0 python3.9[271204]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:24:38 compute-0 sudo[271202]: pam_unix(sudo:session): session closed for user root
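
The container output that follows is the JSON report from the ceph-volume lvm list --format json call launched above: a map of OSD id to its logical volumes, devices, and ceph.* tags. A minimal sketch for reducing it to an OSD-to-device mapping (it assumes the JSON has been captured to a file; the path is hypothetical):

    import json

    def osd_devices(path):
        """Map each OSD id to (lv_path, devices) pairs from the
        `ceph-volume lvm list --format json` report."""
        with open(path) as f:
            report = json.load(f)
        return {osd_id: [(lv["lv_path"], lv["devices"]) for lv in lvs]
                for osd_id, lvs in report.items()}

    for osd_id, lvs in sorted(osd_devices("lvm-list.json").items()):
        for lv_path, devices in lvs:
            print(f"osd.{osd_id}: {lv_path} on {', '.join(devices)}")
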
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]: {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     "0": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "devices": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "/dev/loop3"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             ],
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_name": "ceph_lv0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_size": "21470642176",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "name": "ceph_lv0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "tags": {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_name": "ceph",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.crush_device_class": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.encrypted": "0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_id": "0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.vdo": "0"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             },
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "vg_name": "ceph_vg0"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         }
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     ],
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     "1": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "devices": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "/dev/loop4"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             ],
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_name": "ceph_lv1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_size": "21470642176",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "name": "ceph_lv1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "tags": {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_name": "ceph",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.crush_device_class": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.encrypted": "0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_id": "1",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.vdo": "0"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             },
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "vg_name": "ceph_vg1"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         }
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     ],
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     "2": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "devices": [
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "/dev/loop5"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             ],
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_name": "ceph_lv2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_size": "21470642176",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "name": "ceph_lv2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "tags": {
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.cluster_name": "ceph",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.crush_device_class": "",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.encrypted": "0",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osd_id": "2",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:                 "ceph.vdo": "0"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             },
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "type": "block",
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:             "vg_name": "ceph_vg2"
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:         }
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]:     ]
Dec 05 01:24:38 compute-0 mystifying_dewdney[271108]: }
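The JSON block above is the payload of a containerized `ceph-volume lvm list --format json` run, relayed line by line through the mystifying_dewdney container's stdout: a map from OSD id to the LVM records backing it. A minimal sketch of extracting the id-to-device mapping, assuming the payload has been saved to a local file (the filename and the stripping of the journald prefix are illustrative, not part of the log):

    import json

    # Assumed: the JSON printed above, saved verbatim to lvm_list.json after
    # stripping the "Dec 05 ... mystifying_dewdney[271108]: " prefix from
    # each journal line.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            # Each record names both the logical volume and the PVs under it.
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")

Against the data above this prints three lines, osd.0 on /dev/loop3 through osd.2 on /dev/loop5.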
Dec 05 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.945187006 +0000 UTC m=+1.176633994 container died f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:24:38 compute-0 systemd[1]: libpod-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope: Deactivated successfully.
Dec 05 01:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1-merged.mount: Deactivated successfully.
Dec 05 01:24:39 compute-0 podman[271056]: 2025-12-05 01:24:39.066935386 +0000 UTC m=+1.298382354 container remove f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:24:39 compute-0 systemd[1]: libpod-conmon-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope: Deactivated successfully.
Dec 05 01:24:39 compute-0 sudo[270806]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:39 compute-0 sudo[271292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:39 compute-0 sudo[271292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:39 compute-0 sudo[271292]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:39 compute-0 sudo[271344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:24:39 compute-0 sudo[271344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:39 compute-0 sudo[271344]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:39 compute-0 ceph-mon[192914]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:39 compute-0 podman[271388]: 2025-12-05 01:24:39.529088086 +0000 UTC m=+0.120690032 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm)
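The health_status event above shows podman's periodic healthcheck for openstack_network_exporter reporting healthy with a failing streak of 0; the embedded config_data is the edpm_ansible-managed container definition. The same state can be read back on demand with `podman inspect`; a generic sketch (the container name comes from the log, everything else is a common pattern rather than something this host is known to run):

    import json
    import subprocess

    # Ask podman for the recorded health state of the exporter container.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "openstack_network_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health.get("FailingStreak", 0))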
Dec 05 01:24:39 compute-0 sudo[271459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqrmnthhdojijxrwchpcepkyymqxayad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897879.00161-252-82435868879662/AnsiballZ_file.py'
Dec 05 01:24:39 compute-0 sudo[271399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:39 compute-0 sudo[271399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:39 compute-0 sudo[271459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:39 compute-0 sudo[271399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:39 compute-0 sudo[271468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:24:39 compute-0 sudo[271468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:39 compute-0 python3.9[271467]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:39 compute-0 sudo[271459]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.22986224 +0000 UTC m=+0.087626394 container create 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.19239571 +0000 UTC m=+0.050159884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:40 compute-0 systemd[1]: Started libpod-conmon-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope.
Dec 05 01:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.378520157 +0000 UTC m=+0.236284381 container init 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.396752793 +0000 UTC m=+0.254516947 container start 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.403473339 +0000 UTC m=+0.261237553 container attach 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 05 01:24:40 compute-0 keen_hellman[271570]: 167 167
Dec 05 01:24:40 compute-0 systemd[1]: libpod-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope: Deactivated successfully.
Dec 05 01:24:40 compute-0 conmon[271570]: conmon 976785ceb889088c297c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope/container/memory.events
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.408512729 +0000 UTC m=+0.266276883 container died 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:24:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-729f0b32ee8f8109cde1ab3c0e677470a7d120518369eba258594edc86dc97af-merged.mount: Deactivated successfully.
Dec 05 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.478257135 +0000 UTC m=+0.336021269 container remove 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:24:40 compute-0 systemd[1]: libpod-conmon-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope: Deactivated successfully.
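keen_hellman lived for well under a second and printed only `167 167` before podman tore it down; 167:167 is the ceph user and group inside the image, and cephadm runs short probes like this to learn what ownership to apply to host paths. A hypothetical reconstruction of such a probe, since the actual command line is not captured in this journal:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical uid/gid probe: stat a ceph-owned path inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # expected: 167 167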
Dec 05 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.681091337 +0000 UTC m=+0.048651054 container create 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:24:40 compute-0 systemd[1]: Started libpod-conmon-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope.
Dec 05 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.664031169 +0000 UTC m=+0.031590916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
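The four xfs notices above are the kernel flagging that these overlay (re)mounts carry classic 32-bit inode timestamps, so they can only represent times up to 0x7fffffff seconds after the epoch. Converting that cap confirms the familiar year-2038 boundary:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel message is the signed 32-bit maximum.
    cap = 0x7FFFFFFF
    print(cap)                                        # 2147483647
    print(datetime.fromtimestamp(cap, timezone.utc))  # 2038-01-19 03:14:07+00:00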
Dec 05 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.868428036 +0000 UTC m=+0.235987843 container init 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.888991222 +0000 UTC m=+0.256550979 container start 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.897614044 +0000 UTC m=+0.265173781 container attach 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:24:41 compute-0 python3.9[271739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:24:41 compute-0 ceph-mon[192914]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]: {
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_id": 0,
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "type": "bluestore"
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     },
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_id": 1,
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "type": "bluestore"
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     },
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_id": 2,
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:         "type": "bluestore"
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]:     }
Dec 05 01:24:42 compute-0 vibrant_brattain[271690]: }
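vibrant_brattain carries the output of the `ceph-volume raw list --format json` invocation requested through cephadm at 01:24:39 (sudo line above): the same three OSDs, but keyed by osd_uuid and reporting the device-mapper path rather than the LV path. A sketch of cross-checking it against the earlier lvm listing, under the same save-to-file assumption as before:

    import json

    # Assumed local captures of the two JSON payloads shown in this log.
    with open("lvm_list.json") as f:
        by_osd_id = json.load(f)        # "0" -> [LV records]
    with open("raw_list.json") as f:
        by_osd_fsid = json.load(f)      # osd_fsid -> raw record

    for osd_id, lvs in by_osd_id.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        raw = by_osd_fsid[fsid]
        assert raw["osd_id"] == int(osd_id)
        # /dev/ceph_vg0/ceph_lv0 and /dev/mapper/ceph_vg0-ceph_lv0 are two
        # names for the same logical volume.
        print(f"osd.{osd_id}: {lvs[0]['lv_path']} == {raw['device']} "
              f"({raw['type']})")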
Dec 05 01:24:42 compute-0 systemd[1]: libpod-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Deactivated successfully.
Dec 05 01:24:42 compute-0 systemd[1]: libpod-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Consumed 1.256s CPU time.
Dec 05 01:24:42 compute-0 podman[271651]: 2025-12-05 01:24:42.15569833 +0000 UTC m=+1.523258087 container died 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1-merged.mount: Deactivated successfully.
Dec 05 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:42 compute-0 podman[271651]: 2025-12-05 01:24:42.230235527 +0000 UTC m=+1.597795244 container remove 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:24:42 compute-0 systemd[1]: libpod-conmon-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Deactivated successfully.
Dec 05 01:24:42 compute-0 sudo[271468]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:24:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:24:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c3fd36d6-2ea0-4b9b-8694-39fe2f0067ed does not exist
Dec 05 01:24:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9f10def9-e592-43cd-ab6e-8b8ddb65559c does not exist
Dec 05 01:24:42 compute-0 sudo[271860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:24:42 compute-0 sudo[271860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:42 compute-0 sudo[271860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:42 compute-0 sudo[271909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:24:42 compute-0 sudo[271909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:24:42 compute-0 sudo[271909]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.545 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
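[editor's note] Each "Registering pollster ... via executor" entry hands a pollster to one shared `concurrent.futures.thread.ThreadPoolExecutor` (same object address in every line), carrying the polling cache, per-meter history, and discovery cache along. A minimal sketch of that fan-out, assuming nothing about ceilometer's internals beyond what the entries show:

```python
# Illustrative fan-out of pollsters over a shared thread pool; the later
# "Finished processing pollster [...]" entries correspond to futures completing.
from concurrent.futures import ThreadPoolExecutor, as_completed

def poll(name, discovery_cache):
    # Real pollsters would query libvirt here; this just echoes the cache.
    return name, discovery_cache.get("local_instances", [])

discovery_cache = {"local_instances": []}   # nothing discovered this cycle
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(poll, n, discovery_cache)
               for n in ("cpu", "memory.usage", "disk.device.read.bytes")]
    for fut in as_completed(futures):
        print("Finished processing pollster [%s]" % fut.result()[0])
```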
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.570 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:24:42 compute-0 sudo[271980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvdtnrbtwhpoxygoyjfjnpotrouqiin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897882.1194615-292-61578221065880/AnsiballZ_command.py'
Dec 05 01:24:42 compute-0 sudo[271980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:42 compute-0 python3.9[271982]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:24:42 compute-0 ovs-vsctl[271983]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 05 01:24:42 compute-0 sudo[271980]: pam_unix(sudo:session): session closed for user root
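[editor's note] The Ansible task above (and the resulting ovs-vsctl call logged by vsctl itself) writes the OVN chassis settings into the `external_ids` column of the Open vSwitch `Open_vSwitch` record; ovn-controller reads these to find the southbound DB, the encapsulation IP/type, and the bridge mappings. A sketch that rebuilds the same command from a dict (values copied from the log; wrapping it in Python is only for illustration):

```python
# Reconstruct the ovs-vsctl invocation seen above from a settings dict.
import subprocess

external_ids = {
    "hostname": "compute-0.ctlplane.example.com",
    "ovn-bridge": "br-int",
    "ovn-bridge-mappings": "datacentre:br-ex",
    "ovn-encap-ip": "172.19.0.100",
    "ovn-encap-type": "geneve",
    "ovn-remote": "ssl:ovsdbserver-sb.openstack.svc:6642",
    "ovn-monitor-all": "True",
}
cmd = ["ovs-vsctl", "set", "open", "."] + [
    f"external_ids:{key}={value}" for key, value in external_ids.items()
]
print(" ".join(cmd))   # dry run; subprocess.run(cmd, check=True) would apply it
```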
Dec 05 01:24:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:24:43 compute-0 ceph-mon[192914]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:43 compute-0 podman[272101]: 2025-12-05 01:24:43.723847151 +0000 UTC m=+0.126627808 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:24:43 compute-0 sudo[272157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eteevntrqjjnadafdzmsnpveizhqmadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897883.179567-301-183397343789758/AnsiballZ_command.py'
Dec 05 01:24:43 compute-0 sudo[272157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:44 compute-0 python3.9[272159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:24:44 compute-0 sudo[272157]: pam_unix(sudo:session): session closed for user root
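[editor's note] This task checks that the OVS database has a Manager record configured. The `set -o pipefail` prefix matters: without it, the pipeline's exit status would come from `grep` alone and an ovs-vsctl failure would go unnoticed. An equivalent check in Python:

```python
# Equivalent of `set -o pipefail; ovs-vsctl show | grep -q "Manager"`:
# fail if ovs-vsctl itself errors, then report whether a Manager exists.
import subprocess

proc = subprocess.run(["ovs-vsctl", "show"], capture_output=True, text=True)
proc.check_returncode()                  # pipefail equivalent for the producer
has_manager = "Manager" in proc.stdout   # grep -q equivalent
print("Manager configured:", has_manager)
```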
Dec 05 01:24:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:45 compute-0 python3.9[272312]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:24:45 compute-0 ceph-mon[192914]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:24:46 compute-0 sudo[272464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psohimrydrmfljecerexadohlabyuvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897885.7652268-319-141909289392307/AnsiballZ_file.py'
Dec 05 01:24:46 compute-0 sudo[272464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:46 compute-0 python3.9[272466]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:24:46 compute-0 sudo[272464]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:47 compute-0 sudo[272616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztxqdkqpwnyzenmefoigmjalarqyufdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897886.8355699-327-24395449441418/AnsiballZ_stat.py'
Dec 05 01:24:47 compute-0 sudo[272616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:47 compute-0 ceph-mon[192914]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:47 compute-0 python3.9[272618]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:47 compute-0 sudo[272616]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:48 compute-0 sudo[272694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqroxpmaoxzolbtthxrluqgvemvlfxgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897886.8355699-327-24395449441418/AnsiballZ_file.py'
Dec 05 01:24:48 compute-0 sudo[272694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:48 compute-0 python3.9[272696]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:24:48 compute-0 sudo[272694]: pam_unix(sudo:session): session closed for user root
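[editor's note] The stat-then-file pairs here are Ansible's usual copy flow: hash the destination, rewrite only when the checksum differs, then enforce owner/group/mode. A stdlib sketch of that idempotent pattern, with the destination path taken from the log and the logic purely illustrative:

```python
# Idempotent "install a script" pattern matching the stat + file pair above:
# compare SHA-1 checksums, then enforce mode 0700.
import hashlib, os, shutil

def install(src, dest, mode=0o700):
    def sha1(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()
    if not os.path.exists(dest) or sha1(src) != sha1(dest):
        shutil.copyfile(src, dest)   # only rewrite on checksum mismatch
    os.chmod(dest, mode)             # always enforce permissions

# install("edpm-container-shutdown", "/var/local/libexec/edpm-container-shutdown")
```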
Dec 05 01:24:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:49 compute-0 sudo[272846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeuiyfxalbcfbjapgcoprijusvykizmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897888.5688462-327-3618097814513/AnsiballZ_stat.py'
Dec 05 01:24:49 compute-0 sudo[272846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:49 compute-0 python3.9[272848]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:49 compute-0 sudo[272846]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:49 compute-0 ceph-mon[192914]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:49 compute-0 sudo[272925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdawxqwvufsuebsccvqzmoojsnyhfai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897888.5688462-327-3618097814513/AnsiballZ_file.py'
Dec 05 01:24:49 compute-0 sudo[272925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:50 compute-0 python3.9[272927]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:24:50 compute-0 sudo[272925]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:50 compute-0 sudo[273077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfelgjabsvblasjimzozhpqbvzlasopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897890.3065722-350-193237287006314/AnsiballZ_file.py'
Dec 05 01:24:50 compute-0 sudo[273077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:51 compute-0 python3.9[273079]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:51 compute-0 sudo[273077]: pam_unix(sudo:session): session closed for user root
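[editor's note] Note the `mode=420` in the invocation above: the playbook evidently wrote the mode as an unquoted YAML number, so the module received the integer 420, which happens to be 0o644 and is applied as those raw permission bits (rw-r--r--). Quoting the mode as '0644' avoids relying on that coincidence. A quick demonstration:

```python
# Decimal 420 is exactly octal 0644, which is why mode=420 still yields
# rw-r--r-- on /etc/systemd/system-preset.
assert 420 == 0o644
print(oct(420))   # -> 0o644
```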
Dec 05 01:24:51 compute-0 ceph-mon[192914]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:52 compute-0 sudo[273229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avizvvnwjdzxcbadswlfowfsguqmbejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897891.4207838-358-100557542969807/AnsiballZ_stat.py'
Dec 05 01:24:52 compute-0 sudo[273229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:52 compute-0 python3.9[273231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:52 compute-0 sudo[273229]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:52 compute-0 sudo[273307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rznwrtqxeobpermiovnkvmfeeqocgyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897891.4207838-358-100557542969807/AnsiballZ_file.py'
Dec 05 01:24:52 compute-0 sudo[273307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:52 compute-0 python3.9[273309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:52 compute-0 sudo[273307]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:53 compute-0 ceph-mon[192914]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:53 compute-0 sudo[273459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfvxbsjbwrmmexbqyqncbvhjgjqifci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897893.253799-370-233194727393135/AnsiballZ_stat.py'
Dec 05 01:24:53 compute-0 sudo[273459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:54 compute-0 python3.9[273461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:54 compute-0 sudo[273459]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:54 compute-0 sudo[273537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujlgzfpeqxeuqhwbzjtwijhqpwjbprc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897893.253799-370-233194727393135/AnsiballZ_file.py'
Dec 05 01:24:54 compute-0 sudo[273537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:54 compute-0 python3.9[273539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:54 compute-0 sudo[273537]: pam_unix(sudo:session): session closed for user root
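[editor's note] Preset files under /etc/systemd/system-preset declare whether units default to enabled; presumably 91-edpm-container-shutdown.preset contains an `enable edpm-container-shutdown.service` line, though the log does not show its contents. Once the preset is in place, the policy can be applied explicitly, as this sketch does:

```python
# Apply the vendor preset for the unit the preset file above targets.
# Assumes the preset enables edpm-container-shutdown.service (not shown in log).
import subprocess

subprocess.run(
    ["systemctl", "preset", "edpm-container-shutdown.service"],
    check=True,
)
```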
Dec 05 01:24:55 compute-0 ceph-mon[192914]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:55 compute-0 sudo[273689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzoxomgeckudpzxyqiuuxpmwytpqcdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897895.0803657-382-250696764230129/AnsiballZ_systemd.py'
Dec 05 01:24:55 compute-0 sudo[273689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:55 compute-0 python3.9[273691]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:24:55 compute-0 systemd[1]: Reloading.
Dec 05 01:24:56 compute-0 systemd-rc-local-generator[273712]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:24:56 compute-0 systemd-sysv-generator[273720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:24:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:56 compute-0 sudo[273689]: pam_unix(sudo:session): session closed for user root
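[editor's note] The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) corresponds to the CLI sequence below, which also explains the "Reloading." entry and why the rc-local/sysv generator warnings are re-emitted on every reload:

```python
# CLI equivalent of the ansible.builtin.systemd invocation above:
# reload unit files, then enable and start the service in one step.
import subprocess

for cmd in (
    ["systemctl", "daemon-reload"],
    ["systemctl", "enable", "--now", "edpm-container-shutdown.service"],
):
    subprocess.run(cmd, check=True)
```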
Dec 05 01:24:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:24:57 compute-0 sudo[273878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxcqpdlkawqifwhylehndpeagpdknoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897896.9080515-390-24420121650867/AnsiballZ_stat.py'
Dec 05 01:24:57 compute-0 sudo[273878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:57 compute-0 ceph-mon[192914]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:57 compute-0 python3.9[273880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:57 compute-0 sudo[273878]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:58 compute-0 sudo[273956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmcahrfoxhnrbdojiqbbuxoysulgbpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897896.9080515-390-24420121650867/AnsiballZ_file.py'
Dec 05 01:24:58 compute-0 sudo[273956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:58 compute-0 python3.9[273958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:24:58 compute-0 sudo[273956]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:59 compute-0 sudo[274108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trcfovqbehlwjtghcryefcpdngposiqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897898.7764845-402-278940396654894/AnsiballZ_stat.py'
Dec 05 01:24:59 compute-0 sudo[274108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:24:59 compute-0 python3.9[274110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:24:59 compute-0 sudo[274108]: pam_unix(sudo:session): session closed for user root
Dec 05 01:24:59 compute-0 ceph-mon[192914]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:24:59 compute-0 podman[158197]: time="2025-12-05T01:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec 05 01:25:00 compute-0 sudo[274186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iczpjdhcpiqmdzgzfconsmsqdekbbcsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897898.7764845-402-278940396654894/AnsiballZ_file.py'
Dec 05 01:25:00 compute-0 sudo[274186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:00 compute-0 python3.9[274188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:25:00 compute-0 sudo[274186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:00 compute-0 podman[274237]: 2025-12-05 01:25:00.706470356 +0000 UTC m=+0.112263536 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:25:00 compute-0 podman[274231]: 2025-12-05 01:25:00.734025938 +0000 UTC m=+0.128882702 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:25:00 compute-0 podman[274238]: 2025-12-05 01:25:00.754266975 +0000 UTC m=+0.151081484 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
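[editor's note] Each health_status event above is podman executing the container's configured healthcheck test (the '/openstack/healthcheck ...' command in config_data) on its timer; exit status 0 keeps health_status=healthy and the failing streak at 0. The same check can be run on demand:

```python
# Run a container's configured healthcheck on demand, mirroring the timed
# runs that produce the health_status events above.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else "unhealthy", result.stdout)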
Dec 05 01:25:01 compute-0 sudo[274401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpjgpcexbigkrajcheypeocxqfiwqko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897900.5948308-414-173702353169647/AnsiballZ_systemd.py'
Dec 05 01:25:01 compute-0 sudo[274401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:01 compute-0 python3.9[274403]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
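[editor's note] These exporter errors look expected on this node: `dpif-netdev/pmd-perf-show` and `pmd-rxq-show` only apply to a userspace (netdev/DPDK) datapath, which a kernel-datapath compute host does not have, and ovn-northd/ovsdb-server do not run on computes, so their control sockets are absent. The exporter is effectively issuing calls like this sketch:

```python
# What the exporter is effectively doing; on a kernel-datapath compute
# node the appctl command fails exactly as in the log.
import subprocess

proc = subprocess.run(
    ["ovs-appctl", "dpif-netdev/pmd-perf-show"],
    capture_output=True, text=True,
)
if proc.returncode != 0:
    print("ERROR:", (proc.stderr or proc.stdout).strip())
```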
Dec 05 01:25:01 compute-0 systemd[1]: Reloading.
Dec 05 01:25:01 compute-0 systemd-sysv-generator[274446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:25:01 compute-0 systemd-rc-local-generator[274442]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:25:01 compute-0 podman[274405]: 2025-12-05 01:25:01.588502837 +0000 UTC m=+0.124125109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Dec 05 01:25:01 compute-0 ceph-mon[192914]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:01 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 01:25:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 01:25:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 01:25:01 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 01:25:02 compute-0 sudo[274401]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:02 compute-0 ceph-mon[192914]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:02 compute-0 sudo[274612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xenxdmbwliktpgfallzkrsiwkejlpryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897902.3598382-424-29360126972872/AnsiballZ_file.py'
Dec 05 01:25:02 compute-0 sudo[274612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:03 compute-0 python3.9[274614]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:03 compute-0 sudo[274612]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:04 compute-0 sudo[274764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usysdqiaazpbsgyrgsjnndhezcxyqjey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897903.4846897-432-279536733203038/AnsiballZ_stat.py'
Dec 05 01:25:04 compute-0 sudo[274764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:04 compute-0 python3.9[274766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:04 compute-0 sudo[274764]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:04 compute-0 sudo[274842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkvauyartxbrxsqkcvgiwcddplgqcsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897903.4846897-432-279536733203038/AnsiballZ_file.py'
Dec 05 01:25:04 compute-0 sudo[274842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:04 compute-0 python3.9[274844]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:04 compute-0 sudo[274842]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:05 compute-0 ceph-mon[192914]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:05 compute-0 sudo[274994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mskmqfnojzuvxeeeisyoqscdgsxvjbqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897905.44214-446-238987571897478/AnsiballZ_file.py'
Dec 05 01:25:05 compute-0 sudo[274994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:06 compute-0 python3.9[274996]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:06 compute-0 sudo[274994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:06 compute-0 podman[275039]: 2025-12-05 01:25:06.745820001 +0000 UTC m=+0.153476091 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public)
Dec 05 01:25:07 compute-0 sudo[275164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycvvgltrzdhqjvctkssnhdawjhulxaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897906.5907588-454-167100757444109/AnsiballZ_stat.py'
Dec 05 01:25:07 compute-0 sudo[275164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:07 compute-0 python3.9[275166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:07 compute-0 sudo[275164]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:07 compute-0 ceph-mon[192914]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:07 compute-0 sudo[275242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbqvpppmzwilixqwmdszmxavgjbmitj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897906.5907588-454-167100757444109/AnsiballZ_file.py'
Dec 05 01:25:07 compute-0 sudo[275242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:07 compute-0 python3.9[275244]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.zn4mekf4 recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:25:08 compute-0 sudo[275242]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:08 compute-0 sudo[275394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awzauztlywkvzyrsxyevlyanmzmqrieu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897908.2737923-466-173930234370305/AnsiballZ_file.py'
Dec 05 01:25:08 compute-0 sudo[275394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:09 compute-0 python3.9[275396]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:25:09 compute-0 sudo[275394]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:09 compute-0 ceph-mon[192914]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:10 compute-0 podman[275520]: 2025-12-05 01:25:10.021743438 +0000 UTC m=+0.125494307 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible)
Dec 05 01:25:10 compute-0 sudo[275563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tydonqjyfrozqrgqnnraocqmevkknqbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897909.4369044-474-220305546783314/AnsiballZ_stat.py'
Dec 05 01:25:10 compute-0 sudo[275563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:10 compute-0 sudo[275563]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:10 compute-0 sudo[275643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jopywunbcltckxodhfqedfxiblxzefic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897909.4369044-474-220305546783314/AnsiballZ_file.py'
Dec 05 01:25:10 compute-0 sudo[275643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:10 compute-0 sudo[275643]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:11 compute-0 ceph-mon[192914]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:12 compute-0 sudo[275795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnboeypaolxlqxdntcrzdwsmmfwlorxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897911.406919-488-11338615169212/AnsiballZ_container_config_data.py'
Dec 05 01:25:12 compute-0 sudo[275795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:12 compute-0 python3.9[275797]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 05 01:25:12 compute-0 sudo[275795]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:13 compute-0 sudo[275947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drtyyvhivjzmxavohqedgvczwcbgjsuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897912.7149599-497-249834494870600/AnsiballZ_container_config_hash.py'
Dec 05 01:25:13 compute-0 sudo[275947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:13 compute-0 ceph-mon[192914]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:13 compute-0 python3.9[275949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:25:13 compute-0 sudo[275947]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:14 compute-0 podman[276050]: 2025-12-05 01:25:14.727789739 +0000 UTC m=+0.134059127 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:25:14 compute-0 sudo[276123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srobqdqkbfnpciwinqxzafnmoikeppuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897914.1082993-506-193951684929826/AnsiballZ_podman_container_info.py'
Dec 05 01:25:14 compute-0 sudo[276123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:14 compute-0 python3.9[276125]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 01:25:15 compute-0 sudo[276123]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:15 compute-0 ceph-mon[192914]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:25:16
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes']
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:25:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:17 compute-0 sudo[276300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvsdrqhqdhdluwsqgcyiemczcyxtfdyc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897916.2111647-519-64722671195639/AnsiballZ_edpm_container_manage.py'
Dec 05 01:25:17 compute-0 sudo[276300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:17 compute-0 python3[276302]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:25:17 compute-0 ceph-mon[192914]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:17 compute-0 python3[276302]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",
                                                     "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T06:38:47.246477714Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 345722821,
                                                     "VirtualSize": 345722821,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                               "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                               "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",
                                                               "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T04:02:36.223494528Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:36.223562059Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:39.054452717Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025707917Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025744608Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025767729Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025791379Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.02581523Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025867611Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.469442331Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:02.029095017Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:05.672474685Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.113425253Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.532320725Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.370061347Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.805172373Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.259306372Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.625948784Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.028304824Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.423316076Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.801219631Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.239187116Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.70996597Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.147342611Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.5739488Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.006975065Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.421255505Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.066694755Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.475695836Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.8971372Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:18.542651107Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622503041Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622561802Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622578342Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622594423Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:22.080892529Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:22.759131427Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:13:25.258260855Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:13:28.025145079Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:13.535675197Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:47.244104142Z",
                                                               "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:38:48.759416475Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 05 01:25:17 compute-0 sudo[276300]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:18 compute-0 sudo[276509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enmiqwzvcvjjwwjqcwqcowrmvcxwlcsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897918.0894759-527-82174710690854/AnsiballZ_stat.py'
Dec 05 01:25:18 compute-0 sudo[276509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:19 compute-0 ceph-mon[192914]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:19 compute-0 python3.9[276511]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:25:19 compute-0 sudo[276509]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:20 compute-0 sudo[276664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqzvdbbytqqaftftkldrzblufvzmnew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897919.555204-536-34437562076946/AnsiballZ_file.py'
Dec 05 01:25:20 compute-0 sudo[276664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:20 compute-0 python3.9[276666]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:25:20 compute-0 sudo[276664]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:20 compute-0 sudo[276740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlumpllxeiectbwwyglernvylqqcoxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897919.555204-536-34437562076946/AnsiballZ_stat.py'
Dec 05 01:25:20 compute-0 sudo[276740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:20 compute-0 python3.9[276742]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:25:20 compute-0 sudo[276740]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:21 compute-0 ceph-mon[192914]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:21 compute-0 sudo[276891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geavqsnfkuzpzosggicfssveiromplkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897921.0771534-536-24196944710177/AnsiballZ_copy.py'
Dec 05 01:25:21 compute-0 sudo[276891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:22 compute-0 python3.9[276893]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897921.0771534-536-24196944710177/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:25:22 compute-0 sudo[276891]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:22 compute-0 sudo[276967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grhlrejqdxgaxnedkonutgrnrckpvopl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897921.0771534-536-24196944710177/AnsiballZ_systemd.py'
Dec 05 01:25:22 compute-0 sudo[276967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:22 compute-0 python3.9[276969]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:25:23 compute-0 sudo[276967]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:23 compute-0 ceph-mon[192914]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:23 compute-0 sudo[277121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdwkboojcqmovqlrgjovkgclsbwfokvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897923.341907-560-168765016690844/AnsiballZ_command.py'
Dec 05 01:25:23 compute-0 sudo[277121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:24 compute-0 python3.9[277123]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:25:24 compute-0 ovs-vsctl[277124]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 05 01:25:24 compute-0 sudo[277121]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:24 compute-0 sudo[277274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlzwmkqioswqxibfhzlhogpruxkvgfib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897924.4376469-568-65310887258311/AnsiballZ_command.py'
Dec 05 01:25:25 compute-0 sudo[277274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:25 compute-0 python3.9[277276]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:25:25 compute-0 ovs-vsctl[277278]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 05 01:25:25 compute-0 sudo[277274]: pam_unix(sudo:session): session closed for user root
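
The db_ctl_base ERR above is the expected failure mode of `ovs-vsctl get` when the requested key is absent: external_ids:ovn-cms-options was never set on this host, so the pipeline through sed receives an error rather than an empty string. A sketch that probes the key without tripping the error, using ovs-vsctl's --if-exists flag:

    # prints the value, or nothing at all when the key is unset
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options

The follow-up `ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options` at 01:25:26 completes without error in this log even though the key was absent.
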
Dec 05 01:25:25 compute-0 ceph-mon[192914]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
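
The pg_autoscaler lines are internally consistent: each pg target is the pool's share of raw space times its bias times a cluster-wide PG budget. The numbers logged here imply a budget of 300, which would match 3 OSDs at the default mon_target_pg_per_osd=100 (an assumption; only the product is visible in the log). A quick check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    # pg_target = used_ratio * bias * (num_osds * mon_target_pg_per_osd)
    awk 'BEGIN { print 7.185749983720779e-06 * 1.0 * 300 }'   # 0.00215572  ('.mgr')
    awk 'BEGIN { print 5.087256625643029e-07 * 4.0 * 300 }'   # 0.000610471 ('cephfs.cephfs.meta')

The target is then quantized to a power of two subject to per-pool floors, hence "quantized to 1 (current 1)" for '.mgr' and "quantized to 16 (current 32)" for the metadata pool, whose default pg_num_min is 16.
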
Dec 05 01:25:26 compute-0 sudo[277429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjurxszeqgunpefpapwotgxzmdzbqcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897925.816801-582-85271211357231/AnsiballZ_command.py'
Dec 05 01:25:26 compute-0 sudo[277429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:26 compute-0 python3.9[277431]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:25:26 compute-0 ovs-vsctl[277432]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 05 01:25:26 compute-0 sudo[277429]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:27 compute-0 sshd-session[266151]: Connection closed by 192.168.122.30 port 48796
Dec 05 01:25:27 compute-0 sshd-session[266148]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:25:27 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 05 01:25:27 compute-0 systemd[1]: session-53.scope: Consumed 1min 11.461s CPU time.
Dec 05 01:25:27 compute-0 systemd-logind[792]: Session 53 logged out. Waiting for processes to exit.
Dec 05 01:25:27 compute-0 systemd-logind[792]: Removed session 53.
Dec 05 01:25:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:27 compute-0 ceph-mon[192914]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:29 compute-0 ceph-mon[192914]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:29 compute-0 podman[158197]: time="2025-12-05T01:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
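
These two GET lines are the podman system service answering its REST API on the socket the telemetry containers use (the podman_exporter config below sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same endpoint can be queried by hand; a sketch:

    # the hostname after the socket is arbitrary when using --unix-socket
    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true' | head -c 200
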
Dec 05 01:25:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:25:31 compute-0 ceph-mon[192914]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:31 compute-0 podman[277457]: 2025-12-05 01:25:31.714492279 +0000 UTC m=+0.121027371 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:25:31 compute-0 podman[277459]: 2025-12-05 01:25:31.741684711 +0000 UTC m=+0.148100640 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 01:25:31 compute-0 podman[277458]: 2025-12-05 01:25:31.745874429 +0000 UTC m=+0.146863096 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
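
The three podman health_status events above are the periodic healthcheck timers firing for ceilometer_agent_compute, ovn_controller and podman_exporter; all report health_status=healthy with a failing streak of 0. The same check can be run on demand; a sketch:

    # exits 0 when the container's configured healthcheck command passes
    podman healthcheck run ovn_controller && echo healthy
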
Dec 05 01:25:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:32 compute-0 sshd-session[277523]: Accepted publickey for zuul from 192.168.122.30 port 36048 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:25:32 compute-0 systemd-logind[792]: New session 54 of user zuul.
Dec 05 01:25:32 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec 05 01:25:32 compute-0 sshd-session[277523]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:25:32 compute-0 podman[277525]: 2025-12-05 01:25:32.384634263 +0000 UTC m=+0.119441946 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:25:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:33 compute-0 ceph-mon[192914]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:33 compute-0 python3.9[277696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:25:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:35 compute-0 sudo[277850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vniitdmsjyxzkkhohozugztmsslybqrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897934.3618968-34-64115628041875/AnsiballZ_file.py'
Dec 05 01:25:35 compute-0 sudo[277850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:35 compute-0 python3.9[277852]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:35 compute-0 sudo[277850]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:35 compute-0 ceph-mon[192914]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:36 compute-0 sudo[278002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svwnjiendkmgjuffcxohytxxonzmzkvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897935.5572715-34-234804804059514/AnsiballZ_file.py'
Dec 05 01:25:36 compute-0 sudo[278002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:36 compute-0 python3.9[278004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:36 compute-0 sudo[278002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:36 compute-0 ceph-mon[192914]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:37 compute-0 sudo[278170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmxbqjxtstgmrdwwxdzpskrbakfbdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897936.6055954-34-197067012280577/AnsiballZ_file.py'
Dec 05 01:25:37 compute-0 sudo[278170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:37 compute-0 podman[278128]: 2025-12-05 01:25:37.209081842 +0000 UTC m=+0.155698723 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec 05 01:25:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:37 compute-0 python3.9[278175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:37 compute-0 sudo[278170]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:38 compute-0 sudo[278325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daybzebptkdnphdmveytxsgehwphkvzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897937.62336-34-227697219688196/AnsiballZ_file.py'
Dec 05 01:25:38 compute-0 sudo[278325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:38 compute-0 python3.9[278327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:38 compute-0 sudo[278325]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:39 compute-0 sudo[278477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehylycguqjipoxiwfajerxzguewqcrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897938.5970428-34-136772340755225/AnsiballZ_file.py'
Dec 05 01:25:39 compute-0 sudo[278477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:39 compute-0 python3.9[278479]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:39 compute-0 sudo[278477]: pam_unix(sudo:session): session closed for user root
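
The ansible.builtin.file tasks between 01:25:35 and 01:25:39 build the neutron OVN metadata directory tree, owned by zuul and labeled container_file_t so the containerized agent can write into it. A rough shell equivalent for the last task (paths and attributes taken from the log; install/chcon stand in for the module):

    install -d -o zuul -g zuul -m 0755 /var/lib/neutron/external/pids
    chcon -t container_file_t /var/lib/neutron/external/pids
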
Dec 05 01:25:39 compute-0 ceph-mon[192914]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:40 compute-0 podman[278603]: 2025-12-05 01:25:40.365510141 +0000 UTC m=+0.115293401 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec 05 01:25:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:40 compute-0 python3.9[278647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:25:41 compute-0 ceph-mon[192914]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:41 compute-0 sudo[278800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlbgfvimotpbsabumhqdavpfhashxlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897940.9704642-78-253905760203254/AnsiballZ_seboolean.py'
Dec 05 01:25:41 compute-0 sudo[278800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:41 compute-0 python3.9[278802]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
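
ansible.posix.seboolean with persistent=True is the moral equivalent of toggling the boolean with -P:

    setsebool -P virt_sandbox_use_netlink on
    getsebool virt_sandbox_use_netlink   # verify: ... --> on
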
Dec 05 01:25:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:42 compute-0 sudo[278803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:42 compute-0 sudo[278803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:42 compute-0 sudo[278803]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:42 compute-0 sudo[278800]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:42 compute-0 sudo[278828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:25:42 compute-0 sudo[278828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:42 compute-0 sudo[278828]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:42 compute-0 sudo[278874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:42 compute-0 sudo[278874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:42 compute-0 sudo[278874]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:42 compute-0 sudo[278902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:25:42 compute-0 sudo[278902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:43 compute-0 sudo[278902]: pam_unix(sudo:session): session closed for user root
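
Interleaved with the zuul plays, cephadm's SSH orchestration appears as short ceph-admin sudo bursts: a /bin/true probe, a `which python3` lookup, then the staged cephadm binary itself. gather-facts only collects host inventory and can be replayed by hand (a sketch; the hashed filename is copied from the log above and is host-specific):

    sudo /bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d gather-facts
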
Dec 05 01:25:43 compute-0 ceph-mon[192914]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0ea22727-3077-40b9-81d3-a451c7a7e3ed does not exist
Dec 05 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b8c9da6d-d1d5-4cec-a456-bc27013d7b34 does not exist
Dec 05 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7ca161fd-9f8b-4ee2-8474-af2a638cc50c does not exist
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:25:43 compute-0 sudo[279083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:43 compute-0 python3.9[279082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:43 compute-0 sudo[279083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:43 compute-0 sudo[279083]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:43 compute-0 sudo[279108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:25:43 compute-0 sudo[279108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:43 compute-0 sudo[279108]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:44 compute-0 sudo[279156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:44 compute-0 sudo[279156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:44 compute-0 sudo[279156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:44 compute-0 sudo[279205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:25:44 compute-0 sudo[279205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
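
This is cephadm's OSD deployment path: ceph-volume is handed three pre-created logical volumes (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) rather than raw disks, and CEPH_VOLUME_OSDSPEC_AFFINITY pins the result to the 'default_drive_group' spec. Stripped of the cephadm wrapper, the inner command is:

    # --no-auto disables ceph-volume's automatic fast/slow device grouping;
    # --no-systemd skips unit activation, since cephadm manages the OSD units
    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd
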
Dec 05 01:25:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:25:44 compute-0 python3.9[279340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897942.9652357-86-187112415400862/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
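
The copy task that materializes /var/lib/neutron/ovn_metadata_haproxy_wrapper logs the expected SHA-1 of the rendered haproxy.j2 template, which makes after-the-fact verification trivial:

    # should print 95c62e64c8f82dd9393a560d1b052dc98d38f810
    sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper
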
Dec 05 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.839555973 +0000 UTC m=+0.100458766 container create 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.789958933 +0000 UTC m=+0.050861786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:44 compute-0 systemd[1]: Started libpod-conmon-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope.
Dec 05 01:25:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.985760679 +0000 UTC m=+0.246663482 container init 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:44 compute-0 podman[279359]: 2025-12-05 01:25:44.992919759 +0000 UTC m=+0.106539085 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.00399574 +0000 UTC m=+0.264898573 container start 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:25:45 compute-0 naughty_hamilton[279373]: 167 167
Dec 05 01:25:45 compute-0 systemd[1]: libpod-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope: Deactivated successfully.
Dec 05 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.011532121 +0000 UTC m=+0.272434904 container attach 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.011923112 +0000 UTC m=+0.272825875 container died 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f8cc186e27aae0e976b23ddf6eab9d8e00ab44677f70f5f786b08adb11a11f-merged.mount: Deactivated successfully.
Dec 05 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.07824067 +0000 UTC m=+0.339143433 container remove 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:45 compute-0 systemd[1]: libpod-conmon-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope: Deactivated successfully.
Dec 05 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.320310691 +0000 UTC m=+0.092548033 container create b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.282551804 +0000 UTC m=+0.054789186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:45 compute-0 systemd[1]: Started libpod-conmon-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope.
Dec 05 01:25:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.488990077 +0000 UTC m=+0.261227449 container init b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.515705625 +0000 UTC m=+0.287942967 container start b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.52406417 +0000 UTC m=+0.296301562 container attach b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:45 compute-0 ceph-mon[192914]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:45 compute-0 python3.9[279575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:25:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:46 compute-0 python3.9[279710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897945.1367009-101-249378286908391/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:46 compute-0 elated_knuth[279520]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:25:46 compute-0 elated_knuth[279520]: --> relative data size: 1.0
Dec 05 01:25:46 compute-0 elated_knuth[279520]: --> All data devices are unavailable
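
"All data devices are unavailable" means ceph-volume rejected all three LVs as candidates for new OSDs, most plausibly because they already carry the OSDs backing the 60 GiB cluster reported in the pgmap lines, so this batch run is a no-op rather than a failure. The orchestrator double-checks at 01:25:47 by inventorying what is already on the LVs:

    # what cephadm runs next in this log; prints existing OSD metadata per LV
    ceph-volume lvm list --format json
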
Dec 05 01:25:46 compute-0 systemd[1]: libpod-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Deactivated successfully.
Dec 05 01:25:46 compute-0 systemd[1]: libpod-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Consumed 1.215s CPU time.
Dec 05 01:25:46 compute-0 podman[279476]: 2025-12-05 01:25:46.799704267 +0000 UTC m=+1.571941589 container died b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56-merged.mount: Deactivated successfully.
Dec 05 01:25:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:47 compute-0 podman[279476]: 2025-12-05 01:25:47.415444637 +0000 UTC m=+2.187681979 container remove b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:25:47 compute-0 sudo[279205]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:47 compute-0 systemd[1]: libpod-conmon-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Deactivated successfully.
Dec 05 01:25:47 compute-0 sudo[279830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:47 compute-0 sudo[279830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:47 compute-0 sudo[279830]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:47 compute-0 ceph-mon[192914]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:47 compute-0 sudo[279863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:25:47 compute-0 sudo[279863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:47 compute-0 sudo[279863]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:47 compute-0 sudo[279938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecjjrjidrlvvcxismguplrkkdiwyuqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897947.1861105-118-253698570386955/AnsiballZ_setup.py'
Dec 05 01:25:47 compute-0 sudo[279938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:47 compute-0 sudo[279920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:47 compute-0 sudo[279920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:47 compute-0 sudo[279920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:47 compute-0 sudo[279958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:25:47 compute-0 sudo[279958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:48 compute-0 python3.9[279953]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.425106973 +0000 UTC m=+0.068026867 container create 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:25:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:48 compute-0 systemd[1]: Started libpod-conmon-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope.
Dec 05 01:25:48 compute-0 sudo[279938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.398688503 +0000 UTC m=+0.041608487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.529743545 +0000 UTC m=+0.172663469 container init 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.546944847 +0000 UTC m=+0.189864751 container start 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:25:48 compute-0 modest_khorana[280044]: 167 167
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.553064208 +0000 UTC m=+0.195984142 container attach 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:25:48 compute-0 systemd[1]: libpod-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope: Deactivated successfully.
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.554337164 +0000 UTC m=+0.197257138 container died 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c44cbb2975d258eb98d3348861889f43ce52031c880dddf4a3fc98bbb804f6-merged.mount: Deactivated successfully.
Dec 05 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.619592962 +0000 UTC m=+0.262512856 container remove 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:25:48 compute-0 systemd[1]: libpod-conmon-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope: Deactivated successfully.
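[Annotation] Short-lived containers like modest_khorana above are cephadm running a single ceph-volume command inside the Ceph image, and the podman journal entries follow the same lifecycle every time: image pull, container create, init, start, attach, died, remove, after which systemd deactivates the libpod and conmon scopes. The same sequence can be watched live with `podman events`; a sketch, assuming podman 4.x (field casing in the JSON output has varied across releases, hence the fallback):

    import json
    import subprocess

    # Stream podman events as one JSON object per line and print the
    # lifecycle transitions for container events only.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            when = ev.get("Time") or ev.get("time")
            print(when, ev.get("Name"), ev.get("Status"))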
Dec 05 01:25:48 compute-0 podman[280091]: 2025-12-05 01:25:48.859426441 +0000 UTC m=+0.078909882 container create c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:25:48 compute-0 podman[280091]: 2025-12-05 01:25:48.828990288 +0000 UTC m=+0.048473739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:48 compute-0 systemd[1]: Started libpod-conmon-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope.
Dec 05 01:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
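[Annotation] The four xfs warnings above are the kernel noting that these overlay mounts carry classic 32-bit xfs inode timestamps, which run out at epoch 0x7fffffff; filesystems created with the xfs bigtime feature extend the limit to the year 2486. The quoted cutoff converts to a date as follows:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second, the limit
    # quoted by the kernel for non-bigtime xfs timestamps.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00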
Dec 05 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.013125297 +0000 UTC m=+0.232608768 container init c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.041365978 +0000 UTC m=+0.260849429 container start c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.046751839 +0000 UTC m=+0.266235360 container attach c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:49 compute-0 sudo[280162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdmeulnbrmcbieikzkapzmgnxqdkqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897947.1861105-118-253698570386955/AnsiballZ_dnf.py'
Dec 05 01:25:49 compute-0 sudo[280162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:49 compute-0 python3.9[280164]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
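[Annotation] The ansible dnf invocation above (name=['openvswitch'] state=present) is the package-install step for the openvswitch.service start that follows at 01:25:52; state=present is idempotent, so a rerun reports "ok" without a transaction when the RPM is already installed. The core check-then-install logic, as a sketch rather than the module's actual implementation:

    import subprocess

    PKG = "openvswitch"

    # rpm -q exits 0 only when the package is installed, which is the
    # cheap test that lets a rerun report "ok" instead of "changed".
    installed = subprocess.run(
        ["rpm", "-q", PKG], capture_output=True
    ).returncode == 0

    if installed:
        print(f"ok: {PKG} already present")
    else:
        subprocess.run(["dnf", "install", "-y", PKG], check=True)
        print(f"changed: installed {PKG}")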
Dec 05 01:25:49 compute-0 ceph-mon[192914]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:49 compute-0 great_bouman[280134]: {
Dec 05 01:25:49 compute-0 great_bouman[280134]:     "0": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:         {
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "devices": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "/dev/loop3"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             ],
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_name": "ceph_lv0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_size": "21470642176",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "name": "ceph_lv0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "tags": {
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_name": "ceph",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.crush_device_class": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.encrypted": "0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_id": "0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.vdo": "0"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             },
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "vg_name": "ceph_vg0"
Dec 05 01:25:49 compute-0 great_bouman[280134]:         }
Dec 05 01:25:49 compute-0 great_bouman[280134]:     ],
Dec 05 01:25:49 compute-0 great_bouman[280134]:     "1": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:         {
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "devices": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "/dev/loop4"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             ],
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_name": "ceph_lv1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_size": "21470642176",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "name": "ceph_lv1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "tags": {
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_name": "ceph",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.crush_device_class": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.encrypted": "0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_id": "1",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.vdo": "0"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             },
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "vg_name": "ceph_vg1"
Dec 05 01:25:49 compute-0 great_bouman[280134]:         }
Dec 05 01:25:49 compute-0 great_bouman[280134]:     ],
Dec 05 01:25:49 compute-0 great_bouman[280134]:     "2": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:         {
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "devices": [
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "/dev/loop5"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             ],
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_name": "ceph_lv2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_size": "21470642176",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "name": "ceph_lv2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "tags": {
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.cluster_name": "ceph",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.crush_device_class": "",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.encrypted": "0",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osd_id": "2",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:                 "ceph.vdo": "0"
Dec 05 01:25:49 compute-0 great_bouman[280134]:             },
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "type": "block",
Dec 05 01:25:49 compute-0 great_bouman[280134]:             "vg_name": "ceph_vg2"
Dec 05 01:25:49 compute-0 great_bouman[280134]:         }
Dec 05 01:25:49 compute-0 great_bouman[280134]:     ]
Dec 05 01:25:49 compute-0 great_bouman[280134]: }
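[Annotation] The great_bouman JSON above is the response to the `ceph-volume -- lvm list --format json` call issued at 01:25:47 (sudo[279958]): a map of OSD id to the logical volumes backing it, with the Ceph metadata present both as the flat lv_tags string and as the parsed tags object. A sketch of extracting the useful fields, assuming the JSON has been saved to a hypothetical lvm_list.json:

    import json

    # lvm list is keyed by OSD id; each value is a list of the LVs
    # that back that OSD, with Ceph metadata in the "tags" object.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            devices = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['lv_path']} on {devices} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")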
Dec 05 01:25:49 compute-0 systemd[1]: libpod-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope: Deactivated successfully.
Dec 05 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.924825309 +0000 UTC m=+1.144308750 container died c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b-merged.mount: Deactivated successfully.
Dec 05 01:25:50 compute-0 podman[280091]: 2025-12-05 01:25:50.013515554 +0000 UTC m=+1.232999005 container remove c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:25:50 compute-0 systemd[1]: libpod-conmon-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope: Deactivated successfully.
Dec 05 01:25:50 compute-0 sudo[279958]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:50 compute-0 sudo[280184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:50 compute-0 sudo[280184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:50 compute-0 sudo[280184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:50 compute-0 sudo[280209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:25:50 compute-0 sudo[280209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:50 compute-0 sudo[280209]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:50 compute-0 sudo[280234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:50 compute-0 sudo[280234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:50 compute-0 sudo[280234]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:50 compute-0 sudo[280259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:25:50 compute-0 sudo[280259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:50 compute-0 sudo[280162]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.094407565 +0000 UTC m=+0.055532637 container create 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:25:51 compute-0 systemd[1]: Started libpod-conmon-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope.
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.071823242 +0000 UTC m=+0.032948344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.207452172 +0000 UTC m=+0.168577274 container init 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.222488123 +0000 UTC m=+0.183613215 container start 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:25:51 compute-0 condescending_elgamal[280417]: 167 167
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.230365914 +0000 UTC m=+0.191491066 container attach 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:25:51 compute-0 systemd[1]: libpod-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope: Deactivated successfully.
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.230869548 +0000 UTC m=+0.191994690 container died 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9e6018414f42b2e1a63d430b42ff09309067ed2995006f0a75f7c56a026af3b-merged.mount: Deactivated successfully.
Dec 05 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.307608458 +0000 UTC m=+0.268733550 container remove 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:25:51 compute-0 systemd[1]: libpod-conmon-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope: Deactivated successfully.
Dec 05 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.581336386 +0000 UTC m=+0.091723250 container create dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.543242809 +0000 UTC m=+0.053629723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:25:51 compute-0 ceph-mon[192914]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:51 compute-0 systemd[1]: Started libpod-conmon-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope.
Dec 05 01:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:25:51 compute-0 sudo[280531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smxscomiljlvtirraytfursswkzirocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897950.9369166-130-249289121745380/AnsiballZ_systemd.py'
Dec 05 01:25:51 compute-0 sudo[280531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.748000335 +0000 UTC m=+0.258387209 container init dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.772563934 +0000 UTC m=+0.282950758 container start dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.778205742 +0000 UTC m=+0.288592616 container attach dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:25:52 compute-0 python3.9[280534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:25:52 compute-0 sudo[280531]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:52 compute-0 ceph-mon[192914]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:52 compute-0 elegant_solomon[280529]: {
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_id": 0,
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "type": "bluestore"
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     },
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_id": 1,
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "type": "bluestore"
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     },
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_id": 2,
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:         "type": "bluestore"
Dec 05 01:25:52 compute-0 elegant_solomon[280529]:     }
Dec 05 01:25:52 compute-0 elegant_solomon[280529]: }
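[Annotation] This elegant_solomon JSON answers the `ceph-volume -- raw list --format json` call issued at 01:25:50 (sudo[280259]). Where lvm list is keyed by OSD id and driven by LVM tags, raw list scans block devices for BlueStore labels and is keyed by osd_uuid, reporting each device as its /dev/mapper path. Cross-checking the two views is a cheap consistency test; a sketch, assuming both documents were saved to the hypothetical files lvm_list.json and raw_list.json:

    import json

    # lvm list: osd_id -> [lv, ...]; raw list: osd_uuid -> osd record.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)
    with open("raw_list.json") as fh:
        raw = json.load(fh)

    for osd_uuid, info in raw.items():
        osd_id = str(info["osd_id"])
        lv_fsids = {lv["tags"]["ceph.osd_fsid"]
                    for lv in lvm.get(osd_id, [])}
        verdict = "ok" if osd_uuid in lv_fsids else "MISMATCH"
        print(f"osd.{osd_id} on {info['device']}: {verdict}")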
Dec 05 01:25:52 compute-0 systemd[1]: libpod-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Deactivated successfully.
Dec 05 01:25:52 compute-0 systemd[1]: libpod-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Consumed 1.209s CPU time.
Dec 05 01:25:52 compute-0 podman[280466]: 2025-12-05 01:25:52.985878135 +0000 UTC m=+1.496264989 container died dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a-merged.mount: Deactivated successfully.
Dec 05 01:25:53 compute-0 podman[280466]: 2025-12-05 01:25:53.07743034 +0000 UTC m=+1.587817164 container remove dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:25:53 compute-0 systemd[1]: libpod-conmon-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Deactivated successfully.
Dec 05 01:25:53 compute-0 python3.9[280715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:53 compute-0 sudo[280259]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:25:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:25:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5a34326d-0cf4-4720-9c3a-a0cc826ac947 does not exist
Dec 05 01:25:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1765033a-fa2e-4058-8e76-30069dcbb3f3 does not exist
Dec 05 01:25:53 compute-0 sudo[280730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:25:53 compute-0 sudo[280730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:53 compute-0 sudo[280730]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:53 compute-0 sudo[280782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:25:53 compute-0 sudo[280782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:25:53 compute-0 sudo[280782]: pam_unix(sudo:session): session closed for user root
Dec 05 01:25:53 compute-0 python3.9[280900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897952.4023588-138-123485413661297/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:25:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:54 compute-0 python3.9[281050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:55 compute-0 ceph-mon[192914]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:55 compute-0 python3.9[281171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897954.227336-138-30038818134245/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:25:57 compute-0 python3.9[281321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:57 compute-0 ceph-mon[192914]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:58 compute-0 python3.9[281442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897956.6940145-182-45735763062298/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:25:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:59 compute-0 python3.9[281592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:25:59 compute-0 ceph-mon[192914]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:25:59 compute-0 podman[158197]: time="2025-12-05T01:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
Dec 05 01:26:00 compute-0 python3.9[281713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897958.5798128-182-187535376460068/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
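[Annotation] The repeated stat/copy pairs above (01-rootwrap.conf, 01-neutron-ovn-metadata-agent.conf, 10-neutron-metadata.conf, 05-nova-metadata.conf) are ansible's usual file-deployment pattern: stat computes a sha1 of what is already on disk, and copy rewrites the file only when that checksum differs from the rendered source, which is what keeps reruns cheap. The comparison at the heart of it, sketched for Python 3.9 (the interpreter seen in this log):

    import hashlib
    from pathlib import Path
    from typing import Optional

    def sha1_of(path: Path) -> Optional[str]:
        # sha1 hex digest of the file on disk, or None if absent.
        if not path.exists():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def deploy(rendered: bytes, dest: Path) -> bool:
        # Write only when the content differs; True means "changed".
        if sha1_of(dest) == hashlib.sha1(rendered).hexdigest():
            return False
        dest.write_bytes(rendered)
        return True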
Dec 05 01:26:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:01 compute-0 python3.9[281863]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
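[Annotation] The openstack_network_exporter errors above are largely expected on a compute node: ovn-northd normally runs only on controller nodes, the ovsdb-server probe suggests the exporter is looking for a control socket in a rundir the local daemon is not using, and the dpif-netdev queries fail because this OVS instance has no userspace (PMD) datapath. A quick way to see which control sockets actually exist locally; the patterns below are the usual defaults, an assumption for this deployment:

    import glob

    # Control-socket locations typically probed for OVS/OVN daemons.
    for pat in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pat)
        print(pat, "->", ", ".join(hits) if hits else "none found")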
Dec 05 01:26:01 compute-0 ceph-mon[192914]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:02 compute-0 sudo[282053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzunuwrodsaaqwrmynltbnfsvgcviryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897961.4485412-220-143474826048460/AnsiballZ_file.py'
Dec 05 01:26:02 compute-0 podman[281989]: 2025-12-05 01:26:02.016034628 +0000 UTC m=+0.109342684 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:26:02 compute-0 sudo[282053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:02 compute-0 podman[281990]: 2025-12-05 01:26:02.028612181 +0000 UTC m=+0.120148767 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:26:02 compute-0 podman[281991]: 2025-12-05 01:26:02.072666545 +0000 UTC m=+0.154607062 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 01:26:02 compute-0 python3.9[282074]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:02 compute-0 sudo[282053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:02 compute-0 podman[282157]: 2025-12-05 01:26:02.705029441 +0000 UTC m=+0.119358005 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:26:02 compute-0 sudo[282253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgakehsyrvgdjebcuavijxddzlxqdgph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897962.4478204-228-177318335281311/AnsiballZ_stat.py'
Dec 05 01:26:02 compute-0 sudo[282253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:03 compute-0 python3.9[282255]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:03 compute-0 sudo[282253]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:03 compute-0 ceph-mon[192914]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:03 compute-0 sudo[282331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tssrfrjwenbjxdglkaxxopgrrplucfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897962.4478204-228-177318335281311/AnsiballZ_file.py'
Dec 05 01:26:03 compute-0 sudo[282331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:03 compute-0 python3.9[282333]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:03 compute-0 sudo[282331]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:04 compute-0 sudo[282483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfrcuyseqsqzegopqyrfmiydwmipfdnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897964.0818727-228-140020261790812/AnsiballZ_stat.py'
Dec 05 01:26:04 compute-0 sudo[282483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:04 compute-0 python3.9[282485]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:04 compute-0 sudo[282483]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:05 compute-0 sudo[282561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmrjujmhdqwtcxbflnvqsppxvdmfpskd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897964.0818727-228-140020261790812/AnsiballZ_file.py'
Dec 05 01:26:05 compute-0 sudo[282561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:05 compute-0 python3.9[282563]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:05 compute-0 sudo[282561]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:05 compute-0 ceph-mon[192914]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:06 compute-0 sudo[282713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwaversdjawrloqyvigdjrdprtngfbjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897965.7940893-251-680890374810/AnsiballZ_file.py'
Dec 05 01:26:06 compute-0 sudo[282713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Dec 05 01:26:06 compute-0 python3.9[282715]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:06 compute-0 sudo[282713]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:07 compute-0 sudo[282879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wymqfioxmgxenfrugtzhbgmdcakrvpcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897966.833762-259-215476565356834/AnsiballZ_stat.py'
Dec 05 01:26:07 compute-0 podman[282839]: 2025-12-05 01:26:07.425180678 +0000 UTC m=+0.118454439 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, name=ubi9)
Dec 05 01:26:07 compute-0 sudo[282879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:07 compute-0 ceph-mon[192914]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Dec 05 01:26:07 compute-0 python3.9[282887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:07 compute-0 sudo[282879]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:08 compute-0 sudo[282963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwheoutvuuatrdhjosbnymwzhcazdnji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897966.833762-259-215476565356834/AnsiballZ_file.py'
Dec 05 01:26:08 compute-0 sudo[282963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:08 compute-0 python3.9[282965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:08 compute-0 sudo[282963]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 05 01:26:09 compute-0 sudo[283115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnwwxkvdfqyspoggmqhdyompitcptoph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897968.6439397-271-170907998345119/AnsiballZ_stat.py'
Dec 05 01:26:09 compute-0 sudo[283115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:09 compute-0 python3.9[283117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:09 compute-0 sudo[283115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:09 compute-0 ceph-mon[192914]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 05 01:26:09 compute-0 sudo[283193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yclcexaekmzzoyudyxzqavbffpvgpdqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897968.6439397-271-170907998345119/AnsiballZ_file.py'
Dec 05 01:26:09 compute-0 sudo[283193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:09 compute-0 python3.9[283195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:10 compute-0 sudo[283193]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:10 compute-0 podman[283295]: 2025-12-05 01:26:10.748652826 +0000 UTC m=+0.153878222 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public)
Dec 05 01:26:10 compute-0 sudo[283364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wivgggvkqkbqqnzvanwdbruazrutzlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897970.304398-283-13579201131215/AnsiballZ_systemd.py'
Dec 05 01:26:10 compute-0 sudo[283364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:11 compute-0 python3.9[283366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:26:11 compute-0 systemd[1]: Reloading.
Dec 05 01:26:11 compute-0 systemd-rc-local-generator[283392]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:26:11 compute-0 systemd-sysv-generator[283396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:26:11 compute-0 ceph-mon[192914]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:11 compute-0 sudo[283364]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:12 compute-0 sudo[283553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idknczipjuokpfjzswtpugvgqjexejbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897971.9384708-291-253442566509118/AnsiballZ_stat.py'
Dec 05 01:26:12 compute-0 sudo[283553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:12 compute-0 python3.9[283555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:12 compute-0 sudo[283553]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:12 compute-0 ceph-mon[192914]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:13 compute-0 sudo[283631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfhgkacbkednvkcrleavxmuoayrfdwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897971.9384708-291-253442566509118/AnsiballZ_file.py'
Dec 05 01:26:13 compute-0 sudo[283631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:13 compute-0 python3.9[283633]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:13 compute-0 sudo[283631]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:14 compute-0 sudo[283783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoavgosgkjvcfwihdkmvpxezltkqyhul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897973.527174-303-67960054756081/AnsiballZ_stat.py'
Dec 05 01:26:14 compute-0 sudo[283783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:15 compute-0 python3.9[283785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:15 compute-0 sudo[283783]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:15 compute-0 ceph-mon[192914]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:15 compute-0 podman[283835]: 2025-12-05 01:26:15.715111604 +0000 UTC m=+0.120501507 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:26:15 compute-0 sudo[283876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwfrsodmkkrwwjhrsvdlxcvxpruatpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897973.527174-303-67960054756081/AnsiballZ_file.py'
Dec 05 01:26:15 compute-0 sudo[283876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:15 compute-0 python3.9[283885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:15 compute-0 sudo[283876]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:26:16
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:26:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:17 compute-0 ceph-mon[192914]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:26:17 compute-0 sudo[284035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twcuoxgvexyrvifqnvxtmhllyzapasln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897976.2261786-315-89054096310716/AnsiballZ_systemd.py'
Dec 05 01:26:17 compute-0 sudo[284035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:18 compute-0 python3.9[284037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:26:18 compute-0 systemd[1]: Reloading.
Dec 05 01:26:18 compute-0 systemd-sysv-generator[284067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:26:18 compute-0 systemd-rc-local-generator[284063]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:26:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Dec 05 01:26:18 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 01:26:18 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 01:26:18 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 01:26:18 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 01:26:19 compute-0 sudo[284035]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:19 compute-0 ceph-mon[192914]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Dec 05 01:26:20 compute-0 sudo[284230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twgaxhaqflvkwjywfvdgrzsxlovwolge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897979.4484148-325-212545682487814/AnsiballZ_file.py'
Dec 05 01:26:20 compute-0 sudo[284230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:20 compute-0 python3.9[284232]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:20 compute-0 sudo[284230]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Dec 05 01:26:21 compute-0 sudo[284382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktqjfzpiaatjgyalqvxzmcvfkjpvnhxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897980.58235-333-96165476405112/AnsiballZ_stat.py'
Dec 05 01:26:21 compute-0 sudo[284382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:21 compute-0 python3.9[284384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:21 compute-0 sudo[284382]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:21 compute-0 ceph-mon[192914]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Dec 05 01:26:22 compute-0 sudo[284505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacvinwuusiqutdttpawxguotvqgydre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897980.58235-333-96165476405112/AnsiballZ_copy.py'
Dec 05 01:26:22 compute-0 sudo[284505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:22 compute-0 python3.9[284507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897980.58235-333-96165476405112/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:22 compute-0 sudo[284505]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:23 compute-0 sudo[284657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixezoaanrnntaxvmbbmohhovgxepbkkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897982.7498956-350-6030619596225/AnsiballZ_file.py'
Dec 05 01:26:23 compute-0 sudo[284657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:23 compute-0 python3.9[284659]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:26:23 compute-0 sudo[284657]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:23 compute-0 ceph-mon[192914]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:24 compute-0 sudo[284809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnszvllwlhkwamokjtfbrlsroqxybqud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897983.782176-358-48370933557769/AnsiballZ_stat.py'
Dec 05 01:26:24 compute-0 sudo[284809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:24 compute-0 python3.9[284811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:26:24 compute-0 sudo[284809]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:25 compute-0 sudo[284932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soukjyydtktqhmrjziiftbehthczupfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897983.782176-358-48370933557769/AnsiballZ_copy.py'
Dec 05 01:26:25 compute-0 sudo[284932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:25 compute-0 python3.9[284934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897983.782176-358-48370933557769/.source.json _original_basename=.d67sywwq follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:25 compute-0 sudo[284932]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:25 compute-0 ceph-mon[192914]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:26:26 compute-0 sudo[285084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sivxkxjrfaqamdfanbsxwicewcshutza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897985.7307925-373-228255472329973/AnsiballZ_file.py'
Dec 05 01:26:26 compute-0 sudo[285084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:26 compute-0 python3.9[285086]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:26 compute-0 sudo[285084]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:27 compute-0 sudo[285236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcvagpnclrygthxrqxhgaireskbsfxut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897986.8143208-381-144538132980116/AnsiballZ_stat.py'
Dec 05 01:26:27 compute-0 sudo[285236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:27 compute-0 sudo[285236]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:27 compute-0 ceph-mon[192914]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:28 compute-0 ceph-mon[192914]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:28 compute-0 sudo[285359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icohruazzynedtllpiiratjbbqncsxgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897986.8143208-381-144538132980116/AnsiballZ_copy.py'
Dec 05 01:26:28 compute-0 sudo[285359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:28 compute-0 sudo[285359]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:29 compute-0 podman[158197]: time="2025-12-05T01:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec 05 01:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6837 "" "Go-http-client/1.1"
Dec 05 01:26:29 compute-0 sudo[285511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurphotdrrvidqlbvzizpfanfeorlmdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897989.3551502-398-267155104768058/AnsiballZ_container_config_data.py'
Dec 05 01:26:29 compute-0 sudo[285511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:30 compute-0 python3.9[285513]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 05 01:26:30 compute-0 sudo[285511]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:26:31 compute-0 sudo[285663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsrxivdgfpphvhpemygccntnquczuzgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897990.857215-407-45446238484359/AnsiballZ_container_config_hash.py'
Dec 05 01:26:31 compute-0 sudo[285663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:31 compute-0 ceph-mon[192914]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:31 compute-0 python3.9[285665]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:26:31 compute-0 sudo[285663]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:32 compute-0 podman[285765]: 2025-12-05 01:26:32.728034439 +0000 UTC m=+0.125463336 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:26:32 compute-0 podman[285767]: 2025-12-05 01:26:32.757127774 +0000 UTC m=+0.149619053 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:26:32 compute-0 sudo[285889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffhxrqmiguyvkzpfxbxjmqpiexnhtil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764897992.1088665-416-153725275012206/AnsiballZ_podman_container_info.py'
Dec 05 01:26:32 compute-0 podman[285770]: 2025-12-05 01:26:32.787055362 +0000 UTC m=+0.169961632 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:26:32 compute-0 sudo[285889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:32 compute-0 podman[285863]: 2025-12-05 01:26:32.849671207 +0000 UTC m=+0.095039174 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 05 01:26:32 compute-0 python3.9[285895]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
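The two health_status events above are emitted when podman's healthcheck runner executes the 'test' command declared in each container's healthcheck config, and the ansible-containers.podman.podman_container_info task that follows collects the same state for all containers. A minimal sketch, in Python, of reading that health state directly; it assumes only the podman CLI and the container names visible in this log, and is illustrative rather than the Ansible module's actual implementation:

    import json
    import subprocess

    def container_health(name: str) -> str:
        # Inspect one container and pull out the healthcheck status that
        # podman_container_info would report ("healthy" in the log above).
        out = subprocess.run(
            ["podman", "container", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # State.Health only exists for containers that define a healthcheck.
        return state.get("Health", {}).get("Status", "none")

    for name in ("ovn_controller", "ceilometer_agent_ipmi"):
        print(name, container_health(name))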
Dec 05 01:26:33 compute-0 sudo[285889]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:33 compute-0 ceph-mon[192914]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:35 compute-0 sudo[286080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfyjdzktdcxjozurtmgcrpwaykvzngau ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764897994.297759-429-93405439963528/AnsiballZ_edpm_container_manage.py'
Dec 05 01:26:35 compute-0 sudo[286080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:35 compute-0 python3[286082]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
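The ansible-edpm_container_manage invocation above names a config_dir, a config_patterns glob, and concurrency=1. A sketch of the directory scan those parameters imply, with the file layout assumed from the log rather than taken from the module's source:

    import glob
    import json
    import os

    config_dir = "/var/lib/edpm-config/container-startup-config/ovn_metadata_agent"
    config_patterns = "*.json"

    # With concurrency=1 each matching container definition is applied in
    # turn; the real module also diffs the running container against the
    # desired config before (re)creating it.
    for path in sorted(glob.glob(os.path.join(config_dir, config_patterns))):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            config_data = json.load(f)
        print(f"would manage container {name!r} using {len(config_data)} config keys")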
Dec 05 01:26:35 compute-0 ceph-mon[192914]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:37 compute-0 ceph-mon[192914]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:37 compute-0 podman[286122]: 2025-12-05 01:26:37.647366435 +0000 UTC m=+0.066358600 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler)
Dec 05 01:26:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:39 compute-0 ceph-mon[192914]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:40 compute-0 ceph-mon[192914]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.545 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
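The pair of messages above says the source defines more pollsters than the pool has worker threads, so the pollsters queue and run serially. A self-contained illustration of that effect with placeholder pollster names:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]

    # One worker for eight tasks: submissions queue up and run one at a
    # time, which is why the manager warns that the cycle may take longer.
    with ThreadPoolExecutor(max_workers=1) as executor:
        start = time.monotonic()
        list(executor.map(poll, pollsters))
        print(f"8 pollsters, 1 thread: {time.monotonic() - start:.1f}s")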
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
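Read together, the register/discover/skip/finish messages above trace one complete polling cycle: each pollster is registered against a shared executor, its local_instances discovery runs first, the pollster is skipped when discovery returns no resources (no instances are running on this node yet), and a finish message is logged either way. A compressed sketch of that control flow, with placeholder callables standing in for the ceilometer pollster and discovery objects in the log:

    from concurrent.futures import ThreadPoolExecutor

    def run_cycle(pollsters, discover):
        # pollsters maps meter name -> callable taking discovered resources;
        # discover is the cycle's discovery method. Both are placeholders,
        # not the real ceilometer classes named in the log above.
        discovery_cache = {}
        with ThreadPoolExecutor(max_workers=1) as executor:
            def run_one(name, pollster):
                resources = discovery_cache.setdefault(
                    "local_instances", discover())
                if not resources:
                    print(f"Skip pollster {name}, no resources found this cycle")
                    return
                pollster(resources)

            futures = {executor.submit(run_one, n, p): n
                       for n, p in pollsters.items()}
            for future, name in futures.items():
                future.result()
                print(f"Finished processing pollster [{name}].")

    run_cycle({"cpu": print, "memory.usage": print}, discover=lambda: [])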
Dec 05 01:26:42 compute-0 ceph-mon[192914]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:44 compute-0 podman[286157]: 2025-12-05 01:26:44.056977183 +0000 UTC m=+2.470574575 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Dec 05 01:26:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:45 compute-0 ceph-mon[192914]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:26:46 compute-0 podman[286207]: 2025-12-05 01:26:46.244341973 +0000 UTC m=+0.362261880 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:26:46 compute-0 podman[286093]: 2025-12-05 01:26:46.24813257 +0000 UTC m=+10.735390328 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 01:26:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:46 compute-0 podman[286251]: 2025-12-05 01:26:46.51298363 +0000 UTC m=+0.103405188 container create 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 01:26:46 compute-0 podman[286251]: 2025-12-05 01:26:46.457153426 +0000 UTC m=+0.047575064 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 01:26:46 compute-0 python3[286082]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
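The PODMAN-CONTAINER-DEBUG line shows how the role flattens the config_data dict into a podman create command line. A simplified sketch of that translation covering only the keys visible above; the mapping is reconstructed from the logged command, not taken from the role's generator:

    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        # Translate a subset of config_data keys into podman create flags,
        # in the same shape as the logged command.
        argv = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={value}"]
        if cfg.get("net") == "host":
            argv += ["--network", "host"]
        if cfg.get("pid") == "host":
            argv += ["--pid", "host"]
        if cfg.get("privileged"):
            argv.append("--privileged=True")
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            argv += ["--volume", volume]
        return argv + [cfg["image"]]

    cfg = {"net": "host", "pid": "host", "privileged": True, "user": "root",
           "image": "quay.io/podified-antelope-centos9/"
                    "openstack-neutron-metadata-agent-ovn:current-podified"}
    print(" ".join(podman_create_argv("ovn_metadata_agent", cfg)))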
Dec 05 01:26:46 compute-0 sudo[286080]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:47 compute-0 ceph-mon[192914]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:47 compute-0 sudo[286435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jarawtsywnwwsufmxyonmkttquqnprge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898007.0467231-437-152237363297162/AnsiballZ_stat.py'
Dec 05 01:26:47 compute-0 sudo[286435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:47 compute-0 python3.9[286437]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:26:47 compute-0 sudo[286435]: pam_unix(sudo:session): session closed for user root
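[editor's note] The ansible-ansible.builtin.stat call above probes /etc/sysconfig/podman_drop_in with get_checksum=True and checksum_algorithm=sha1. Roughly the same facts can be gathered with the standard library; this is an illustration of what the module reports, not Ansible's implementation:

    import hashlib
    import os
    import stat

    path = "/etc/sysconfig/podman_drop_in"  # path taken from the log line
    if os.path.exists(path):
        st = os.stat(path)
        info = {
            "exists": True,
            "mode": oct(stat.S_IMODE(st.st_mode)),
            "size": st.st_size,
        }
        with open(path, "rb") as f:  # sha1, as checksum_algorithm=sha1 requests
            info["checksum"] = hashlib.sha1(f.read()).hexdigest()
    else:
        info = {"exists": False}
    print(info)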
Dec 05 01:26:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:48 compute-0 sudo[286589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iolauhbzcnnxdzaiaiaexqafvfuijnvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898008.3074985-446-193954103204073/AnsiballZ_file.py'
Dec 05 01:26:48 compute-0 sudo[286589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:49 compute-0 python3.9[286591]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:49 compute-0 sudo[286589]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:49 compute-0 ceph-mon[192914]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:49 compute-0 sudo[286666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdqgyqtcicwuvmiypfddugxtnckhqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898008.3074985-446-193954103204073/AnsiballZ_stat.py'
Dec 05 01:26:49 compute-0 sudo[286666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:49 compute-0 python3.9[286668]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:26:49 compute-0 sudo[286666]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:50 compute-0 sudo[286817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrcxmwgjvyprfqguabpibdsvzpdzvxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898009.9640265-446-10032884565177/AnsiballZ_copy.py'
Dec 05 01:26:50 compute-0 sudo[286817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:50 compute-0 ceph-mon[192914]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:50 compute-0 python3.9[286819]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898009.9640265-446-10032884565177/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:26:50 compute-0 sudo[286817]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:51 compute-0 sudo[286893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbztirijhnclnxvdnlzfmlkvgsnfrzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898009.9640265-446-10032884565177/AnsiballZ_systemd.py'
Dec 05 01:26:51 compute-0 sudo[286893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:51 compute-0 python3.9[286895]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:26:51 compute-0 systemd[1]: Reloading.
Dec 05 01:26:51 compute-0 systemd-rc-local-generator[286918]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:26:51 compute-0 systemd-sysv-generator[286924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:26:52 compute-0 sudo[286893]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:52 compute-0 sudo[287003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyedibfqstpnclgqmuexzvlscabbeffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898009.9640265-446-10032884565177/AnsiballZ_systemd.py'
Dec 05 01:26:52 compute-0 sudo[287003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:26:53 compute-0 python3.9[287005]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
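[editor's note] Taken together, the two ansible-systemd invocations at 01:26:51 and 01:26:53 perform a daemon reload, then enable and restart the freshly written unit. A rough shell-level equivalent, assuming systemctl on PATH and root privileges:

    import subprocess

    UNIT = "edpm_ovn_metadata_agent.service"  # unit name from the log line

    # Equivalent of daemon_reload=True followed by state=restarted, enabled=True:
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", UNIT], check=True)
    subprocess.run(["systemctl", "restart", UNIT], check=True)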
Dec 05 01:26:53 compute-0 systemd[1]: Reloading.
Dec 05 01:26:53 compute-0 systemd-rc-local-generator[287031]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:26:53 compute-0 systemd-sysv-generator[287037]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:26:53 compute-0 ceph-mon[192914]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:53 compute-0 sudo[287045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:53 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 05 01:26:53 compute-0 sudo[287045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:53 compute-0 sudo[287045]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:26:53 compute-0 sudo[287079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393669ba6562fd4aacce8ce9edf46b11d718390a30075dad6e312fb8e357d173/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393669ba6562fd4aacce8ce9edf46b11d718390a30075dad6e312fb8e357d173/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:53 compute-0 sudo[287079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:53 compute-0 sudo[287079]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.
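[editor's note] The transient unit above is podman's per-container healthcheck runner; the container was created with a healthcheck test of /openstack/healthcheck. The same probe can be fired by hand, which is all the periodic runner does (container ID from the log; podman assumed on PATH):

    import subprocess

    CID = "33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638"

    # Run the container's configured healthcheck once; exit status 0 means
    # the /openstack/healthcheck test passed (health_status=healthy below).
    result = subprocess.run(["podman", "healthcheck", "run", CID])
    print("healthy" if result.returncode == 0 else "unhealthy")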
Dec 05 01:26:53 compute-0 podman[287073]: 2025-12-05 01:26:53.896271103 +0000 UTC m=+0.192289910 container init 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:26:53 compute-0 ovn_metadata_agent[287107]: + sudo -E kolla_set_configs
Dec 05 01:26:53 compute-0 podman[287073]: 2025-12-05 01:26:53.932238339 +0000 UTC m=+0.228257166 container start 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 01:26:53 compute-0 edpm-start-podman-container[287073]: ovn_metadata_agent
Dec 05 01:26:53 compute-0 sudo[287115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:53 compute-0 sudo[287115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:53 compute-0 sudo[287115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Validating config file
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Copying service configuration files
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Writing out command to execute
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: ++ cat /run_command
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + CMD=neutron-ovn-metadata-agent
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + ARGS=
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + sudo kolla_copy_cacerts
Dec 05 01:26:54 compute-0 sudo[287165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:26:54 compute-0 sudo[287165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + [[ ! -n '' ]]
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + . kolla_extend_start
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: Running command: 'neutron-ovn-metadata-agent'
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + umask 0022
Dec 05 01:26:54 compute-0 ovn_metadata_agent[287107]: + exec neutron-ovn-metadata-agent
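[editor's note] The kolla_set_configs output above implies the usual kolla config.json layout: a command to exec (written to /run_command) plus lists of files to copy and permissions to set. Below is a reconstruction of the likely shape of /var/lib/kolla/config_files/config.json; only the copied rootwrap file and the command are taken from the log, the remaining field values are assumed:

    # Assumed reconstruction; owner/perm values are illustrative.
    config = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }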
Dec 05 01:26:54 compute-0 podman[287126]: 2025-12-05 01:26:54.065285061 +0000 UTC m=+0.116675615 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:26:54 compute-0 edpm-start-podman-container[287071]: Creating additional drop-in dependency for "ovn_metadata_agent" (33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638)
Dec 05 01:26:54 compute-0 systemd[1]: Reloading.
Dec 05 01:26:54 compute-0 systemd-rc-local-generator[287235]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:26:54 compute-0 systemd-sysv-generator[287241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:26:54 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 05 01:26:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:54 compute-0 sudo[287003]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:54 compute-0 sudo[287165]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fbe546c2-0d9a-4e20-95c8-cfc58367be8f does not exist
Dec 05 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a9a86025-4371-409f-97b3-ff0e6e9b8ea7 does not exist
Dec 05 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c2fa5975-d35d-42d6-966b-2bd57fce7343 does not exist
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
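[editor's note] The burst of mon_command dispatches above comes from the cephadm mgr module refreshing its view: a minimal-conf render, keyring fetches, and an osd tree query filtered to destroyed OSDs. The same queries can be reproduced from the CLI; a sketch assuming an admin keyring is available:

    import json
    import subprocess

    # CLI equivalents of the mon commands dispatched above:
    subprocess.run(["ceph", "config", "generate-minimal-conf"], check=True)
    subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"], check=True)

    # "osd tree" restricted to destroyed OSDs, as JSON:
    out = subprocess.run(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out)["nodes"])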
Dec 05 01:26:54 compute-0 sudo[287301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:54 compute-0 sudo[287301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:54 compute-0 sudo[287301]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:54 compute-0 sudo[287327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:26:54 compute-0 sudo[287327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:54 compute-0 sudo[287327]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:55 compute-0 sudo[287352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:55 compute-0 sudo[287352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:55 compute-0 sudo[287352]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:55 compute-0 sshd-session[277535]: Connection closed by 192.168.122.30 port 36048
Dec 05 01:26:55 compute-0 sshd-session[277523]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:26:55 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec 05 01:26:55 compute-0 systemd-logind[792]: Session 54 logged out. Waiting for processes to exit.
Dec 05 01:26:55 compute-0 systemd[1]: session-54.scope: Consumed 1min 30.337s CPU time.
Dec 05 01:26:55 compute-0 systemd-logind[792]: Removed session 54.
Dec 05 01:26:55 compute-0 sudo[287377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:26:55 compute-0 sudo[287377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
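[editor's note] The cephadm call above passes `--config-json -`, so the ceph.conf/keyring blob arrives on stdin, and the binary then wraps `ceph-volume lvm batch` over the three pre-created logical volumes. A trimmed sketch of driving the same script from Python: fsid, script path, and LV paths come from the log; the --env/--image flags are omitted here and the payload keys are assumed:

    import json
    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Assumed payload shape; cephadm reads it from stdin due to "--config-json -".
    payload = {"config": "# ceph.conf contents", "keyring": "# client keyring"}

    subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
         "--yes", "--no-systemd"],
        input=json.dumps(payload), text=True, check=True,
    )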
Dec 05 01:26:55 compute-0 ceph-mon[192914]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.718457863 +0000 UTC m=+0.075025440 container create 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:26:55 compute-0 systemd[1]: Started libpod-conmon-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope.
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.689585375 +0000 UTC m=+0.046152982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:26:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.854250071 +0000 UTC m=+0.210817738 container init 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.870944068 +0000 UTC m=+0.227511685 container start 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.876988347 +0000 UTC m=+0.233555964 container attach 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:26:55 compute-0 angry_almeida[287456]: 167 167
Dec 05 01:26:55 compute-0 systemd[1]: libpod-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope: Deactivated successfully.
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.885587297 +0000 UTC m=+0.242154904 container died 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e283dbcdfc773cb6d7960df6e3cc33e11d6130c2bf99020ea5167738e7968a97-merged.mount: Deactivated successfully.
Dec 05 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.952864969 +0000 UTC m=+0.309432556 container remove 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 01:26:55 compute-0 systemd[1]: libpod-conmon-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope: Deactivated successfully.
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.096 287122 INFO neutron.common.config [-] Logging enabled!
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.138 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.151 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.151 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.165 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 8dd76c1c-ab01-42af-b35e-2e870841b6ad (UUID: 8dd76c1c-ab01-42af-b35e-2e870841b6ad) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.191 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.197 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.202 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '8dd76c1c-ab01-42af-b35e-2e870841b6ad'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], external_ids={}, name=8dd76c1c-ab01-42af-b35e-2e870841b6ad, nb_cfg_timestamp=1764896443569, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.203 287122 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f64f0638e20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 INFO oslo_service.service [-] Starting 1 workers
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.210 287122 DEBUG oslo_service.service [-] Started child 287490 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.214 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpos3ihnt6/privsep.sock']
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.217 287490 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-956106'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 05 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.249720973 +0000 UTC m=+0.106154310 container create 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.254 287490 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.255 287490 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.255 287490 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.261 287490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.273 287490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
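
Here the worker builds its southbound OVSDB IDL (auto-creating the Chassis.name and Datapath_Binding.tunnel_key indexes) and connects over SSL to ovsdbserver-sb.openstack.svc:6642. A minimal sketch of the same connection through ovsdbapp, following its documented connection pattern; the OvnSbApiIdlImpl class name and the elided SSL setup are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # endpoint taken from the log; SSL key/cert configuration elided
    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    conn = connection.Connection(idl=idl, timeout=180)
    sb_api = impl_idl.OvnSbApiIdlImpl(conn)
    print(sb_api.chassis_list().execute(check_error=True))
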
Dec 05 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.279 287490 INFO eventlet.wsgi.server [-] (287490) wsgi starting up on http:/var/lib/neutron/metadata_proxy
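
The odd single-slash "http:/var/lib/neutron/metadata_proxy" is not corruption: the metadata proxy listens on a UNIX socket, and eventlet prints the socket path directly after "http:". A minimal sketch of an eventlet WSGI server on a UNIX socket, with a placeholder app:

    import socket
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata\n']

    # path from metadata_proxy_socket in the config dump below;
    # the file must not already exist when binding
    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:/var/lib/..."
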
Dec 05 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.215848486 +0000 UTC m=+0.072281883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:26:56 compute-0 systemd[1]: Started libpod-conmon-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope.
Dec 05 01:26:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
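
These kernel lines are informational: the container's bind mounts sit on an xfs filesystem created without the bigtime feature, so inode timestamps saturate at 0x7fffffff seconds (the 32-bit signed time_t maximum). A quick check of where that limit lands:

    import datetime
    limit = 0x7fffffff  # value printed by the kernel above
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
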
Dec 05 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.45980123 +0000 UTC m=+0.316234567 container init 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.4827029 +0000 UTC m=+0.339136237 container start 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.487041051 +0000 UTC m=+0.343474388 container attach 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:26:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.010 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.011 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpos3ihnt6/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.869 287504 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.874 287504 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.876 287504 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.877 287504 INFO oslo.privsep.daemon [-] privsep daemon running as pid 287504
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.016 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[0a119daf-b097-4494-8298-5b906b94100a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
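
This sequence is oslo.privsep bootstrapping: the agent launches privsep-helper via sudo and neutron-rootwrap, the daemon reports uid/gid 0/0 with CAP_SYS_ADMIN (capability 21, matching privsep_namespace.capabilities = [21] in the config dump below), and replies start flowing over the /tmp/.../privsep.sock channel. A minimal sketch of how such a context is declared and used, in the style of neutron.privileged; the names here are illustrative:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # sketch of a context like neutron.privileged.namespace_cmd;
    # pypath must name this module-level object so the daemon can import it
    namespace_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_namespace(name):
        # body executes inside the root privsep daemon, not the agent
        ...
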
Dec 05 01:26:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:26:57 compute-0 ceph-mon[192914]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:57 compute-0 vibrant_goldberg[287498]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:26:57 compute-0 vibrant_goldberg[287498]: --> relative data size: 1.0
Dec 05 01:26:57 compute-0 vibrant_goldberg[287498]: --> All data devices are unavailable
Dec 05 01:26:57 compute-0 systemd[1]: libpod-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Deactivated successfully.
Dec 05 01:26:57 compute-0 systemd[1]: libpod-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Consumed 1.168s CPU time.
Dec 05 01:26:57 compute-0 podman[287479]: 2025-12-05 01:26:57.713856978 +0000 UTC m=+1.570290325 container died 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7-merged.mount: Deactivated successfully.
Dec 05 01:26:57 compute-0 podman[287479]: 2025-12-05 01:26:57.798934158 +0000 UTC m=+1.655367505 container remove 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:26:57 compute-0 sudo[287377]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:57 compute-0 systemd[1]: libpod-conmon-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Deactivated successfully.
Dec 05 01:26:57 compute-0 sudo[287543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:57 compute-0 sudo[287543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:57 compute-0 sudo[287543]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:58 compute-0 sudo[287568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:26:58 compute-0 sudo[287568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:58 compute-0 sudo[287568]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:58 compute-0 sudo[287593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.160 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[15914d36-c181-4361-8d8c-752121762e9b]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.163 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, column=external_ids, values=({'neutron:ovn-metadata-id': '87fd3287-3707-559d-869a-060a9ee7b0a4'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:26:58 compute-0 sudo[287593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:58 compute-0 sudo[287593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.181 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
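
With the privsep reply in, the parent registers itself in the southbound Chassis_Private row: DbAddCommand adds neutron:ovn-metadata-id to external_ids, then DbSetCommand sets neutron:ovn-bridge. Roughly the equivalent calls through the ovsdbapp API, reusing the sb_api handle from the earlier sketch (the exact db_set/if_exists signature is an assumption, inferred from the DbSetCommand repr above):

    chassis_uuid = '8dd76c1c-ab01-42af-b35e-2e870841b6ad'  # record from the txn log
    agent_id = '87fd3287-3707-559d-869a-060a9ee7b0a4'      # neutron:ovn-metadata-id
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_add('Chassis_Private', chassis_uuid, 'external_ids',
                              {'neutron:ovn-metadata-id': agent_id}))
        txn.add(sb_api.db_set('Chassis_Private', chassis_uuid,
                              ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
                              if_exists=True))
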
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.189 287122 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.189 287122 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.200 287122 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.200 287122 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.226 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 05 01:26:58 compute-0 sudo[287618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:26:58 compute-0 sudo[287618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:26:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.831243093 +0000 UTC m=+0.075289787 container create db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.796430969 +0000 UTC m=+0.040477713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:26:58 compute-0 systemd[1]: Started libpod-conmon-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope.
Dec 05 01:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.975809467 +0000 UTC m=+0.219856221 container init db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.993983435 +0000 UTC m=+0.238030129 container start db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:58.999824619 +0000 UTC m=+0.243871393 container attach db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:26:59 compute-0 naughty_proskuriakova[287696]: 167 167
Dec 05 01:26:59 compute-0 systemd[1]: libpod-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope: Deactivated successfully.
Dec 05 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:59.004606712 +0000 UTC m=+0.248653376 container died db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e3175df4e3a9ed826d2cfda6756d237dd93eb7cb7cd62187df3b2348a5c4ff-merged.mount: Deactivated successfully.
Dec 05 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:59.074724134 +0000 UTC m=+0.318770798 container remove db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 05 01:26:59 compute-0 systemd[1]: libpod-conmon-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope: Deactivated successfully.
Dec 05 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.321383793 +0000 UTC m=+0.077013175 container create 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.285657404 +0000 UTC m=+0.041286826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:26:59 compute-0 systemd[1]: Started libpod-conmon-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope.
Dec 05 01:26:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.478666943 +0000 UTC m=+0.234296315 container init 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.497068378 +0000 UTC m=+0.252697760 container start 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.503558529 +0000 UTC m=+0.259187961 container attach 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:26:59 compute-0 ceph-mon[192914]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:26:59 compute-0 podman[158197]: time="2025-12-05T01:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 37300 "" "Go-http-client/1.1"
Dec 05 01:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7680 "" "Go-http-client/1.1"
Dec 05 01:27:00 compute-0 kind_swirles[287733]: {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     "0": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "devices": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "/dev/loop3"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             ],
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_name": "ceph_lv0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_size": "21470642176",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "name": "ceph_lv0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "tags": {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_name": "ceph",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.crush_device_class": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.encrypted": "0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_id": "0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.vdo": "0"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             },
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "vg_name": "ceph_vg0"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         }
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     ],
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     "1": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "devices": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "/dev/loop4"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             ],
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_name": "ceph_lv1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_size": "21470642176",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "name": "ceph_lv1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "tags": {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_name": "ceph",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.crush_device_class": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.encrypted": "0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_id": "1",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.vdo": "0"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             },
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "vg_name": "ceph_vg1"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         }
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     ],
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     "2": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "devices": [
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "/dev/loop5"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             ],
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_name": "ceph_lv2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_size": "21470642176",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "name": "ceph_lv2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "tags": {
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.cluster_name": "ceph",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.crush_device_class": "",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.encrypted": "0",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osd_id": "2",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:                 "ceph.vdo": "0"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             },
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "type": "block",
Dec 05 01:27:00 compute-0 kind_swirles[287733]:             "vg_name": "ceph_vg2"
Dec 05 01:27:00 compute-0 kind_swirles[287733]:         }
Dec 05 01:27:00 compute-0 kind_swirles[287733]:     ]
Dec 05 01:27:00 compute-0 kind_swirles[287733]: }
Dec 05 01:27:00 compute-0 systemd[1]: libpod-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope: Deactivated successfully.
Dec 05 01:27:00 compute-0 podman[287718]: 2025-12-05 01:27:00.302377434 +0000 UTC m=+1.058006766 container died 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:27:00 compute-0 sshd-session[287740]: Accepted publickey for zuul from 192.168.122.30 port 60202 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:27:00 compute-0 systemd-logind[792]: New session 55 of user zuul.
Dec 05 01:27:00 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec 05 01:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0-merged.mount: Deactivated successfully.
Dec 05 01:27:00 compute-0 sshd-session[287740]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:27:00 compute-0 podman[287718]: 2025-12-05 01:27:00.389576823 +0000 UTC m=+1.145206165 container remove 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:27:00 compute-0 systemd[1]: libpod-conmon-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope: Deactivated successfully.
Dec 05 01:27:00 compute-0 sudo[287618]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:00 compute-0 sudo[287763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:27:00 compute-0 sudo[287763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:00 compute-0 sudo[287763]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:00 compute-0 sudo[287826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:27:00 compute-0 sudo[287826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:00 compute-0 sudo[287826]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:00 compute-0 sudo[287860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:27:00 compute-0 sudo[287860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:00 compute-0 sudo[287860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:00 compute-0 sudo[287885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:27:00 compute-0 sudo[287885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.496849336 +0000 UTC m=+0.064062593 container create d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:27:01 compute-0 systemd[1]: Started libpod-conmon-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope.
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.469730447 +0000 UTC m=+0.036943744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:27:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:27:01 compute-0 python3.9[288040]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.707817057 +0000 UTC m=+0.275030344 container init d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:27:01 compute-0 ceph-mon[192914]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.729087932 +0000 UTC m=+0.296301179 container start d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.734034361 +0000 UTC m=+0.301247658 container attach d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:27:01 compute-0 sharp_merkle[288063]: 167 167
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.741274313 +0000 UTC m=+0.308487560 container died d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:27:01 compute-0 systemd[1]: libpod-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope: Deactivated successfully.
Dec 05 01:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6780c3b8233f8f31b301a0cfe43f906960ea4eeee26304bc4898d8b1f3f1e9a2-merged.mount: Deactivated successfully.
Dec 05 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.784161113 +0000 UTC m=+0.351374380 container remove d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:27:01 compute-0 systemd[1]: libpod-conmon-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope: Deactivated successfully.
Dec 05 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.005138753 +0000 UTC m=+0.061958234 container create 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:27:02 compute-0 systemd[1]: Started libpod-conmon-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope.
Dec 05 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:01.983188229 +0000 UTC m=+0.040007790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:27:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.123801152 +0000 UTC m=+0.180620723 container init 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.148971706 +0000 UTC m=+0.205791227 container start 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.154507041 +0000 UTC m=+0.211326602 container attach 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:27:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:02 compute-0 ceph-mon[192914]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.732106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022732157, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3473827, "memory_usage": 3522088, "flush_reason": "Manual Compaction"}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022749879, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3408983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9687, "largest_seqno": 11728, "table_properties": {"data_size": 3399730, "index_size": 5875, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17874, "raw_average_key_size": 19, "raw_value_size": 3381359, "raw_average_value_size": 3683, "num_data_blocks": 266, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897789, "oldest_key_time": 1764897789, "file_creation_time": 1764898022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17848 microseconds, and 7349 cpu microseconds.
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.749961) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3408983 bytes OK
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.749977) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751808) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751821) EVENT_LOG_v1 {"time_micros": 1764898022751816, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3465306, prev total WAL file size 3465306, number of live WAL files 2.
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.753181) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3329KB)], [26(5939KB)]
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022753221, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9491325, "oldest_snapshot_seqno": -1}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3686 keys, 7824079 bytes, temperature: kUnknown
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022791824, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7824079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7795882, "index_size": 17911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 88514, "raw_average_key_size": 24, "raw_value_size": 7725743, "raw_average_value_size": 2095, "num_data_blocks": 775, "num_entries": 3686, "num_filter_entries": 3686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.792048) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7824079 bytes
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.799875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.3 rd, 202.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4200, records dropped: 514 output_compression: NoCompression
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.800716) EVENT_LOG_v1 {"time_micros": 1764898022800588, "job": 10, "event": "compaction_finished", "compaction_time_micros": 38694, "compaction_time_cpu_micros": 16949, "output_level": 6, "num_output_files": 1, "total_output_size": 7824079, "num_input_records": 4200, "num_output_records": 3686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022801770, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022802778, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.752999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
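The compaction summary for JOB 10 above reports read-write-amplify(5.1) and write-amplify(2.3). Those figures follow directly from the byte counts logged for the same job: table #28 (3408983 bytes) is the L0 input, input_data_size is 9491325, and table #29 (7824079 bytes) is the L6 output. A quick check:

    # Back-of-envelope check of the amplification figures RocksDB logged
    # for JOB 10, using byte counts from the log lines above.
    l0_in    = 3_408_983   # table #28, L0 input bytes
    total_in = 9_491_325   # "input_data_size" from compaction_started
    out      = 7_824_079   # table #29, L6 output bytes

    write_amplify = out / l0_in                    # bytes written per L0 byte
    read_write_amplify = (total_in + out) / l0_in  # total I/O per L0 byte

    print(f"write-amplify      ~ {write_amplify:.1f}")       # ~2.3, matches log
    print(f"read-write-amplify ~ {read_write_amplify:.1f}")  # ~5.1, matches log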
Dec 05 01:27:03 compute-0 sudo[288326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqwomhewehtajgflvwkqtvgseftktmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898022.3660884-34-73260536927012/AnsiballZ_command.py'
Dec 05 01:27:03 compute-0 podman[288236]: 2025-12-05 01:27:03.019078215 +0000 UTC m=+0.112585740 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:27:03 compute-0 sudo[288326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:03 compute-0 podman[288237]: 2025-12-05 01:27:03.046288456 +0000 UTC m=+0.122513728 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:27:03 compute-0 podman[288238]: 2025-12-05 01:27:03.060949006 +0000 UTC m=+0.131124009 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 01:27:03 compute-0 podman[288246]: 2025-12-05 01:27:03.069397073 +0000 UTC m=+0.133702491 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:27:03 compute-0 python3.9[288346]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]: {
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_id": 0,
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "type": "bluestore"
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     },
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_id": 1,
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "type": "bluestore"
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     },
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_id": 2,
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:         "type": "bluestore"
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]:     }
Dec 05 01:27:03 compute-0 priceless_zhukovsky[288130]: }
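The JSON that priceless_zhukovsky printed above maps each OSD UUID to its backing device and store type; cephadm gathers this kind of listing when refreshing host device state. A small sketch of consuming that shape (one entry shown for brevity; the log lists three OSDs):

    import json

    # Shape as shown in the log above; the originating command is not
    # visible here, so this is just the printed structure.
    raw = """{
      "8c4de221-4fda-4bb1-b794-fc4329742186": {
        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
        "type": "bluestore"
      }
    }"""

    for uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")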
Dec 05 01:27:03 compute-0 podman[288091]: 2025-12-05 01:27:03.332974416 +0000 UTC m=+1.389793937 container died 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:27:03 compute-0 systemd[1]: libpod-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Deactivated successfully.
Dec 05 01:27:03 compute-0 systemd[1]: libpod-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Consumed 1.169s CPU time.
Dec 05 01:27:03 compute-0 sudo[288326]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8-merged.mount: Deactivated successfully.
Dec 05 01:27:03 compute-0 podman[288091]: 2025-12-05 01:27:03.427214812 +0000 UTC m=+1.484034303 container remove 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:27:03 compute-0 systemd[1]: libpod-conmon-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Deactivated successfully.
Dec 05 01:27:03 compute-0 sudo[287885]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:27:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:27:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:27:03 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d6398f30-efed-499b-ab20-be364d3ef25f does not exist
Dec 05 01:27:03 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d7db7296-23f6-483b-b846-bd305c9e7e53 does not exist
Dec 05 01:27:03 compute-0 sudo[288420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:27:03 compute-0 sudo[288420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:03 compute-0 sudo[288420]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:03 compute-0 sudo[288445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:27:03 compute-0 sudo[288445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:27:03 compute-0 sudo[288445]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:27:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:27:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:05 compute-0 sudo[288595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgcvjtuolzezprjnrvfhoxolehmkvxta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898024.2523324-45-234751787817864/AnsiballZ_systemd_service.py'
Dec 05 01:27:05 compute-0 sudo[288595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:05 compute-0 python3.9[288597]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:27:05 compute-0 systemd[1]: Reloading.
Dec 05 01:27:05 compute-0 ceph-mon[192914]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:05 compute-0 systemd-rc-local-generator[288620]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:27:05 compute-0 systemd-sysv-generator[288626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:27:05 compute-0 sudo[288595]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:08 compute-0 ceph-mon[192914]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:08 compute-0 python3.9[288785]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:27:08 compute-0 podman[288786]: 2025-12-05 01:27:08.752535532 +0000 UTC m=+0.151436377 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4)
Dec 05 01:27:08 compute-0 network[288822]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:27:08 compute-0 network[288823]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:27:08 compute-0 network[288824]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:27:09 compute-0 ceph-mon[192914]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:11 compute-0 ceph-mon[192914]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:13 compute-0 ceph-mon[192914]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:14 compute-0 sudo[289093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfqjmdynyudltmdxbaukiucotbrfgmey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898034.1536703-64-207210756711042/AnsiballZ_systemd_service.py'
Dec 05 01:27:14 compute-0 sudo[289093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:15 compute-0 python3.9[289095]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:15 compute-0 sudo[289093]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:15 compute-0 ceph-mon[192914]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:15 compute-0 sudo[289258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwdlsjegvpvnefcxbfghjchbhmzgrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898035.3269901-64-243135464910096/AnsiballZ_systemd_service.py'
Dec 05 01:27:15 compute-0 sudo[289258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:15 compute-0 podman[289220]: 2025-12-05 01:27:15.948168448 +0000 UTC m=+0.134502934 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, container_name=openstack_network_exporter)
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:27:16
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:16 compute-0 python3.9[289268]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:16 compute-0 sudo[289258]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:16 compute-0 podman[289272]: 2025-12-05 01:27:16.384724408 +0000 UTC m=+0.085366329 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:27:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:16 compute-0 sudo[289444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knuiwbvbpvwuffauontpdxisigbcvgqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898036.4900448-64-93451249771681/AnsiballZ_systemd_service.py'
Dec 05 01:27:16 compute-0 sudo[289444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:17 compute-0 python3.9[289446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:17 compute-0 sudo[289444]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:17 compute-0 ceph-mon[192914]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:18 compute-0 sudo[289597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klkwspqgpyqjcccwpqtoqvjwxdoitnhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898038.0397398-64-71722436973647/AnsiballZ_systemd_service.py'
Dec 05 01:27:18 compute-0 sudo[289597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:18 compute-0 python3.9[289599]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:18 compute-0 sudo[289597]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:19 compute-0 ceph-mon[192914]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:20 compute-0 sudo[289751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkymtxrznlskcjvlsqzmxyhqpayvqust ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898039.2376153-64-186252627322430/AnsiballZ_systemd_service.py'
Dec 05 01:27:20 compute-0 sudo[289751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:21 compute-0 python3.9[289753]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:21 compute-0 sudo[289751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:21 compute-0 ceph-mon[192914]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:22 compute-0 sudo[289904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxlojfamlpurcjpxuhzcovgnezdsighz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898041.4537501-64-173854998502370/AnsiballZ_systemd_service.py'
Dec 05 01:27:22 compute-0 sudo[289904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:22 compute-0 python3.9[289906]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:22 compute-0 sudo[289904]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:22 compute-0 ceph-mon[192914]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:23 compute-0 sudo[290057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslyvzbrhsnypyafaxotyptkjrggdskt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898042.6317148-64-194622038087168/AnsiballZ_systemd_service.py'
Dec 05 01:27:23 compute-0 sudo[290057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:23 compute-0 python3.9[290059]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:27:23 compute-0 sudo[290057]: pam_unix(sudo:session): session closed for user root
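The run of ansible.builtin.systemd_service tasks above (enabled=False, state=stopped) tears down the legacy tripleo_nova_* units one at a time. As a hypothetical standalone equivalent, not the playbook itself, each task amounts to systemctl disable --now for that unit:

    import subprocess

    # Units taken from the systemd_service invocations logged above.
    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for unit in UNITS:
        # "disable --now" = enabled=False + state=stopped in one call
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)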
Dec 05 01:27:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:24 compute-0 sudo[290227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spkkufbmirirplxpnpmcfjrygzximgqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898043.9728782-116-144102898275048/AnsiballZ_file.py'
Dec 05 01:27:24 compute-0 sudo[290227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:24 compute-0 podman[290184]: 2025-12-05 01:27:24.688097531 +0000 UTC m=+0.105554963 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 01:27:24 compute-0 python3.9[290229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:24 compute-0 sudo[290227]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:25 compute-0 ceph-mon[192914]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:25 compute-0 sudo[290379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taqtcofmwzkodqwbcikhnkytepmkyypk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898045.1212418-116-223613582725828/AnsiballZ_file.py'
Dec 05 01:27:25 compute-0 sudo[290379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:25 compute-0 python3.9[290381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:25 compute-0 sudo[290379]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
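
The pg_autoscaler targets above can be reproduced from the logged inputs as usage_ratio × bias × 300, where the factor 300 is inferred from the numbers themselves (consistent with mon_target_pg_per_osd=100 across the 3 OSDs presumably backing this 60 GiB cluster); the "quantized to" figure then folds in power-of-two rounding and per-pool minimums (e.g. the CephFS metadata pool landing on 16), not just this product. A minimal check under those assumptions:

    # Reproducing the "pg target" values logged above. The factor 300
    # (pg_per_osd * osds) is an inference from the logged arithmetic, not a
    # value read from this cluster's configuration.
    def pg_target(usage_ratio: float, bias: float,
                  pg_per_osd: int = 100, osds: int = 3) -> float:
        return usage_ratio * bias * pg_per_osd * osds

    # Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300 ≈ 0.0021557249951162337
    print(pg_target(7.185749983720779e-06, 1.0))
    # Pool 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300 ≈ 0.0006104707950771635
    print(pg_target(5.087256625643029e-07, 4.0))
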
Dec 05 01:27:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:26 compute-0 sudo[290531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlexzlmekkyzieaylmviahcpzkxknspg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898046.152746-116-253362025368161/AnsiballZ_file.py'
Dec 05 01:27:26 compute-0 sudo[290531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:26 compute-0 python3.9[290533]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:26 compute-0 sudo[290531]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:27 compute-0 ceph-mon[192914]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:27 compute-0 sudo[290683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmaepftqbteqwrnguifclszwrgmlmmgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898047.1336083-116-280857046345372/AnsiballZ_file.py'
Dec 05 01:27:27 compute-0 sudo[290683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:27 compute-0 python3.9[290685]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:27 compute-0 sudo[290683]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:28 compute-0 sudo[290835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzwiotnqmewxtkvznkcaxqrwmyhitbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898048.131027-116-128237418082988/AnsiballZ_file.py'
Dec 05 01:27:28 compute-0 sudo[290835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:28 compute-0 python3.9[290837]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:28 compute-0 sudo[290835]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:29 compute-0 ceph-mon[192914]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:29 compute-0 podman[158197]: time="2025-12-05T01:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7276 "" "Go-http-client/1.1"
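
These two GETs are the podman exporter polling the libpod REST API over the Podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter config logged below). A minimal sketch of the same containers/json call, assuming that socket path:

    # Issue the logged libpod API request over the Podman unix socket.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")  # host is only used for the Host header
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
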
Dec 05 01:27:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:30 compute-0 sudo[290987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqbygfovfdxpzxjqccljidsorhmdssm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898049.3967443-116-247819703984999/AnsiballZ_file.py'
Dec 05 01:27:30 compute-0 sudo[290987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:30 compute-0 python3.9[290989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:30 compute-0 sudo[290987]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:27:31 compute-0 sudo[291139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdhhitrflokaiuwgshaxycgwphamttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898051.1998446-116-231300071569049/AnsiballZ_file.py'
Dec 05 01:27:31 compute-0 sudo[291139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:31 compute-0 ceph-mon[192914]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:31 compute-0 python3.9[291141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:31 compute-0 sudo[291139]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:33 compute-0 podman[291266]: 2025-12-05 01:27:33.260616318 +0000 UTC m=+0.102437606 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:27:33 compute-0 sudo[291354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutjubfelvvmwtqjwtgolbvmlqcdtiyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898052.694219-166-190227149463559/AnsiballZ_file.py'
Dec 05 01:27:33 compute-0 sudo[291354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:33 compute-0 podman[291267]: 2025-12-05 01:27:33.286451161 +0000 UTC m=+0.120460551 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:27:33 compute-0 podman[291265]: 2025-12-05 01:27:33.28714978 +0000 UTC m=+0.134842603 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125)
Dec 05 01:27:33 compute-0 podman[291268]: 2025-12-05 01:27:33.323346153 +0000 UTC m=+0.147332192 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 01:27:33 compute-0 python3.9[291370]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:33 compute-0 sudo[291354]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:33 compute-0 ceph-mon[192914]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:34 compute-0 sudo[291526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctignppudpqesitazpvgnvwzshyzpobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898053.6575077-166-24533540669979/AnsiballZ_file.py'
Dec 05 01:27:34 compute-0 sudo[291526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:34 compute-0 python3.9[291528]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:34 compute-0 sudo[291526]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:35 compute-0 sudo[291678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwgtumzsygllqhdcgbvnuqwjhsbbgrri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898054.716335-166-37610937277716/AnsiballZ_file.py'
Dec 05 01:27:35 compute-0 sudo[291678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:35 compute-0 python3.9[291680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:35 compute-0 sudo[291678]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:35 compute-0 ceph-mon[192914]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:36 compute-0 sudo[291830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkgzdiwbrckahkpycpmlepryrwkcaae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898055.7035823-166-244649170432440/AnsiballZ_file.py'
Dec 05 01:27:36 compute-0 sudo[291830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:36 compute-0 python3.9[291832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:36 compute-0 sudo[291830]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:36 compute-0 ceph-mon[192914]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:37 compute-0 sudo[291982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadplfxcjzpcduoxrperijxdombivfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898056.7289925-166-224521699983244/AnsiballZ_file.py'
Dec 05 01:27:37 compute-0 sudo[291982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:37 compute-0 python3.9[291984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:37 compute-0 sudo[291982]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:38 compute-0 sudo[292134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpysdzgvqnzdgmaryglmminoapowzmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898057.710724-166-196283004464945/AnsiballZ_file.py'
Dec 05 01:27:38 compute-0 sudo[292134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:38 compute-0 python3.9[292136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:38 compute-0 sudo[292134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:39 compute-0 podman[292260]: 2025-12-05 01:27:39.439591945 +0000 UTC m=+0.110381999 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:27:39 compute-0 sudo[292301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoqckipwkdylrakgmmzlpdaadlxiqlsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898058.674935-166-8342427040838/AnsiballZ_file.py'
Dec 05 01:27:39 compute-0 sudo[292301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:39 compute-0 ceph-mon[192914]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:39 compute-0 python3.9[292304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:27:39 compute-0 sudo[292301]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:41 compute-0 sudo[292454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvuvoeilowiulpqgolplnnhrekmpeowi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898060.0897067-217-79179649594156/AnsiballZ_command.py'
Dec 05 01:27:41 compute-0 sudo[292454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:41 compute-0 python3.9[292456]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:41 compute-0 sudo[292454]: pam_unix(sudo:session): session closed for user root
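
The shell fragment above guards its actions: certmonger.service is disabled only when currently active, and masked only when no local unit file exists under /etc/systemd/system. A hypothetical Python rendering of the same logic:

    # Same guard structure as the logged /bin/sh snippet: disable-if-active,
    # then mask only in the absence of a local unit override.
    import os, subprocess

    def retire_certmonger(unit: str = "certmonger.service") -> None:
        active = subprocess.run(["systemctl", "is-active", unit]).returncode == 0
        if active:
            subprocess.run(["systemctl", "disable", "--now", unit], check=True)
            if not os.path.exists(f"/etc/systemd/system/{unit}"):
                subprocess.run(["systemctl", "mask", unit], check=True)
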
Dec 05 01:27:41 compute-0 ceph-mon[192914]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:43 compute-0 python3.9[292608]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:27:43 compute-0 ceph-mon[192914]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:44 compute-0 sudo[292758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whatqwglnkgzbeigsbbfjoqvpsyyqntv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898064.0488591-235-237763553516181/AnsiballZ_systemd_service.py'
Dec 05 01:27:44 compute-0 sudo[292758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:44 compute-0 python3.9[292760]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:27:44 compute-0 systemd[1]: Reloading.
Dec 05 01:27:44 compute-0 systemd-sysv-generator[292791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:27:44 compute-0 systemd-rc-local-generator[292788]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:27:45 compute-0 sudo[292758]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:45 compute-0 ceph-mon[192914]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:46 compute-0 sudo[292945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdhmaxbacosjasssygzztgwuvfjngxzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898065.5740929-243-82003144576333/AnsiballZ_command.py'
Dec 05 01:27:46 compute-0 sudo[292945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:46 compute-0 podman[292947]: 2025-12-05 01:27:46.212534006 +0000 UTC m=+0.124321208 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:27:46 compute-0 python3.9[292948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:46 compute-0 sudo[292945]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:46 compute-0 podman[293016]: 2025-12-05 01:27:46.673630494 +0000 UTC m=+0.083228199 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:27:46 compute-0 ceph-mon[192914]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:46 compute-0 sudo[293141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neqidhgcbkycvbhfccgmtkmvopwhzpmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898066.5487974-243-272222959850157/AnsiballZ_command.py'
Dec 05 01:27:47 compute-0 sudo[293141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:47 compute-0 python3.9[293143]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:47 compute-0 sudo[293141]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:48 compute-0 sudo[293294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cphssxjgwsxxikqrbgazhalceoilbuer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898067.49775-243-52492653749329/AnsiballZ_command.py'
Dec 05 01:27:48 compute-0 sudo[293294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:48 compute-0 python3.9[293296]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:48 compute-0 sudo[293294]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:49 compute-0 sudo[293447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikheihzbaljfcctwjkbvichnvuqhavhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898068.556086-243-222436279730565/AnsiballZ_command.py'
Dec 05 01:27:49 compute-0 sudo[293447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:49 compute-0 python3.9[293449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:49 compute-0 sudo[293447]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:49 compute-0 ceph-mon[192914]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:50 compute-0 sudo[293601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsjxbeyvbbxvigfvidpqipangeofcbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898069.5262983-243-192877835222660/AnsiballZ_command.py'
Dec 05 01:27:50 compute-0 sudo[293601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:50 compute-0 python3.9[293603]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:50 compute-0 sudo[293601]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:50 compute-0 sudo[293754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrodcehjzetlvfdxrwbwisrbqrgherx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898070.5567331-243-105714274718840/AnsiballZ_command.py'
Dec 05 01:27:50 compute-0 sudo[293754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:51 compute-0 python3.9[293756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:51 compute-0 sudo[293754]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:51 compute-0 ceph-mon[192914]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:52 compute-0 sudo[293907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvaotdvtsxhtuuouuewsogfdclggdlms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898071.425844-243-263316991102895/AnsiballZ_command.py'
Dec 05 01:27:52 compute-0 sudo[293907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:52 compute-0 ceph-mon[192914]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:52 compute-0 python3.9[293909]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:27:52 compute-0 sudo[293907]: pam_unix(sudo:session): session closed for user root
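
The seven reset-failed invocations above repeat once per removed TripleO unit; reset-failed clears the "failed" state the units left behind after their files were deleted, so they drop out of systemctl --failed listings. Collapsed into one loop (unit names copied from the log):

    # One pass over the units the playbook reset individually above.
    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in UNITS:
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)
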
Dec 05 01:27:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:54 compute-0 podman[294034]: 2025-12-05 01:27:54.97962212 +0000 UTC m=+0.113127455 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 05 01:27:54 compute-0 sudo[294077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tduytchgnvlalztpuwsejetonltgqtud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898073.5794117-297-64473775392569/AnsiballZ_getent.py'
Dec 05 01:27:54 compute-0 sudo[294077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:55 compute-0 python3.9[294081]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 05 01:27:55 compute-0 sudo[294077]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:55 compute-0 ceph-mon[192914]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.153 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.154 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.154 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:27:56 compute-0 sudo[294232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgtsgqxiobpzmjonumoyfwodejdunhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898075.6464424-310-70951072938130/AnsiballZ_setup.py'
Dec 05 01:27:56 compute-0 sudo[294232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:56 compute-0 python3.9[294234]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:27:56 compute-0 sudo[294232]: pam_unix(sudo:session): session closed for user root
Dec 05 01:27:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:27:57 compute-0 ceph-mon[192914]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:57 compute-0 sudo[294316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhascrzoxehiqckzncqpwyzddolcgxme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898075.6464424-310-70951072938130/AnsiballZ_dnf.py'
Dec 05 01:27:57 compute-0 sudo[294316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:27:57 compute-0 python3.9[294318]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:27:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:59 compute-0 sudo[294316]: pam_unix(sudo:session): session closed for user root
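
The dnf task above installs the libvirt/QEMU stack for the compute node; several package names carry stray trailing spaces verbatim from the playbook. Assuming the packages are absent, state=present resolves to roughly the following (a sketch of the equivalent CLI, whitespace stripped):

    # Rough CLI equivalent of the ansible.legacy.dnf call logged above.
    import subprocess

    PKGS = ["libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
            "qemu-kvm", "qemu-img", "libguestfs", "libseccomp", "swtpm",
            "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram"]
    subprocess.run(["dnf", "install", "-y", *PKGS], check=True)
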
Dec 05 01:27:59 compute-0 ceph-mon[192914]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:27:59 compute-0 podman[158197]: time="2025-12-05T01:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7283 "" "Go-http-client/1.1"
Dec 05 01:28:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:00 compute-0 sudo[294469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejdojsnburgpijktafhvklwcsfervnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898079.5829635-322-227791584714555/AnsiballZ_systemd.py'
Dec 05 01:28:00 compute-0 sudo[294469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:01 compute-0 python3.9[294471]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:28:01 compute-0 sudo[294469]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:28:01 compute-0 ceph-mon[192914]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:02 compute-0 sudo[294624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzrokidpgjqytzseqsyjrihcuzjszoqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898081.7122734-322-97417265356165/AnsiballZ_systemd.py'
Dec 05 01:28:02 compute-0 sudo[294624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:02 compute-0 python3.9[294626]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:28:02 compute-0 sudo[294624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:03 compute-0 ceph-mon[192914]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:03 compute-0 podman[294678]: 2025-12-05 01:28:03.69841187 +0000 UTC m=+0.103054133 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:28:03 compute-0 podman[294677]: 2025-12-05 01:28:03.726522497 +0000 UTC m=+0.127831847 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 01:28:03 compute-0 podman[294682]: 2025-12-05 01:28:03.745360964 +0000 UTC m=+0.126785188 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:28:03 compute-0 podman[294680]: 2025-12-05 01:28:03.746736722 +0000 UTC m=+0.133927077 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:28:03 compute-0 sudo[294809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:03 compute-0 sudo[294809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:03 compute-0 sudo[294809]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:03 compute-0 sudo[294857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:28:03 compute-0 sudo[294857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:03 compute-0 sudo[294857]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:03 compute-0 sudo[294910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobarmcgzxagzurhuncbmimturjqgiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898083.1148787-322-4457469964276/AnsiballZ_systemd.py'
Dec 05 01:28:03 compute-0 sudo[294910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:04 compute-0 sudo[294909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:04 compute-0 sudo[294909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:04 compute-0 sudo[294909]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:04 compute-0 sudo[294937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:28:04 compute-0 sudo[294937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:04 compute-0 python3.9[294918]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:28:04 compute-0 sudo[294910]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:04 compute-0 podman[295107]: 2025-12-05 01:28:04.938081156 +0000 UTC m=+0.116132139 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:28:05 compute-0 podman[295107]: 2025-12-05 01:28:05.087029123 +0000 UTC m=+0.265080116 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:28:05 compute-0 ceph-mon[192914]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:05 compute-0 sudo[295294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqwccprvaqsyzltqejhltrjkvthstfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898084.6313908-322-260383505875496/AnsiballZ_systemd.py'
Dec 05 01:28:05 compute-0 sudo[295294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:06 compute-0 python3.9[295299]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:28:06 compute-0 sudo[294937]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:28:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:28:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:06 compute-0 sudo[295294]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:06 compute-0 sudo[295334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:06 compute-0 sudo[295334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:06 compute-0 sudo[295334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:06 compute-0 sudo[295383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:28:06 compute-0 sudo[295383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:06 compute-0 sudo[295383]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:06 compute-0 sudo[295426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:06 compute-0 sudo[295426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:06 compute-0 sudo[295426]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:06 compute-0 sudo[295482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:28:06 compute-0 sudo[295482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:07 compute-0 sudo[295596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgmejraxeejmmdvjwedjzzdrfkxwmgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898086.5214205-351-8285518701735/AnsiballZ_systemd.py'
Dec 05 01:28:07 compute-0 sudo[295596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:07 compute-0 ceph-mon[192914]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:07 compute-0 sudo[295482]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:07 compute-0 python3.9[295598]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e0da4494-2fac-4667-a183-fac897211f20 does not exist
Dec 05 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 62e9c5a8-0114-4358-85b9-01044d7c5811 does not exist
Dec 05 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c400a858-933a-4f79-93a1-47c47866171b does not exist
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:28:07 compute-0 sudo[295596]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:07 compute-0 sudo[295619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:07 compute-0 sudo[295619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:07 compute-0 sudo[295619]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:07 compute-0 sudo[295665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:28:07 compute-0 sudo[295665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:07 compute-0 sudo[295665]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:07 compute-0 sudo[295716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:07 compute-0 sudo[295716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:07 compute-0 sudo[295716]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:07 compute-0 sudo[295770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:28:07 compute-0 sudo[295770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:28:08 compute-0 sudo[295890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxevcujzpgtkeenpuhjmmhpcpetsvgoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898087.7626941-351-121777045028806/AnsiballZ_systemd.py'
Dec 05 01:28:08 compute-0 sudo[295890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.533212999 +0000 UTC m=+0.081067719 container create c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:28:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.501268725 +0000 UTC m=+0.049123465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:08 compute-0 systemd[1]: Started libpod-conmon-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope.
Dec 05 01:28:08 compute-0 python3.9[295892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.714741796 +0000 UTC m=+0.262596536 container init c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.734791027 +0000 UTC m=+0.282645747 container start c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.741810173 +0000 UTC m=+0.289664913 container attach c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:28:08 compute-0 focused_newton[295923]: 167 167
Dec 05 01:28:08 compute-0 systemd[1]: libpod-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope: Deactivated successfully.
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.747877583 +0000 UTC m=+0.295732343 container died c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74d99a2e5f9d6bb2b0b10635d17f3f6455329f5c8e99cf5bcf5a066ffb2823e-merged.mount: Deactivated successfully.
Dec 05 01:28:08 compute-0 sudo[295890]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.840774562 +0000 UTC m=+0.388629292 container remove c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:28:08 compute-0 systemd[1]: libpod-conmon-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope: Deactivated successfully.
Dec 05 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.05056039 +0000 UTC m=+0.083485046 container create 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.014426779 +0000 UTC m=+0.047351505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:09 compute-0 systemd[1]: Started libpod-conmon-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope.
Dec 05 01:28:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.181847362 +0000 UTC m=+0.214772048 container init 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.201716118 +0000 UTC m=+0.234640774 container start 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.208322813 +0000 UTC m=+0.241247499 container attach 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:28:09 compute-0 ceph-mon[192914]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:09 compute-0 podman[296076]: 2025-12-05 01:28:09.739069179 +0000 UTC m=+0.139530084 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:28:09 compute-0 sudo[296137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uudglwezbxoulejwujxzcggsthkdhrvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898089.0758553-351-103648555410470/AnsiballZ_systemd.py'
Dec 05 01:28:09 compute-0 sudo[296137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:10 compute-0 python3.9[296139]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:10 compute-0 sudo[296137]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:10 compute-0 nostalgic_ride[296024]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:28:10 compute-0 nostalgic_ride[296024]: --> relative data size: 1.0
Dec 05 01:28:10 compute-0 nostalgic_ride[296024]: --> All data devices are unavailable
Dec 05 01:28:10 compute-0 systemd[1]: libpod-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Deactivated successfully.
Dec 05 01:28:10 compute-0 podman[295973]: 2025-12-05 01:28:10.44128859 +0000 UTC m=+1.474213246 container died 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:28:10 compute-0 systemd[1]: libpod-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Consumed 1.173s CPU time.
Dec 05 01:28:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f-merged.mount: Deactivated successfully.
Dec 05 01:28:10 compute-0 podman[295973]: 2025-12-05 01:28:10.53386013 +0000 UTC m=+1.566784786 container remove 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:28:10 compute-0 systemd[1]: libpod-conmon-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Deactivated successfully.
Dec 05 01:28:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:10 compute-0 sudo[295770]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:10 compute-0 sudo[296254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:10 compute-0 sudo[296254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:10 compute-0 sudo[296254]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:10 compute-0 sudo[296302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:28:10 compute-0 sudo[296302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:10 compute-0 sudo[296302]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:10 compute-0 sudo[296351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:10 compute-0 sudo[296351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:10 compute-0 sudo[296351]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:10 compute-0 sudo[296401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosptuivgizusdpvenmkscppalyfikpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898090.4620883-351-215101205770259/AnsiballZ_systemd.py'
Dec 05 01:28:10 compute-0 sudo[296401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:11 compute-0 sudo[296403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:28:11 compute-0 sudo[296403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:11 compute-0 python3.9[296409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:11 compute-0 sudo[296401]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.496379244 +0000 UTC m=+0.064980859 container create daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:28:11 compute-0 systemd[1]: Started libpod-conmon-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope.
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.470046077 +0000 UTC m=+0.038647682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:11 compute-0 ceph-mon[192914]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.636404621 +0000 UTC m=+0.205006246 container init daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.656074111 +0000 UTC m=+0.224675696 container start daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.661096031 +0000 UTC m=+0.229697666 container attach daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:28:11 compute-0 unruffled_gauss[296536]: 167 167
Dec 05 01:28:11 compute-0 systemd[1]: libpod-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope: Deactivated successfully.
Dec 05 01:28:11 compute-0 conmon[296536]: conmon daf53dd4cc0081ea6436 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope/container/memory.events
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.668389185 +0000 UTC m=+0.236990790 container died daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5737be733f4411a0415f8cac954e5fdad07a1fb9042ab8262b13b6871f4f5cfa-merged.mount: Deactivated successfully.
Dec 05 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.729553566 +0000 UTC m=+0.298155161 container remove daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:28:11 compute-0 systemd[1]: libpod-conmon-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope: Deactivated successfully.
Dec 05 01:28:11 compute-0 podman[296617]: 2025-12-05 01:28:11.961042542 +0000 UTC m=+0.066460401 container create 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:11.933439399 +0000 UTC m=+0.038857238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:12 compute-0 systemd[1]: Started libpod-conmon-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope.
Dec 05 01:28:12 compute-0 sudo[296673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqqyvfnrczhkzywpynbgkxuyxdgdneex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898091.5495813-351-207226670310272/AnsiballZ_systemd.py'
Dec 05 01:28:12 compute-0 sudo[296673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.151302043 +0000 UTC m=+0.256719972 container init 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.187451555 +0000 UTC m=+0.292869404 container start 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.194109081 +0000 UTC m=+0.299526930 container attach 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:28:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:12 compute-0 python3.9[296679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:12 compute-0 sudo[296673]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]: {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     "0": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "devices": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "/dev/loop3"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             ],
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_name": "ceph_lv0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_size": "21470642176",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "name": "ceph_lv0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "tags": {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_name": "ceph",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.crush_device_class": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.encrypted": "0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_id": "0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.vdo": "0"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             },
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "vg_name": "ceph_vg0"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         }
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     ],
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     "1": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "devices": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "/dev/loop4"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             ],
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_name": "ceph_lv1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_size": "21470642176",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "name": "ceph_lv1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "tags": {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_name": "ceph",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.crush_device_class": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.encrypted": "0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_id": "1",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.vdo": "0"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             },
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "vg_name": "ceph_vg1"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         }
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     ],
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     "2": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "devices": [
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "/dev/loop5"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             ],
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_name": "ceph_lv2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_size": "21470642176",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "name": "ceph_lv2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "tags": {
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.cluster_name": "ceph",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.crush_device_class": "",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.encrypted": "0",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osd_id": "2",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:                 "ceph.vdo": "0"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             },
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "type": "block",
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:             "vg_name": "ceph_vg2"
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:         }
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]:     ]
Dec 05 01:28:12 compute-0 romantic_lichterman[296677]: }
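Annotation: the JSON block above is the per-OSD LVM listing that ceph-volume emits (captured here by cephadm from the short-lived romantic_lichterman helper container): one key per OSD id, each entry describing the backing logical volume, its devices, and the ceph.* tags. A minimal sketch for summarising such a dump, assuming it has been saved to a file (the file name is hypothetical):

    # Summarise a ceph-volume LVM listing like the JSON above: print OSD id,
    # LV path, backing device(s) and OSD fsid.
    import json

    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        osds = json.load(f)

    for osd_id, entries in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            tags = e.get("tags", {})
            print(f"osd.{osd_id}: lv={e['lv_path']} "
                  f"devices={','.join(e['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")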
Dec 05 01:28:13 compute-0 systemd[1]: libpod-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope: Deactivated successfully.
Dec 05 01:28:13 compute-0 podman[296617]: 2025-12-05 01:28:13.039467187 +0000 UTC m=+1.144885046 container died 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a-merged.mount: Deactivated successfully.
Dec 05 01:28:13 compute-0 podman[296617]: 2025-12-05 01:28:13.15825968 +0000 UTC m=+1.263677509 container remove 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:28:13 compute-0 systemd[1]: libpod-conmon-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope: Deactivated successfully.
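Annotation: the create/init/start/attach/died/remove sequence that just completed (and the identical ones for vigorous_cartwright and vigilant_shtern below) is the normal footprint of cephadm's short-lived helper containers. The same lifecycle can be replayed from podman's own event log; a sketch driving the podman CLI from Python, using the container name seen above (JSON field names may vary across podman versions, hence the tolerant .get lookups):

    # Replay the lifecycle events of a short-lived container via `podman events`.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "events", "--since", "5m",
         "--filter", "container=romantic_lichterman",
         "--format", "json", "--stream=false"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():          # one JSON object per line
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))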
Dec 05 01:28:13 compute-0 sudo[296403]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:13 compute-0 sudo[296820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:13 compute-0 sudo[296820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:13 compute-0 sudo[296820]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:13 compute-0 sudo[296879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqtkiqudyhiatdkexvhobcmszosehdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898092.838065-387-256334725310208/AnsiballZ_systemd.py'
Dec 05 01:28:13 compute-0 sudo[296879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:13 compute-0 sudo[296871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:28:13 compute-0 sudo[296871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:13 compute-0 sudo[296871]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:13 compute-0 sudo[296902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:13 compute-0 sudo[296902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:13 compute-0 sudo[296902]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:13 compute-0 ceph-mon[192914]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:13 compute-0 sudo[296927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:28:13 compute-0 sudo[296927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:13 compute-0 python3.9[296894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 05 01:28:13 compute-0 sudo[296879]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.34053304 +0000 UTC m=+0.086511711 container create 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.308770442 +0000 UTC m=+0.054749143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:14 compute-0 systemd[1]: Started libpod-conmon-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope.
Dec 05 01:28:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.484671372 +0000 UTC m=+0.230650063 container init 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.503712055 +0000 UTC m=+0.249690756 container start 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:28:14 compute-0 vigorous_cartwright[297096]: 167 167
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.511263606 +0000 UTC m=+0.257242287 container attach 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:28:14 compute-0 systemd[1]: libpod-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope: Deactivated successfully.
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.513697634 +0000 UTC m=+0.259676335 container died 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:28:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e906e0e3c43dee6130d8afbf3bc2f23fd5c2d0a48dadb23c7c267d04895ed5b-merged.mount: Deactivated successfully.
Dec 05 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.591775048 +0000 UTC m=+0.337753719 container remove 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:28:14 compute-0 systemd[1]: libpod-conmon-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope: Deactivated successfully.
Dec 05 01:28:14 compute-0 sudo[297184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvhyqogwjntdindfukqvoykqbcklmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898094.1848361-395-248653134183199/AnsiballZ_systemd.py'
Dec 05 01:28:14 compute-0 sudo[297184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:14 compute-0 podman[297183]: 2025-12-05 01:28:14.843547861 +0000 UTC m=+0.078282061 container create 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:28:14 compute-0 podman[297183]: 2025-12-05 01:28:14.806456363 +0000 UTC m=+0.041190613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:28:14 compute-0 systemd[1]: Started libpod-conmon-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope.
Dec 05 01:28:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.001872209 +0000 UTC m=+0.236606459 container init 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.018393611 +0000 UTC m=+0.253127791 container start 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.023572086 +0000 UTC m=+0.258306346 container attach 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:28:15 compute-0 python3.9[297192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:15 compute-0 sudo[297184]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:15 compute-0 ceph-mon[192914]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:16 compute-0 sudo[297380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yognxpoeyfpthvjvqwlvfliwolvexavj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898095.4970102-395-63788114113166/AnsiballZ_systemd.py'
Dec 05 01:28:16 compute-0 sudo[297380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]: {
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_id": 0,
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "type": "bluestore"
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     },
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_id": 1,
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "type": "bluestore"
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     },
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_id": 2,
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:         "type": "bluestore"
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]:     }
Dec 05 01:28:16 compute-0 vigilant_shtern[297203]: }
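Annotation: this second dump comes from the `ceph-volume ... raw list --format json` call visible in the 01:28:13 sudo line above; unlike the LVM listing it is keyed by osd_uuid and reports the device-mapper path. A small consistency check between the two listings, assuming both have been saved to files (file names hypothetical):

    # Cross-check the LVM listing against the raw listing above: every OSD fsid
    # from "lvm list" should appear in "raw list" with the same ids.
    import json

    lvm = json.load(open("lvm_list.json"))   # hypothetical captures of the
    raw = json.load(open("raw_list.json"))   # two JSON dumps shown above

    for osd_id, entries in lvm.items():
        for e in entries:
            fsid = e["tags"]["ceph.osd_fsid"]
            r = raw.get(fsid)
            assert r is not None, f"osd.{osd_id} ({fsid}) missing from raw list"
            assert str(r["osd_id"]) == osd_id, f"osd id mismatch for {fsid}"
            assert r["ceph_fsid"] == e["tags"]["ceph.cluster_fsid"]
    print("lvm list and raw list agree")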
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:28:16
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta']
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:28:16 compute-0 systemd[1]: libpod-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Deactivated successfully.
Dec 05 01:28:16 compute-0 systemd[1]: libpod-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Consumed 1.170s CPU time.
Dec 05 01:28:16 compute-0 podman[297183]: 2025-12-05 01:28:16.185032525 +0000 UTC m=+1.419766735 container died 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913-merged.mount: Deactivated successfully.
Dec 05 01:28:16 compute-0 python3.9[297382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:16 compute-0 podman[297183]: 2025-12-05 01:28:16.324418084 +0000 UTC m=+1.559152254 container remove 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:28:16 compute-0 systemd[1]: libpod-conmon-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Deactivated successfully.
Dec 05 01:28:16 compute-0 sudo[296927]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:28:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:28:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c08dfc6d-194e-434a-ad1d-9a23eb9411cd does not exist
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c74ddc19-eb6f-46a4-a2ab-4b510b01a936 does not exist
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:28:16 compute-0 podman[297402]: 2025-12-05 01:28:16.419633347 +0000 UTC m=+0.130888432 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:28:16 compute-0 sudo[297380]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:16 compute-0 sudo[297423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:28:16 compute-0 sudo[297423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:16 compute-0 sudo[297423]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:16 compute-0 sudo[297450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:28:16 compute-0 sudo[297450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:28:16 compute-0 sudo[297450]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:28:17 compute-0 ceph-mon[192914]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:17 compute-0 podman[297557]: 2025-12-05 01:28:17.70747802 +0000 UTC m=+0.113683001 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:28:17 compute-0 sudo[297647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcptyimeomfttedgxrtsgvbjhqqulbde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898097.3743548-395-92758975746924/AnsiballZ_systemd.py'
Dec 05 01:28:17 compute-0 sudo[297647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:18 compute-0 python3.9[297649]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:18 compute-0 sudo[297647]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:19 compute-0 ceph-mon[192914]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:19 compute-0 sudo[297803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttymsqktajbdnpnvmfwhwrzyueeosigs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898099.0829005-395-85931998722189/AnsiballZ_systemd.py'
Dec 05 01:28:19 compute-0 sudo[297803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:19 compute-0 python3.9[297805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:20 compute-0 sudo[297803]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:20 compute-0 sudo[297958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyrkjirvujeadwsikqdxvumjucwzqkln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898100.3548799-395-117886621001341/AnsiballZ_systemd.py'
Dec 05 01:28:20 compute-0 sudo[297958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:21 compute-0 python3.9[297960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:21 compute-0 sudo[297958]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:21 compute-0 ceph-mon[192914]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:22 compute-0 sudo[298113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxcxgdbqsufjxqfpumnwbfqnskyflqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898101.659148-395-17450785758684/AnsiballZ_systemd.py'
Dec 05 01:28:22 compute-0 sudo[298113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:22 compute-0 python3.9[298115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:22 compute-0 sudo[298113]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:23 compute-0 sudo[298268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxsftmcuouodjltcaoenkzftqrytcijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898102.988644-395-6499912145794/AnsiballZ_systemd.py'
Dec 05 01:28:23 compute-0 sudo[298268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:23 compute-0 ceph-mon[192914]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:23 compute-0 python3.9[298270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:24 compute-0 sudo[298268]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:24 compute-0 ceph-mon[192914]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:24 compute-0 sudo[298423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxiuixgptewgmkpddrlcsgixjonlnqro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898104.3271158-395-8106755613886/AnsiballZ_systemd.py'
Dec 05 01:28:24 compute-0 sudo[298423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:25 compute-0 python3.9[298425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:25 compute-0 sudo[298423]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:25 compute-0 podman[298427]: 2025-12-05 01:28:25.326646103 +0000 UTC m=+0.134280147 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:28:26 compute-0 sudo[298596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvexyalngkgpscatewzurgstowebftzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898105.5381966-395-208116785489482/AnsiballZ_systemd.py'
Dec 05 01:28:26 compute-0 sudo[298596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
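Annotation: the pg_autoscaler arithmetic above can be verified by hand. Each "pg target" equals usage_ratio x bias x 300, where 300 is consistent with the default mon_target_pg_per_osd of 100 across this host's three OSDs; that factor and the rule are inferences from these lines, not taken from Ceph source. A worked check of the three non-zero pools:

    # Reproduce the pg_autoscaler "pg target" numbers printed above.
    # BUDGET = mon_target_pg_per_osd * OSD count = 100 * 3 is an inference that
    # makes every log line come out exactly. The "quantized to" values further
    # depend on the pool's current pg_num and the autoscaler's change
    # threshold, which this sketch does not model.
    cases = [
        # (pool, usage_ratio, bias, pg_target_from_log)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    ]
    BUDGET = 100 * 3
    for pool, ratio, bias, expected in cases:
        target = ratio * bias * BUDGET
        assert abs(target - expected) < 1e-12, (pool, target, expected)
        print(f"{pool}: pg target {target:.6g} (matches log)")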
Dec 05 01:28:26 compute-0 python3.9[298598]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:26 compute-0 sudo[298596]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:27 compute-0 sudo[298751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqtwdsctmrwlnvdjmpspeamgwasvixe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898106.8508072-395-182514076508580/AnsiballZ_systemd.py'
Dec 05 01:28:27 compute-0 sudo[298751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:27 compute-0 ceph-mon[192914]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:27 compute-0 python3.9[298753]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:27 compute-0 sudo[298751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:28 compute-0 sudo[298906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlnqjruryrpcbnjbozuncvqycmjbhbps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898108.2091172-395-168001485861494/AnsiballZ_systemd.py'
Dec 05 01:28:28 compute-0 sudo[298906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:29 compute-0 python3.9[298908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:29 compute-0 sudo[298906]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:29 compute-0 ceph-mon[192914]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:29 compute-0 podman[158197]: time="2025-12-05T01:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec 05 01:28:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:30 compute-0 sudo[299061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczczzpxwhyfjuzhsaiejlilsckmxbgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898110.0747163-395-190051558809904/AnsiballZ_systemd.py'
Dec 05 01:28:30 compute-0 sudo[299061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:30 compute-0 python3.9[299063]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:31 compute-0 sudo[299061]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:28:31 compute-0 ceph-mon[192914]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:32 compute-0 sudo[299216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxbgypdfltedjlawplefexnzduxwqdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898111.3509445-395-9871311035722/AnsiballZ_systemd.py'
Dec 05 01:28:32 compute-0 sudo[299216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:32 compute-0 python3.9[299218]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:32 compute-0 sudo[299216]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:33 compute-0 sudo[299371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzspzkokecofficnxirjpudvegwujyhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898113.0232947-395-128344062701706/AnsiballZ_systemd.py'
Dec 05 01:28:33 compute-0 sudo[299371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:33 compute-0 ceph-mon[192914]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:33 compute-0 python3.9[299373]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 05 01:28:34 compute-0 sudo[299371]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:34 compute-0 podman[299376]: 2025-12-05 01:28:34.026149176 +0000 UTC m=+0.082593002 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:28:34 compute-0 podman[299375]: 2025-12-05 01:28:34.032054001 +0000 UTC m=+0.088329852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:28:34 compute-0 podman[299377]: 2025-12-05 01:28:34.070858906 +0000 UTC m=+0.121301354 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 01:28:34 compute-0 podman[299378]: 2025-12-05 01:28:34.07741557 +0000 UTC m=+0.126218362 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:28:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:34 compute-0 sudo[299611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cquvavfcsthycoqwzufjvoaiukxdxmyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898114.4235387-497-143415907033594/AnsiballZ_file.py'
Dec 05 01:28:34 compute-0 sudo[299611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:35 compute-0 python3.9[299613]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:35 compute-0 sudo[299611]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:35 compute-0 ceph-mon[192914]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:35 compute-0 sudo[299763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnmxvokajylrtpumhbgdeljfljkupyaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898115.39949-497-235893288225540/AnsiballZ_file.py'
Dec 05 01:28:35 compute-0 sudo[299763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:36 compute-0 python3.9[299765]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:36 compute-0 sudo[299763]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:36 compute-0 sudo[299915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubhzvamyujwqdraohifewogutogziyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898116.4496715-497-21189811059655/AnsiballZ_file.py'
Dec 05 01:28:36 compute-0 sudo[299915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:37 compute-0 python3.9[299917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:37 compute-0 sudo[299915]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:37 compute-0 ceph-mon[192914]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:37 compute-0 sudo[300067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyyhzxqznrxsrtlbtkxfiyhynmoqbwhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898117.4352667-497-108573542944217/AnsiballZ_file.py'
Dec 05 01:28:37 compute-0 sudo[300067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:38 compute-0 python3.9[300069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:38 compute-0 sudo[300067]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:38 compute-0 ceph-mon[192914]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:38 compute-0 sudo[300219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxpnxpbbsiqxgfqhpbkpvnztgzgfhxin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898118.4428155-497-39687464976120/AnsiballZ_file.py'
Dec 05 01:28:38 compute-0 sudo[300219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:39 compute-0 python3.9[300221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:39 compute-0 sudo[300219]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:40 compute-0 sudo[300384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjpxmztixfywxeqwhkyfgxwzjaxsiqvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898119.5036912-497-205370636360056/AnsiballZ_file.py'
Dec 05 01:28:40 compute-0 sudo[300384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:40 compute-0 podman[300345]: 2025-12-05 01:28:40.091809502 +0000 UTC m=+0.128036932 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:28:40 compute-0 python3.9[300389]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:28:40 compute-0 sudo[300384]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:41 compute-0 sudo[300541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvdrgziaunnddeqitnrhnseugjktdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898120.6020617-540-54326828842842/AnsiballZ_stat.py'
Dec 05 01:28:41 compute-0 sudo[300541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:41 compute-0 python3.9[300543]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:41 compute-0 sudo[300541]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:41 compute-0 ceph-mon[192914]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.563 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.564 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.573 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.576 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.576 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.577 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.577 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.575 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.578 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.580 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.583 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.584 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.585 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.586 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.587 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.589 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
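[Editor's note] The ceilometer entries above record one complete polling cycle on a node with no instances: for each pollster the agent runs discovery through the local_instances method, logs a skip when discovery returns nothing, and still marks every pollster as finished at the end of the task. A minimal sketch of that control flow, using hypothetical names (run_polling_task, discover) rather than the real manager code in ceilometer/polling/manager.py:

    def run_polling_task(pollsters, discover):
        for name in pollsters:
            resources = discover("local_instances")  # discovery method from the log
            if resources:
                pass  # collect and publish one sample per resource here
            else:
                print(f"Skip pollster {name}, no resources found this cycle")
            print(f"Finished processing pollster [{name}]")

    # No local instances exist on this node yet, so every pollster is skipped:
    run_polling_task(["disk.device.usage", "power.state"], lambda method: [])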
Dec 05 01:28:42 compute-0 sudo[300620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfmzxppsgyoprztsllhvhrfdjfxfegbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898120.6020617-540-54326828842842/AnsiballZ_file.py'
Dec 05 01:28:42 compute-0 sudo[300620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:42 compute-0 python3.9[300622]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:42 compute-0 sudo[300620]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:43 compute-0 ceph-mon[192914]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:44 compute-0 sudo[300772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klqbxhriafqohlcyhqvyombvlpqqtant ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898123.2106926-540-247059441146005/AnsiballZ_stat.py'
Dec 05 01:28:44 compute-0 sudo[300772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:44 compute-0 python3.9[300774]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:44 compute-0 sudo[300772]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:45 compute-0 sudo[300850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xczreikwaojmvdnaoedvfzxmjlcowptl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898123.2106926-540-247059441146005/AnsiballZ_file.py'
Dec 05 01:28:45 compute-0 sudo[300850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:45 compute-0 python3.9[300852]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:45 compute-0 sudo[300850]: pam_unix(sudo:session): session closed for user root
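[Editor's note] The sudo/stat/file triples around the libvirt configuration files all follow one Ansible pattern: a stat task reports the deployed file's SHA-1 checksum, then a file task enforces owner libvirt, group libvirt, mode 0640 without copying content (force=False, state=file). A rough Python equivalent of what that enforcement step does (a sketch, not the Ansible module itself):

    import grp
    import hashlib
    import os
    import pwd

    def enforce(path, owner="libvirt", group="libvirt", mode=0o640):
        with open(path, "rb") as f:
            checksum = hashlib.sha1(f.read()).hexdigest()  # what the stat task reports
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        os.chmod(path, mode)
        return checksum

    # enforce("/etc/libvirt/virtnodedevd.conf")  # needs root, hence the sudo sessions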
Dec 05 01:28:45 compute-0 ceph-mon[192914]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:28:46 compute-0 sudo[301002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmktbyjqfdgqzinytqwupaacyntwzswt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898125.7954156-540-176168338117759/AnsiballZ_stat.py'
Dec 05 01:28:46 compute-0 sudo[301002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:46 compute-0 python3.9[301004]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:46 compute-0 sudo[301002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:46 compute-0 podman[301005]: 2025-12-05 01:28:46.701135658 +0000 UTC m=+0.110748009 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
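[Editor's note] The podman health_status events here and below are emitted by podman's periodic healthcheck runs; the config_data field shows each container's test command and healthcheck mount. One way to read the same state back from the CLI (a sketch; the exact Go-template field path can differ between podman versions):

    import subprocess

    def health(container):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # health("openstack_network_exporter") -> "healthy", matching the event above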
Dec 05 01:28:47 compute-0 sudo[301101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmjbrlhwqacyoowlcgrurlqppyzmhse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898125.7954156-540-176168338117759/AnsiballZ_file.py'
Dec 05 01:28:47 compute-0 sudo[301101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:47 compute-0 python3.9[301103]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:47 compute-0 sudo[301101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:47 compute-0 ceph-mon[192914]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:48 compute-0 podman[301227]: 2025-12-05 01:28:48.177587738 +0000 UTC m=+0.112736245 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:28:48 compute-0 sudo[301268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvyykvfeonbbcumapxbbsfnxzqrqklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898127.5495234-540-61608950269710/AnsiballZ_stat.py'
Dec 05 01:28:48 compute-0 sudo[301268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:48 compute-0 python3.9[301277]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:48 compute-0 sudo[301268]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:48 compute-0 sudo[301353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqrrbbmegvbsrrdjhvwnbgshgkrkaeli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898127.5495234-540-61608950269710/AnsiballZ_file.py'
Dec 05 01:28:48 compute-0 sudo[301353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:48 compute-0 python3.9[301355]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:49 compute-0 sudo[301353]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:49 compute-0 ceph-mon[192914]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:49 compute-0 sudo[301506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lagvpjtiefkbwpwlucrlssximtzeogwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898129.2544646-540-217888994601511/AnsiballZ_stat.py'
Dec 05 01:28:49 compute-0 sudo[301506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:49 compute-0 python3.9[301508]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:50 compute-0 sudo[301506]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:50 compute-0 sudo[301584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahmolpmyckfcptdsqxhkptxcujwoqrkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898129.2544646-540-217888994601511/AnsiballZ_file.py'
Dec 05 01:28:50 compute-0 sudo[301584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:50 compute-0 python3.9[301586]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:50 compute-0 sudo[301584]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:51 compute-0 sudo[301736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjozkdibqftdygakmgcclvtyypfxhun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898130.918176-540-120599964709704/AnsiballZ_stat.py'
Dec 05 01:28:51 compute-0 sudo[301736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:51 compute-0 python3.9[301738]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:51 compute-0 ceph-mon[192914]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:51 compute-0 sudo[301736]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:52 compute-0 sudo[301814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegshhvgoehdsuncvykndobltctibadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898130.918176-540-120599964709704/AnsiballZ_file.py'
Dec 05 01:28:52 compute-0 sudo[301814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:52 compute-0 python3.9[301816]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:52 compute-0 sudo[301814]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:52 compute-0 ceph-mon[192914]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:53 compute-0 sudo[301966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmkbyqqeokykxeuzbyaumxfnfrttucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898132.7198446-540-195380613080955/AnsiballZ_stat.py'
Dec 05 01:28:53 compute-0 sudo[301966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:53 compute-0 python3.9[301968]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:53 compute-0 sudo[301966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:53 compute-0 sudo[302044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjflimqxvinzrupvuoesntkozipmrvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898132.7198446-540-195380613080955/AnsiballZ_file.py'
Dec 05 01:28:53 compute-0 sudo[302044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:54 compute-0 python3.9[302046]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:54 compute-0 sudo[302044]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:55 compute-0 ceph-mon[192914]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:55 compute-0 podman[302170]: 2025-12-05 01:28:55.675517029 +0000 UTC m=+0.104980038 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:28:55 compute-0 sudo[302213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdyldlufuznnksvtdrrweychfdzodbcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898135.0852008-540-111777199044762/AnsiballZ_stat.py'
Dec 05 01:28:55 compute-0 sudo[302213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:55 compute-0 python3.9[302217]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:28:55 compute-0 sudo[302213]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.155 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.155 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.156 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
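[Editor's note] The three lockutils lines are the acquire/held/release trace that oslo.concurrency prints around a method guarded by its synchronized decorator; the waited/held durations (0.001s / 0.000s) come from that same wrapper. The pattern in use, in outline:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # runs with the named lock held; the wrapper logs wait and hold times
            pass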
Dec 05 01:28:56 compute-0 sudo[302294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzwimragrifsyczdmbhhxfctjrjpwcte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898135.0852008-540-111777199044762/AnsiballZ_file.py'
Dec 05 01:28:56 compute-0 sudo[302294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:56 compute-0 python3.9[302296]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:56 compute-0 sudo[302294]: pam_unix(sudo:session): session closed for user root
Dec 05 01:28:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:28:57 compute-0 ceph-mon[192914]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:58 compute-0 sudo[302446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnimiczmqupjovdslhocfipcyfhraaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898137.410251-629-166276891065518/AnsiballZ_command.py'
Dec 05 01:28:58 compute-0 sudo[302446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:58 compute-0 python3.9[302448]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 05 01:28:58 compute-0 sudo[302446]: pam_unix(sudo:session): session closed for user root
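[Editor's note] The command module call above seeds libvirt's SASL database: it registers the credential migration@openstack in /etc/libvirt/passwd.db, reading the password from stdin (-p). Because the task was not marked no_log, the literal password (12345678) is visible in this journal. The equivalent call from Python:

    import subprocess

    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db", "-p",
         "-a", "libvirt", "-u", "openstack", "migration"],
        input="12345678", text=True, check=True)  # password as recorded in this log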
Dec 05 01:28:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:59 compute-0 sudo[302599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhemvtsskoppslykgytzdbrjerbzwdwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898138.7127204-638-110056990417672/AnsiballZ_file.py'
Dec 05 01:28:59 compute-0 sudo[302599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:28:59 compute-0 python3.9[302601]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:28:59 compute-0 sudo[302599]: pam_unix(sudo:session): session closed for user root
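[Editor's note] This file task, and the matching ones for the other virt*.socket units below, creates a systemd drop-in directory: any *.conf placed in <unit>.socket.d/ is merged over the packaged unit on the next daemon-reload. A hypothetical override to illustrate the mechanism (the real drop-in content comes from later tasks not shown here):

    from pathlib import Path

    d = Path("/etc/systemd/system/virtlogd.socket.d")
    d.mkdir(parents=True, exist_ok=True)      # what the file task just did
    (d / "override.conf").write_text(
        "[Socket]\nSocketMode=0660\n")        # illustrative setting only
    # followed by: systemctl daemon-reload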
Dec 05 01:28:59 compute-0 ceph-mon[192914]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:28:59 compute-0 podman[158197]: time="2025-12-05T01:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7269 "" "Go-http-client/1.1"
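[Editor's note] The two GET lines are podman's REST service answering libpod requests on its unix socket; podman_exporter polls /libpod/containers/json and .../stats this way. A minimal client for the same endpoint (http.client over AF_UNIX; socket path as mounted in the podman_exporter config elsewhere in this log):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)  # 200, as in the access-log line above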
Dec 05 01:29:00 compute-0 sudo[302751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ensltkxcjummxfevsildcgikhlfdmixn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898139.7897334-638-3744265061747/AnsiballZ_file.py'
Dec 05 01:29:00 compute-0 sudo[302751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:00 compute-0 python3.9[302753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:00 compute-0 sudo[302751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:01 compute-0 sudo[302903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yruzfuzrmqfoztyjjzlgbpoxmolhblmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898140.806945-638-94189951317624/AnsiballZ_file.py'
Dec 05 01:29:01 compute-0 sudo[302903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:29:01 compute-0 openstack_network_exporter[160350]: 
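[Editor's note] The openstack_network_exporter errors above share a cause: its ovs-appctl-style calls need the daemons' control sockets (*.ctl files) under the OVS/OVN run directories, and none exist yet on this node, so the ovsdb-server and ovn-northd lookups fail; the dpif-netdev calls additionally fail because no userspace datapath exists. A quick check for those sockets (paths per the exporter's volume mounts earlier in this log; an assumption, not taken from the exporter's source):

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, found or "no control socket files found")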
Dec 05 01:29:01 compute-0 python3.9[302905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:01 compute-0 sudo[302903]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:01 compute-0 ceph-mon[192914]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:02 compute-0 sudo[303055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amcnuxkemlpvwgwqngphududbapkicgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898141.8816295-638-201341462227710/AnsiballZ_file.py'
Dec 05 01:29:02 compute-0 sudo[303055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:02 compute-0 python3.9[303057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:02 compute-0 sudo[303055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:03 compute-0 sudo[303207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyrcydpiokubehrsntwvulzkzwrogdlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898142.9515746-638-251510646643205/AnsiballZ_file.py'
Dec 05 01:29:03 compute-0 sudo[303207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:03 compute-0 python3.9[303209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:03 compute-0 ceph-mon[192914]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:03 compute-0 sudo[303207]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:04 compute-0 podman[303334]: 2025-12-05 01:29:04.589560961 +0000 UTC m=+0.098786554 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:29:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:04 compute-0 sudo[303417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielvymakbwrhejxrgwjsvjxlbclxzlgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898143.990097-638-225162799742657/AnsiballZ_file.py'
Dec 05 01:29:04 compute-0 sudo[303417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:04 compute-0 podman[303335]: 2025-12-05 01:29:04.603112211 +0000 UTC m=+0.104701610 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 05 01:29:04 compute-0 podman[303336]: 2025-12-05 01:29:04.626791723 +0000 UTC m=+0.131178170 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:29:04 compute-0 podman[303333]: 2025-12-05 01:29:04.640305021 +0000 UTC m=+0.152352793 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 05 01:29:04 compute-0 ceph-mon[192914]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:04 compute-0 python3.9[303439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:04 compute-0 sudo[303417]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:05 compute-0 sudo[303593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snsiwkqswpnfoeafuxssjfykojwagcgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898145.122643-638-90272763919324/AnsiballZ_file.py'
Dec 05 01:29:05 compute-0 sudo[303593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:05 compute-0 python3.9[303595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:05 compute-0 sudo[303593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:06 compute-0 sudo[303745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctllyqcpuunoubmluowbevhuppcujjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898146.1412885-638-231169295307885/AnsiballZ_file.py'
Dec 05 01:29:06 compute-0 sudo[303745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:06 compute-0 python3.9[303747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:06 compute-0 sudo[303745]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:07 compute-0 ceph-mon[192914]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:08 compute-0 sudo[303897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dapdwqdzgczinjxuebuwhyteebxekesv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898147.180601-638-17197764758178/AnsiballZ_file.py'
Dec 05 01:29:08 compute-0 sudo[303897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:08 compute-0 python3.9[303899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:08 compute-0 sudo[303897]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:09 compute-0 sudo[304049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boyhbpjajnmjsvrnrgkngvgncmmpowzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898148.861408-638-75290381184226/AnsiballZ_file.py'
Dec 05 01:29:09 compute-0 sudo[304049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:09 compute-0 ceph-mon[192914]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:09 compute-0 python3.9[304051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:09 compute-0 sudo[304049]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:10 compute-0 podman[304134]: 2025-12-05 01:29:10.746522494 +0000 UTC m=+0.148954036 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, config_id=edpm)
Dec 05 01:29:10 compute-0 sudo[304220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvcfkdiiumqixylptqchhkiwuwunazf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898150.3854928-638-85396516494408/AnsiballZ_file.py'
Dec 05 01:29:10 compute-0 sudo[304220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:11 compute-0 python3.9[304222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:11 compute-0 sudo[304220]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:11 compute-0 ceph-mon[192914]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:11 compute-0 sudo[304372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wktitccdawqfnrayyxrgegpadvygvufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898151.4259365-638-89647308938764/AnsiballZ_file.py'
Dec 05 01:29:11 compute-0 sudo[304372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:12 compute-0 python3.9[304374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:12 compute-0 sudo[304372]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:13 compute-0 sudo[304524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncripqqxwuoyddmmexanrkmvjrbuoswg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898152.6583474-638-54508353079201/AnsiballZ_file.py'
Dec 05 01:29:13 compute-0 sudo[304524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:13 compute-0 python3.9[304526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:13 compute-0 sudo[304524]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:13 compute-0 ceph-mon[192914]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:14 compute-0 sudo[304676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukpwfialmvlsnfqqkzbozzhkhwvmcmsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898153.7043982-638-166398791137592/AnsiballZ_file.py'
Dec 05 01:29:14 compute-0 sudo[304676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:14 compute-0 python3.9[304678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:14 compute-0 sudo[304676]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:15 compute-0 sudo[304828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcokqehipowdlubvdkduicpcyxuadvbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898154.664034-737-171336190797889/AnsiballZ_stat.py'
Dec 05 01:29:15 compute-0 sudo[304828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:15 compute-0 python3.9[304830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:15 compute-0 sudo[304828]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:15 compute-0 ceph-mon[192914]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:15 compute-0 sudo[304906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxyrkidbjoxixdewxblizchuppuihlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898154.664034-737-171336190797889/AnsiballZ_file.py'
Dec 05 01:29:15 compute-0 sudo[304906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:15 compute-0 python3.9[304908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:16 compute-0 sudo[304906]: pam_unix(sudo:session): session closed for user root
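
The paired `ansible.legacy.stat` / `ansible.legacy.file` invocations around `override.conf` are the back half of an Ansible template task rendering `libvirt-socket.unit.j2`: stat SHA-1-checksums the destination, and when the rendered content already matches, only ownership and mode are re-asserted. A sketch of the checksum side, with the destination path taken from the log:

    import hashlib

    def sha1_of(path: str) -> str:
        # ansible.legacy.stat with get_checksum=True, checksum_algorithm=sha1
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1_of("/etc/systemd/system/virtlogd.socket.d/override.conf"))
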
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:29:16
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
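
This `[balancer INFO root]` block is one scheduled optimizer pass: upmap mode, a 5% misplaced-PG budget, and `prepared 0/10 changes`, i.e. the listed pools are already balanced and no upmap entries were generated. A sketch of checking the same state from a client node, assuming an admin keyring is available:

    import json
    import subprocess

    # "active": true plus no fresh optimize plan matches "prepared 0/10 changes".
    raw = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(raw)
    print(status.get("active"), status.get("mode"))
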
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:29:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:16 compute-0 sudo[305010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:16 compute-0 sudo[305010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:16 compute-0 sudo[305010]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:16 compute-0 ceph-mon[192914]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:16 compute-0 sudo[305106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyxngjryvpxrspiqxheczpgfhmwruavi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898156.2658887-737-271633850822874/AnsiballZ_stat.py'
Dec 05 01:29:16 compute-0 sudo[305063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:29:16 compute-0 sudo[305106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:16 compute-0 sudo[305063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:16 compute-0 sudo[305063]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:16 compute-0 podman[305109]: 2025-12-05 01:29:16.976451717 +0000 UTC m=+0.110165582 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Dec 05 01:29:16 compute-0 sudo[305121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:16 compute-0 sudo[305121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:16 compute-0 sudo[305121]: pam_unix(sudo:session): session closed for user root
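
The recurring `ceph-admin` sudo entries (`/bin/true`, then `/bin/which python3`) are cephadm's per-host connection probe: the mgr first verifies that passwordless sudo works, then locates an interpreter to run its pushed script with. Roughly equivalent, as a sketch over plain ssh (the hostname is illustrative):

    import subprocess

    def probe(host: str) -> str:
        # Step 1: confirm passwordless sudo; step 2: find python3 for cephadm.
        subprocess.run(["ssh", host, "sudo", "/bin/true"], check=True)
        return subprocess.run(
            ["ssh", host, "sudo", "/bin/which", "python3"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    print(probe("compute-0"))
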
Dec 05 01:29:17 compute-0 python3.9[305110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:17 compute-0 sudo[305156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:29:17 compute-0 sudo[305156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:17 compute-0 sudo[305106]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:17 compute-0 sudo[305270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsolucnhamgntemfoknwinehshouoiwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898156.2658887-737-271633850822874/AnsiballZ_file.py'
Dec 05 01:29:17 compute-0 sudo[305270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:17 compute-0 python3.9[305274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:17 compute-0 sudo[305156]: pam_unix(sudo:session): session closed for user root
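
`cephadm ... gather-facts` (the sudo command at 01:29:17) is the mgr's host-inventory collection; it prints one JSON document of hardware and OS facts. A sketch of calling it directly, assuming cephadm is installed on the host; the exact field names are an assumption and vary by release:

    import json
    import subprocess

    facts = json.loads(
        subprocess.run(
            ["cephadm", "gather-facts"],
            check=True, capture_output=True, text=True,
        ).stdout
    )
    # Field names below are assumptions based on typical gather-facts output.
    print(facts.get("hostname"), facts.get("memory_total_kb"))
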
Dec 05 01:29:17 compute-0 sudo[305270]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3a4186c5-7e9b-45e5-94d9-c6cd27b6f618 does not exist
Dec 05 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 378c0e33-e4b2-4195-bab0-a78b4aaf5b8a does not exist
Dec 05 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8edfd303-9bc4-494f-9b33-329afd70a601 does not exist
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
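
Each mgr-issued command appears twice in the mon log: once via `handle_command` plus the `log_channel(audit) ... dispatch` entry, and again as the bare `from=... dispatch` line when the audit channel is flushed to the cluster log. The `config generate-minimal-conf` calls are cephadm refreshing the stripped-down `ceph.conf` it distributes to managed hosts; the same document can be produced by hand:

    import subprocess

    # Prints a [global] section carrying only the fsid and mon_host list.
    print(subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout)
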
Dec 05 01:29:17 compute-0 sudo[305313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:17 compute-0 sudo[305313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:17 compute-0 sudo[305313]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:17 compute-0 sudo[305338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:29:17 compute-0 sudo[305338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:17 compute-0 sudo[305338]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:18 compute-0 sudo[305386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:18 compute-0 sudo[305386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:18 compute-0 sudo[305386]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:18 compute-0 sudo[305440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:29:18 compute-0 sudo[305440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
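
This is the OSD-creation path: cephadm wraps `ceph-volume lvm batch` in a one-shot ceph container, passing the cluster config and keyring on stdin (`--config-json -`). `--no-auto` is used because the drive group supplies pre-created LVs (`ceph_vg0/ceph_lv0` and friends), and `--no-systemd` because unit management stays with cephadm. A dry-run sketch of the same batch call, assuming ceph-volume is reachable directly (in practice it runs inside the container):

    import subprocess

    # --report previews what the batch would do without creating OSDs.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2",
         "--report", "--format", "json"],
        check=True,
    )
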
Dec 05 01:29:18 compute-0 podman[305514]: 2025-12-05 01:29:18.416680263 +0000 UTC m=+0.081403708 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:29:18 compute-0 sudo[305573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfpqumneygurdukrtwgbsisdisoaknlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898157.943445-737-33026701045781/AnsiballZ_stat.py'
Dec 05 01:29:18 compute-0 sudo[305573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:18 compute-0 podman[305603]: 2025-12-05 01:29:18.627554411 +0000 UTC m=+0.050166754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:18 compute-0 ceph-mon[192914]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:18 compute-0 podman[305603]: 2025-12-05 01:29:18.956563194 +0000 UTC m=+0.379175547 container create 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:29:19 compute-0 systemd[1]: Started libpod-conmon-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope.
Dec 05 01:29:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.139043698 +0000 UTC m=+0.561656061 container init 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.155373385 +0000 UTC m=+0.577985748 container start 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.162653359 +0000 UTC m=+0.585265732 container attach 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:19 compute-0 interesting_brahmagupta[305619]: 167 167
Dec 05 01:29:19 compute-0 systemd[1]: libpod-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope: Deactivated successfully.
Dec 05 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.166880967 +0000 UTC m=+0.589493320 container died 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5853d4ba12680505c24186270451c1de0f78e9b2bf4319bd3caee503eea61d4e-merged.mount: Deactivated successfully.
Dec 05 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.242193964 +0000 UTC m=+0.664806287 container remove 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:29:19 compute-0 python3.9[305588]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:19 compute-0 systemd[1]: libpod-conmon-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope: Deactivated successfully.
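
The throwaway container `interesting_brahmagupta` exists only to print `167 167`, the uid/gid of the `ceph` user inside the image; cephadm needs that pair so files it creates on the host match the container's ownership. A sketch of the same probe; the `stat` target path is an assumption about how cephadm derives the ids:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image which uid/gid owns /var/lib/ceph (167:167 on these images).
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(out)  # "167 167"
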
Dec 05 01:29:19 compute-0 sudo[305573]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.485587522 +0000 UTC m=+0.075578415 container create 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.447356323 +0000 UTC m=+0.037347306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:19 compute-0 systemd[1]: Started libpod-conmon-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope.
Dec 05 01:29:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
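
The kernel's `xfs ... supports timestamps until 2038` lines fire once per bind-mount as podman remounts pieces of the overlay; they are informational and simply mean the backing filesystem was created without the xfs `bigtime` feature. One way to check, assuming xfsprogs is installed and `/var/lib/containers` is the mount in question:

    import subprocess

    # "bigtime=1" in the meta-data section means the y2038 limit does not apply.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        check=True, capture_output=True, text=True,
    ).stdout
    print("bigtime=1" in info)
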
Dec 05 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.670578007 +0000 UTC m=+0.260568920 container init 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.687474389 +0000 UTC m=+0.277465272 container start 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.693119777 +0000 UTC m=+0.283110660 container attach 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 01:29:19 compute-0 sudo[305741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asrzmczsgpbkfbyuhewoscbzbeizhwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898157.943445-737-33026701045781/AnsiballZ_file.py'
Dec 05 01:29:19 compute-0 sudo[305741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:19 compute-0 python3.9[305743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:20 compute-0 sudo[305741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:20 compute-0 sudo[305908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdwsjbypwlmhverjjoxnrzfylpwbuiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898160.2459934-737-73882168356779/AnsiballZ_stat.py'
Dec 05 01:29:20 compute-0 sudo[305908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:20 compute-0 python3.9[305911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:20 compute-0 zealous_benz[305686]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:29:20 compute-0 zealous_benz[305686]: --> relative data size: 1.0
Dec 05 01:29:20 compute-0 zealous_benz[305686]: --> All data devices are unavailable
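
Here the batch started at 01:29:18 concludes without doing anything: all three LVs already carry OSDs, so as fresh data devices they are "unavailable" and ceph-volume exits. This is the idempotent re-run path, and the `lvm list` that follows at 01:29:21 confirms the existing OSDs. The same availability view can be pulled from the inventory, as a sketch:

    import json
    import subprocess

    # Devices with existing OSDs report available=false, which is what drives
    # the "All data devices are unavailable" message above.
    inv = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for dev in inv:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))
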
Dec 05 01:29:21 compute-0 sudo[305908]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:21 compute-0 systemd[1]: libpod-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Deactivated successfully.
Dec 05 01:29:21 compute-0 podman[305645]: 2025-12-05 01:29:21.035537017 +0000 UTC m=+1.625527900 container died 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:21 compute-0 systemd[1]: libpod-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Consumed 1.256s CPU time.
Dec 05 01:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556-merged.mount: Deactivated successfully.
Dec 05 01:29:21 compute-0 podman[305645]: 2025-12-05 01:29:21.122011616 +0000 UTC m=+1.712002519 container remove 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 05 01:29:21 compute-0 systemd[1]: libpod-conmon-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Deactivated successfully.
Dec 05 01:29:21 compute-0 sudo[305440]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:21 compute-0 sudo[305934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:21 compute-0 sudo[305934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:21 compute-0 sudo[305934]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:21 compute-0 sudo[305959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:29:21 compute-0 sudo[305959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:21 compute-0 sudo[305959]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:21 compute-0 sudo[305984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:21 compute-0 sudo[305984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:21 compute-0 sudo[305984]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:21 compute-0 sudo[306009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:29:21 compute-0 sudo[306009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
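
`ceph-volume lvm list --format json` maps OSD ids to their logical volumes; its output is the JSON block printed by `hardcore_swartz` below, where the LV tags carry the cluster fsid, per-OSD fsid, and drive-group affinity. A sketch of consuming that structure, with a one-entry sample shaped like the real output:

    import json

    # Trimmed sample mirroring the hardcore_swartz output below.
    raw = ('{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "tags": '
           '{"ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186"}}]}')
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
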
Dec 05 01:29:21 compute-0 ceph-mon[192914]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:22 compute-0 sudo[306140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkehrcricedfisowhzhwyamjukcsjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898160.2459934-737-73882168356779/AnsiballZ_file.py'
Dec 05 01:29:22 compute-0 sudo[306140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.114292811 +0000 UTC m=+0.050104112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.215465441 +0000 UTC m=+0.151276762 container create 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:29:22 compute-0 python3.9[306144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:22 compute-0 sudo[306140]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:22 compute-0 systemd[1]: Started libpod-conmon-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope.
Dec 05 01:29:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.358806271 +0000 UTC m=+0.294617592 container init 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.373250545 +0000 UTC m=+0.309061826 container start 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.378246675 +0000 UTC m=+0.314057986 container attach 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:29:22 compute-0 brave_jones[306161]: 167 167
Dec 05 01:29:22 compute-0 systemd[1]: libpod-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope: Deactivated successfully.
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.383404159 +0000 UTC m=+0.319215450 container died 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e699b1f4e54cea1ec2aea7e8dea0e7eca4c7430bd9b3083d7d2b437c2ab5bde-merged.mount: Deactivated successfully.
Dec 05 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.438984794 +0000 UTC m=+0.374796085 container remove 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:29:22 compute-0 systemd[1]: libpod-conmon-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope: Deactivated successfully.
Dec 05 01:29:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.683048891 +0000 UTC m=+0.083619390 container create 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.647800845 +0000 UTC m=+0.048371384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:22 compute-0 systemd[1]: Started libpod-conmon-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope.
Dec 05 01:29:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.863620872 +0000 UTC m=+0.264191351 container init 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.88502299 +0000 UTC m=+0.285593449 container start 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.890284828 +0000 UTC m=+0.290855287 container attach 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:29:23 compute-0 sudo[306355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcfdmcpzyqdxxvomyylnukhcssgxloae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898162.5066698-737-254391853177472/AnsiballZ_stat.py'
Dec 05 01:29:23 compute-0 sudo[306355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:23 compute-0 python3.9[306357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:23 compute-0 sudo[306355]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:23 compute-0 ceph-mon[192914]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]: {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     "0": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "devices": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "/dev/loop3"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             ],
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_name": "ceph_lv0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_size": "21470642176",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "name": "ceph_lv0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "tags": {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_name": "ceph",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.crush_device_class": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.encrypted": "0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_id": "0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.vdo": "0"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             },
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "vg_name": "ceph_vg0"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         }
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     ],
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     "1": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "devices": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "/dev/loop4"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             ],
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_name": "ceph_lv1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_size": "21470642176",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "name": "ceph_lv1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "tags": {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_name": "ceph",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.crush_device_class": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.encrypted": "0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_id": "1",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.vdo": "0"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             },
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "vg_name": "ceph_vg1"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         }
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     ],
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     "2": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "devices": [
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "/dev/loop5"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             ],
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_name": "ceph_lv2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_size": "21470642176",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "name": "ceph_lv2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "tags": {
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.cluster_name": "ceph",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.crush_device_class": "",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.encrypted": "0",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osd_id": "2",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:                 "ceph.vdo": "0"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             },
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "type": "block",
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:             "vg_name": "ceph_vg2"
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:         }
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]:     ]
Dec 05 01:29:23 compute-0 hardcore_swartz[306300]: }
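The JSON block printed by the hardcore_swartz container above has the shape of `ceph-volume lvm list --format json`: the top-level keys are OSD ids, each mapping to the logical volume backing that OSD, with the ceph.* LV tags given both as a flat lv_tags string and as a parsed tags object. A minimal sketch of consuming it (the lvm_list.json filename is an assumption, standing in for the captured output):

    # Sketch: map each OSD id to its backing LV and cluster fsid, using the
    # parsed "tags" object from `ceph-volume lvm list --format json` output.
    import json

    with open("lvm_list.json") as f:   # assumed capture of the JSON above
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices={lv['devices']}, "
                  f"fsid={tags['ceph.cluster_fsid']})")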
Dec 05 01:29:23 compute-0 systemd[1]: libpod-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope: Deactivated successfully.
Dec 05 01:29:23 compute-0 podman[306252]: 2025-12-05 01:29:23.729364579 +0000 UTC m=+1.129935068 container died 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7-merged.mount: Deactivated successfully.
Dec 05 01:29:23 compute-0 podman[306252]: 2025-12-05 01:29:23.841290269 +0000 UTC m=+1.241860848 container remove 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:29:23 compute-0 systemd[1]: libpod-conmon-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope: Deactivated successfully.
Dec 05 01:29:23 compute-0 sudo[306009]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:23 compute-0 sudo[306451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvzphvfxejrydspqqnuqowdtwgwrcmlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898162.5066698-737-254391853177472/AnsiballZ_file.py'
Dec 05 01:29:23 compute-0 sudo[306451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:24 compute-0 sudo[306444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:24 compute-0 sudo[306444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:24 compute-0 sudo[306444]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:24 compute-0 sudo[306475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:29:24 compute-0 sudo[306475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:24 compute-0 sudo[306475]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:24 compute-0 python3.9[306466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:24 compute-0 sudo[306451]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:24 compute-0 sudo[306500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:24 compute-0 sudo[306500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:24 compute-0 sudo[306500]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:24 compute-0 sudo[306538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:29:24 compute-0 sudo[306538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:24 compute-0 podman[306715]: 2025-12-05 01:29:24.907486923 +0000 UTC m=+0.066221563 container create 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:29:24 compute-0 systemd[1]: Started libpod-conmon-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope.
Dec 05 01:29:24 compute-0 sudo[306752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojeeablsrmpalupabencgjgvciubspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898164.3954232-737-78695250973184/AnsiballZ_stat.py'
Dec 05 01:29:24 compute-0 podman[306715]: 2025-12-05 01:29:24.881417134 +0000 UTC m=+0.040151854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:24 compute-0 sudo[306752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.040522083 +0000 UTC m=+0.199256713 container init 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.050102381 +0000 UTC m=+0.208837051 container start 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.056718286 +0000 UTC m=+0.215452946 container attach 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:29:25 compute-0 exciting_goldberg[306756]: 167 167
Dec 05 01:29:25 compute-0 systemd[1]: libpod-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope: Deactivated successfully.
Dec 05 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.06434332 +0000 UTC m=+0.223077990 container died 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-74eb120e3352d268cb64039b0b440cf5ffdc35c0e69618f505735dba3e250a7e-merged.mount: Deactivated successfully.
Dec 05 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.138791202 +0000 UTC m=+0.297525872 container remove 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:29:25 compute-0 systemd[1]: libpod-conmon-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope: Deactivated successfully.
Dec 05 01:29:25 compute-0 python3.9[306758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:25 compute-0 sudo[306752]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.398787175 +0000 UTC m=+0.063254581 container create 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.372827539 +0000 UTC m=+0.037294965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:29:25 compute-0 systemd[1]: Started libpod-conmon-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope.
Dec 05 01:29:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
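The four xfs warnings above are the kernel noting that these overlay remounts sit on a filesystem with 32-bit inode timestamps (XFS without the bigtime feature): 0x7fffffff seconds after the epoch is the classic 32-bit time_t limit. A quick check of the printed limit:

    # The 0x7fffffff limit in the xfs warnings above is the 32-bit time_t
    # rollover point.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00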
Dec 05 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.558640766 +0000 UTC m=+0.223108192 container init 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.576837075 +0000 UTC m=+0.241304481 container start 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.582419171 +0000 UTC m=+0.246886587 container attach 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:29:25 compute-0 sudo[306876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otojrezycbtvvetevmzdlykybmhibemo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898164.3954232-737-78695250973184/AnsiballZ_file.py'
Dec 05 01:29:25 compute-0 sudo[306876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:25 compute-0 ceph-mon[192914]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:25 compute-0 python3.9[306878]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:25 compute-0 sudo[306876]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
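The pg_autoscaler pass above recomputes a raw PG target per pool, and the printed numbers are internally consistent: pg target = (fraction of space used) x bias x 300, then quantized to a power of two subject to pool minimums. The x300 multiplier plausibly reflects the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs (osd.0-2 on 3 x 20 GiB LVs, 60 GiB total). A check against the logged values:

    # Reproducing the raw 'pg target' values printed by pg_autoscaler above.
    # Assumption: the x300 multiplier is mon_target_pg_per_osd (default 100)
    # times the 3 OSDs in this cluster.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (used_ratio, bias) in pools.items():
        print(name, used_ratio * bias * 300)
    # Matches the logged targets up to float rounding, e.g.
    # .mgr -> 0.0021557249951162337, before quantization.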
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:26 compute-0 charming_wilbur[306834]: {
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_id": 0,
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "type": "bluestore"
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     },
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_id": 1,
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "type": "bluestore"
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     },
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_id": 2,
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:         "type": "bluestore"
Dec 05 01:29:26 compute-0 charming_wilbur[306834]:     }
Dec 05 01:29:26 compute-0 charming_wilbur[306834]: }
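This second JSON block is the output of the `ceph-volume ... raw list --format json` invocation logged a couple of seconds earlier (sudo[306538]): here the top-level keys are OSD fsids rather than OSD ids, and each entry names the bluestore device through its LVM mapper path. The two listings can be cross-checked against each other; a sketch, assuming the outputs were saved as raw_list.json and lvm_list.json:

    # Sketch: verify `raw list` and `lvm list` agree on osd_id <-> osd_fsid.
    import json

    with open("raw_list.json") as f:   # assumed capture of the JSON above
        raw = json.load(f)
    with open("lvm_list.json") as f:   # assumed capture of the earlier JSON
        lvm = json.load(f)

    # osd_fsid -> osd_id as recorded in the LV tags (keys are strings here).
    lvm_pairs = {lv["tags"]["ceph.osd_fsid"]: osd_id
                 for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, info in raw.items():
        assert lvm_pairs[osd_uuid] == str(info["osd_id"]), osd_uuid
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}")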
Dec 05 01:29:26 compute-0 sudo[307071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmmommebrfymtuaeqfomrojnbychragz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898166.0997107-737-169320879671379/AnsiballZ_stat.py'
Dec 05 01:29:26 compute-0 sudo[307071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:26 compute-0 podman[307019]: 2025-12-05 01:29:26.71061786 +0000 UTC m=+0.122413086 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:29:26 compute-0 systemd[1]: libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Deactivated successfully.
Dec 05 01:29:26 compute-0 systemd[1]: libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Consumed 1.138s CPU time.
Dec 05 01:29:26 compute-0 conmon[306834]: conmon 984504081e0cdccde018 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope/container/memory.events
Dec 05 01:29:26 compute-0 podman[306782]: 2025-12-05 01:29:26.719174709 +0000 UTC m=+1.383642105 container died 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0-merged.mount: Deactivated successfully.
Dec 05 01:29:26 compute-0 podman[306782]: 2025-12-05 01:29:26.808073446 +0000 UTC m=+1.472540872 container remove 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:29:26 compute-0 systemd[1]: libpod-conmon-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Deactivated successfully.
Dec 05 01:29:26 compute-0 sudo[306538]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:29:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:29:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 674b5970-f4a2-4c16-a196-f2021c9226c9 does not exist
Dec 05 01:29:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 536a2f30-69fe-4883-8789-629c0be5e6b8 does not exist
Dec 05 01:29:26 compute-0 python3.9[307076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:26 compute-0 sudo[307071]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:26 compute-0 sudo[307088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:29:26 compute-0 sudo[307088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:26 compute-0 sudo[307088]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:27 compute-0 sudo[307115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:29:27 compute-0 sudo[307115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:29:27 compute-0 sudo[307115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:27 compute-0 sudo[307213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelltwgockfqagyhcnilprjknuypufhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898166.0997107-737-169320879671379/AnsiballZ_file.py'
Dec 05 01:29:27 compute-0 sudo[307213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:27 compute-0 python3.9[307215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:27 compute-0 sudo[307213]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:27 compute-0 ceph-mon[192914]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:29:28 compute-0 sudo[307365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnpvrveydukkceqvbmeyydblzisotoga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898167.900216-737-245529687263897/AnsiballZ_stat.py'
Dec 05 01:29:28 compute-0 sudo[307365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:28 compute-0 python3.9[307367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:28 compute-0 sudo[307365]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:29 compute-0 sudo[307443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzfuxpioeroothynposuijpyfjpugjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898167.900216-737-245529687263897/AnsiballZ_file.py'
Dec 05 01:29:29 compute-0 sudo[307443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:29 compute-0 python3.9[307445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:29 compute-0 sudo[307443]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:29 compute-0 ceph-mon[192914]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:29 compute-0 podman[158197]: time="2025-12-05T01:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7273 "" "Go-http-client/1.1"
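The two GET lines above are the podman system service answering libpod REST calls over its Unix socket (the Go-http-client is prometheus-podman-exporter polling it, per the podman_exporter config elsewhere in this log). The same endpoint can be queried directly; a sketch using only the Python standard library, assuming the /run/podman/podman.sock path that the exporter container mounts:

    # Sketch: list containers via the libpod REST API over the podman socket,
    # mirroring the GET /v4.9.3/libpod/containers/json?all=true call above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # Host header only; not used to dial
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        # Field names as returned by libpod's container list endpoint.
        print(c["Id"][:12], c.get("Names"), c.get("State"))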
Dec 05 01:29:30 compute-0 sudo[307595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmmujxwtnxnniftujuxidmuyftfzjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898169.5659864-737-260700577913781/AnsiballZ_stat.py'
Dec 05 01:29:30 compute-0 sudo[307595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:30 compute-0 python3.9[307597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:30 compute-0 sudo[307595]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:30 compute-0 sudo[307673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvvvnvqdbhxutglmcjnqxqlopooxpib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898169.5659864-737-260700577913781/AnsiballZ_file.py'
Dec 05 01:29:30 compute-0 sudo[307673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:30 compute-0 ceph-mon[192914]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:30 compute-0 python3.9[307675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:30 compute-0 sudo[307673]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:29:31 compute-0 sudo[307825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdstmejzwcvyqfverncvtxbtzvbehcys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898171.537645-737-70318026430355/AnsiballZ_stat.py'
Dec 05 01:29:31 compute-0 sudo[307825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:32 compute-0 python3.9[307827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:32 compute-0 sudo[307825]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:32 compute-0 sudo[307903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auoxdiiapdydimjfhajpnrrseancelpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898171.537645-737-70318026430355/AnsiballZ_file.py'
Dec 05 01:29:32 compute-0 sudo[307903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:32 compute-0 python3.9[307905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:32 compute-0 sudo[307903]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:33 compute-0 ceph-mon[192914]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:34 compute-0 sudo[308055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcmewaomuigophanxvsevitzpaoiqrle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898173.7433774-737-116874504799768/AnsiballZ_stat.py'
Dec 05 01:29:34 compute-0 sudo[308055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:34 compute-0 python3.9[308057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:34 compute-0 sudo[308055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:34 compute-0 sudo[308189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxicfmuzpvsppjxsnipbtnccqulobjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898173.7433774-737-116874504799768/AnsiballZ_file.py'
Dec 05 01:29:34 compute-0 sudo[308189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:34 compute-0 podman[308109]: 2025-12-05 01:29:34.919267062 +0000 UTC m=+0.126533681 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:29:34 compute-0 podman[308108]: 2025-12-05 01:29:34.919409976 +0000 UTC m=+0.124531715 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:29:34 compute-0 podman[308107]: 2025-12-05 01:29:34.934743914 +0000 UTC m=+0.151380895 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 05 01:29:34 compute-0 podman[308115]: 2025-12-05 01:29:34.97320302 +0000 UTC m=+0.165325505 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
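The four health_status events above are emitted when podman executes each container's configured healthcheck (the 'healthcheck' entry inside config_data, e.g. '/openstack/healthcheck ipmi'). A minimal way to trigger the same check by hand and read the verdict, assuming podman 4.x on the host; the container name is taken from the log, the wrapper itself is illustrative:

    import subprocess

    # Run the container's defined healthcheck once.
    # podman exits 0 for healthy, non-zero for unhealthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")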
Dec 05 01:29:35 compute-0 python3.9[308211]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:35 compute-0 sudo[308189]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:35 compute-0 ceph-mon[192914]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:35 compute-0 sudo[308367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjoflmhnitjswpfmicqxqoxtdpnxndt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898175.338189-737-69429579683886/AnsiballZ_stat.py'
Dec 05 01:29:35 compute-0 sudo[308367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:36 compute-0 python3.9[308369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:36 compute-0 sudo[308367]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:36 compute-0 sudo[308445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhjbmzibkcoazltvkyhnqxmiaiszywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898175.338189-737-69429579683886/AnsiballZ_file.py'
Dec 05 01:29:36 compute-0 sudo[308445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:36 compute-0 python3.9[308447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:36 compute-0 sudo[308445]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:37 compute-0 sudo[308597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtzxeeinuyhyupzybaeyaqytmqevnong ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898176.9311104-737-138346345154455/AnsiballZ_stat.py'
Dec 05 01:29:37 compute-0 sudo[308597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:37 compute-0 python3.9[308599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:37 compute-0 ceph-mon[192914]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:37 compute-0 sudo[308597]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:38 compute-0 sudo[308675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyznyvswcqwoxgskfjbtssgukfkuqwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898176.9311104-737-138346345154455/AnsiballZ_file.py'
Dec 05 01:29:38 compute-0 sudo[308675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:38 compute-0 python3.9[308677]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:38 compute-0 sudo[308675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:39 compute-0 sudo[308827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbuzoldxqlcytplxrrhsnpbidelhjgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898178.649066-737-210262711415103/AnsiballZ_stat.py'
Dec 05 01:29:39 compute-0 sudo[308827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:39 compute-0 python3.9[308829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:29:39 compute-0 sudo[308827]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:39 compute-0 ceph-mon[192914]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:39 compute-0 sudo[308905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnwcfcaukduebjpfqaavmobijsifruaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898178.649066-737-210262711415103/AnsiballZ_file.py'
Dec 05 01:29:39 compute-0 sudo[308905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:40 compute-0 python3.9[308907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:40 compute-0 sudo[308905]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:41 compute-0 python3.9[309057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
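The command above (`ls -lRZ /run/libvirt | grep -E ':container_\S+_t'`) scans /run/libvirt for anything still carrying a container_*_t SELinux label. A sketch of the same check in Python; the path and label pattern come from the logged command, everything else is an assumption:

    import os
    import re

    LABEL = re.compile(r':container_\S+_t')

    def container_labeled(root="/run/libvirt"):
        hits = []
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                try:
                    # Read the SELinux context the way `ls -Z` displays it.
                    ctx = os.getxattr(path, "security.selinux").decode().rstrip("\x00")
                except OSError:
                    continue  # unlabeled/unreadable; `ls -Z` would show '?'
                if LABEL.search(ctx):
                    hits.append((path, ctx))
        return hits

    if __name__ == "__main__":
        for path, ctx in container_labeled():
            print(path, ctx)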
Dec 05 01:29:41 compute-0 podman[309061]: 2025-12-05 01:29:41.190447579 +0000 UTC m=+0.108224799 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 05 01:29:41 compute-0 ceph-mon[192914]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:42 compute-0 ceph-mon[192914]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:43 compute-0 sudo[309228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkmkqbjqgapvwoscsvredykwhlsznqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898181.3847811-901-24180208991437/AnsiballZ_seboolean.py'
Dec 05 01:29:43 compute-0 sudo[309228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:43 compute-0 python3.9[309230]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 05 01:29:43 compute-0 sudo[309228]: pam_unix(sudo:session): session closed for user root
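The ansible.posix.seboolean call logged above (name=os_enable_vtpm, state=True, persistent=True) boils down to `setsebool -P os_enable_vtpm on`. A minimal equivalent; the subprocess wrapper is illustrative:

    import subprocess

    def set_sebool(name: str, on: bool, persistent: bool = True) -> None:
        cmd = ["setsebool"]
        if persistent:
            cmd.append("-P")  # persist the change in policy, not just runtime
        cmd += [name, "on" if on else "off"]
        subprocess.run(cmd, check=True)

    set_sebool("os_enable_vtpm", True)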
Dec 05 01:29:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:45 compute-0 sudo[309380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgieryekbjtpydduaurxganoitbzbbew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898184.1912014-909-229081042086975/AnsiballZ_copy.py'
Dec 05 01:29:45 compute-0 sudo[309380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:45 compute-0 python3.9[309382]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:45 compute-0 sudo[309380]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:45 compute-0 ceph-mon[192914]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:29:46 compute-0 sudo[309532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxxywdonppktruhwonxeheswfsagakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898185.7859406-909-133996633302270/AnsiballZ_copy.py'
Dec 05 01:29:46 compute-0 sudo[309532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:46 compute-0 python3.9[309534]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:46 compute-0 sudo[309532]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:47 compute-0 sudo[309698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekibwkgezrsacciqlbnrwuphhfgplrmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898186.7285917-909-156114082873451/AnsiballZ_copy.py'
Dec 05 01:29:47 compute-0 sudo[309698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:47 compute-0 podman[309658]: 2025-12-05 01:29:47.201967232 +0000 UTC m=+0.119644228 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec 05 01:29:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:47 compute-0 python3.9[309704]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:47 compute-0 sudo[309698]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:47 compute-0 ceph-mon[192914]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:48 compute-0 sudo[309854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgykioazuiqtlrpjpsrpmmdzpufoetg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898187.6441424-909-237907367607308/AnsiballZ_copy.py'
Dec 05 01:29:48 compute-0 sudo[309854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:48 compute-0 python3.9[309856]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:48 compute-0 sudo[309854]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:48 compute-0 podman[309905]: 2025-12-05 01:29:48.645955523 +0000 UTC m=+0.068153147 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:29:48 compute-0 ceph-mon[192914]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:49 compute-0 sudo[310027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioobuprwvfkgdvwhigjmjmniedcovqvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898188.533511-909-142454945725834/AnsiballZ_copy.py'
Dec 05 01:29:49 compute-0 sudo[310027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:49 compute-0 python3.9[310029]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:49 compute-0 sudo[310027]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:50 compute-0 sudo[310180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxtfgrsupftrnzqxjvalysqhcbjxnren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898189.5512826-945-109844856593419/AnsiballZ_copy.py'
Dec 05 01:29:50 compute-0 sudo[310180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:50 compute-0 python3.9[310182]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:50 compute-0 sudo[310180]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:51 compute-0 sudo[310332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxofipeycdedwudgsrmqrkluolrmzdnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898190.5446181-945-157429801446239/AnsiballZ_copy.py'
Dec 05 01:29:51 compute-0 sudo[310332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:51 compute-0 python3.9[310334]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:51 compute-0 sudo[310332]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:51 compute-0 ceph-mon[192914]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:52 compute-0 sudo[310484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdhdhnbxyfkwqbvvxbmpjsgnzhdxyofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898191.559675-945-191340275244043/AnsiballZ_copy.py'
Dec 05 01:29:52 compute-0 sudo[310484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:52 compute-0 python3.9[310486]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:52 compute-0 sudo[310484]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:53 compute-0 sudo[310636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrfptfhwuwluzmqyuhwqzbixandanlzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898192.6430254-945-200622901834361/AnsiballZ_copy.py'
Dec 05 01:29:53 compute-0 sudo[310636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:53 compute-0 python3.9[310638]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:53 compute-0 sudo[310636]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:53 compute-0 ceph-mon[192914]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:54 compute-0 sudo[310788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzdlgdbhthbhalvtoxctwrpkuighurn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898193.6802928-945-60757123195888/AnsiballZ_copy.py'
Dec 05 01:29:54 compute-0 sudo[310788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:54 compute-0 python3.9[310790]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:54 compute-0 sudo[310788]: pam_unix(sudo:session): session closed for user root
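The run of copy tasks above fans a single issued certificate, key, and CA out to the locations libvirt and QEMU expect. A condensed sketch of the same mapping; paths, modes, and groups are copied from the logged module parameters, the Python itself is illustrative:

    import grp
    import os
    import shutil

    SRC = "/var/lib/openstack/certs/libvirt/default"
    COPIES = [
        # (source,   destination,                              mode,  group)
        ("tls.crt", "/etc/pki/libvirt/servercert.pem",         0o644, "root"),
        ("tls.key", "/etc/pki/libvirt/private/serverkey.pem",  0o600, "root"),
        ("tls.crt", "/etc/pki/libvirt/clientcert.pem",         0o644, "root"),
        ("tls.key", "/etc/pki/libvirt/private/clientkey.pem",  0o644, "root"),
        ("ca.crt",  "/etc/pki/CA/cacert.pem",                  0o644, "root"),
        ("tls.crt", "/etc/pki/qemu/server-cert.pem",           0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/server-key.pem",            0o640, "qemu"),
        ("tls.crt", "/etc/pki/qemu/client-cert.pem",           0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/client-key.pem",            0o640, "qemu"),
        ("ca.crt",  "/etc/pki/qemu/ca-cert.pem",               0o640, "qemu"),
    ]

    for src, dest, mode, group in COPIES:
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copyfile(os.path.join(SRC, src), dest)
        os.chmod(dest, mode)
        os.chown(dest, 0, grp.getgrnam(group).gr_gid)  # owner=root throughout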
Dec 05 01:29:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:54 compute-0 ceph-mon[192914]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:55 compute-0 sudo[310940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximxqkjhhkhoposhaehkgywwlpinnlip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898195.2306566-982-144530578435317/AnsiballZ_file.py'
Dec 05 01:29:55 compute-0 sudo[310940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:55 compute-0 python3.9[310942]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:29:55 compute-0 sudo[310940]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.156 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:29:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:29:57 compute-0 podman[311066]: 2025-12-05 01:29:57.482535876 +0000 UTC m=+0.091943563 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:29:57 compute-0 sudo[311106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgqxbjxverwnwhymmqiofbpldtxonte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898196.2537549-990-165315226501539/AnsiballZ_find.py'
Dec 05 01:29:57 compute-0 sudo[311106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:57 compute-0 python3.9[311110]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:29:57 compute-0 ceph-mon[192914]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:57 compute-0 sudo[311106]: pam_unix(sudo:session): session closed for user root
Dec 05 01:29:58 compute-0 sudo[311260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhppetrxwdqqwmtscnykwvixixnhmpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898197.9400747-998-113844548752587/AnsiballZ_command.py'
Dec 05 01:29:58 compute-0 sudo[311260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:29:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:58 compute-0 python3.9[311262]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:29:58 compute-0 sudo[311260]: pam_unix(sudo:session): session closed for user root
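The pipeline above prints the cluster name and then pulls the fsid out of the deployed ceph.conf (`awk -F '=' '/fsid/ {print $2}' ... | xargs` strips the whitespace). A configparser-based equivalent; the path comes from the log, the [global] section is an assumption based on standard ceph.conf layout:

    import configparser

    def ceph_fsid(path="/var/lib/openstack/config/ceph/ceph.conf") -> str:
        cfg = configparser.ConfigParser()
        cfg.read(path)
        # fsid conventionally lives in [global] in a ceph.conf
        return cfg.get("global", "fsid").strip()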
Dec 05 01:29:59 compute-0 ceph-mon[192914]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:29:59 compute-0 python3.9[311416]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:29:59 compute-0 podman[158197]: time="2025-12-05T01:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7269 "" "Go-http-client/1.1"
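The two GET lines above are the podman_exporter scraping the libpod REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). The same endpoint can be queried by hand; URL path and socket are copied from the log, the dummy host `d` is curl's convention for unix-socket requests:

    import subprocess

    # List all containers the way the exporter does, via the podman socket.
    subprocess.run([
        "curl", "--unix-socket", "/run/podman/podman.sock",
        "http://d/v4.9.3/libpod/containers/json?all=true&external=false",
    ])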
Dec 05 01:30:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:00 compute-0 python3.9[311566]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:30:01 compute-0 ceph-mon[192914]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:01 compute-0 python3.9[311687]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898200.1960008-1017-134455890810254/.source.xml follow=False _original_basename=secret.xml.j2 checksum=fdb3975e1f666f2811f2fcfa5c297c7e31466e55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:02 compute-0 sudo[311837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ronfckhadwtivtjuplwnuatrklayyxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898201.9902098-1032-276750451202224/AnsiballZ_command.py'
Dec 05 01:30:02 compute-0 sudo[311837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:02 compute-0 python3.9[311839]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine cbd280d3-cbd8-528b-ace6-2b3a887cdcee
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:30:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:02 compute-0 polkitd[43575]: Registered Authentication Agent for unix-process:311841:428978 (system bus name :1.3906 [pkttyagent --process 311841 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 01:30:02 compute-0 polkitd[43575]: Unregistered Authentication Agent for unix-process:311841:428978 (system bus name :1.3906, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 01:30:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 01:30:02 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 01:30:02 compute-0 polkitd[43575]: Registered Authentication Agent for unix-process:311840:428976 (system bus name :1.3908 [pkttyagent --process 311840 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 01:30:02 compute-0 polkitd[43575]: Unregistered Authentication Agent for unix-process:311840:428976 (system bus name :1.3908, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 01:30:02 compute-0 sudo[311837]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:03 compute-0 ceph-mon[192914]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:03 compute-0 python3.9[312020]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
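Lines above trace the libvirt secret workflow: render /tmp/secret.xml (mode 0600), drop any stale definition, define the new secret, then remove the temp file. A reconstruction of the whole sequence; the UUID and commands are taken from the log, while the XML body is an assumption (a typical ceph-usage secret), since the rendered template itself is never logged:

    import pathlib
    import subprocess

    UUID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    # Hypothetical body: the conventional libvirt secret for a cephx client.
    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{UUID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    """

    path = pathlib.Path("/tmp/secret.xml")
    path.write_text(SECRET_XML)
    path.chmod(0o600)  # mode=0600, as in the logged copy task
    subprocess.run(["virsh", "secret-undefine", UUID], check=False)  # tolerate "not found"
    subprocess.run(["virsh", "secret-define", "--file", str(path)], check=True)
    path.unlink()  # matches the follow-up state=absent task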
Dec 05 01:30:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:04 compute-0 sudo[312170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umumlxgyiavaictymnrpwdasgtsbnfrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898204.1393845-1048-25541190338798/AnsiballZ_command.py'
Dec 05 01:30:04 compute-0 sudo[312170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:04 compute-0 sudo[312170]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:05 compute-0 podman[312251]: 2025-12-05 01:30:05.707000881 +0000 UTC m=+0.104390141 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:30:05 compute-0 podman[312253]: 2025-12-05 01:30:05.729070418 +0000 UTC m=+0.114236497 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:30:05 compute-0 podman[312250]: 2025-12-05 01:30:05.734381006 +0000 UTC m=+0.145771158 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:30:05 compute-0 ceph-mon[192914]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:05 compute-0 podman[312258]: 2025-12-05 01:30:05.771708991 +0000 UTC m=+0.152730964 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:30:06 compute-0 sudo[312405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnvgrpfdxxpvbabjkxnpahpxrtugqgx ; FSID=cbd280d3-cbd8-528b-ace6-2b3a887cdcee KEY=AQBBMTJpAAAAABAAQWv2lkQhfZ74+C7m+rCDZA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898205.3569248-1056-237180182167385/AnsiballZ_command.py'
Dec 05 01:30:06 compute-0 sudo[312405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:06 compute-0 ceph-mon[192914]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:06 compute-0 polkitd[43575]: Registered Authentication Agent for unix-process:312408:429389 (system bus name :1.3911 [pkttyagent --process 312408 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 05 01:30:06 compute-0 polkitd[43575]: Unregistered Authentication Agent for unix-process:312408:429389 (system bus name :1.3911, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 05 01:30:06 compute-0 sudo[312405]: pam_unix(sudo:session): session closed for user root
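The FSID/KEY environment passed to the sudo command above suggests the usual follow-up to secret-define, attaching the base64 cephx key to the secret just created; the actual AnsiballZ_command payload is not visible in the log, so this equivalent is hypothetical:

    import os
    import subprocess

    # Assumes FSID and KEY are exported, as in the logged sudo invocation.
    subprocess.run(
        ["virsh", "secret-set-value",
         "--secret", os.environ["FSID"],
         "--base64", os.environ["KEY"]],
        check=True,
    )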
Dec 05 01:30:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:07 compute-0 sudo[312563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tplwypaiqgwborwfourkuxqnszirvuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898207.1838586-1064-205837291630636/AnsiballZ_copy.py'
Dec 05 01:30:07 compute-0 sudo[312563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:07 compute-0 python3.9[312565]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:07 compute-0 sudo[312563]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:09 compute-0 sudo[312715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcmuoyseumakkfthjmaildockixpcuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898208.8449965-1072-250046812306736/AnsiballZ_stat.py'
Dec 05 01:30:09 compute-0 sudo[312715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:09 compute-0 python3.9[312717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:09 compute-0 sudo[312715]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:09 compute-0 ceph-mon[192914]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:10 compute-0 sudo[312793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfvqhqapskvpoqxvsjdopphtodsxgrba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898208.8449965-1072-250046812306736/AnsiballZ_file.py'
Dec 05 01:30:10 compute-0 sudo[312793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:10 compute-0 python3.9[312795]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:10 compute-0 sudo[312793]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:11 compute-0 sudo[312961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnxbzcqppofqrykbofydixbtogxothh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898210.8066967-1085-194947325443118/AnsiballZ_file.py'
Dec 05 01:30:11 compute-0 podman[312919]: 2025-12-05 01:30:11.422646409 +0000 UTC m=+0.123015762 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 05 01:30:11 compute-0 sudo[312961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:11 compute-0 python3.9[312965]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:11 compute-0 sudo[312961]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:11 compute-0 ceph-mon[192914]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:12 compute-0 sudo[313115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnngecbhmaakmgrpfskoyknrotyiobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898211.9399934-1093-208190596469167/AnsiballZ_stat.py'
Dec 05 01:30:12 compute-0 sudo[313115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:12 compute-0 python3.9[313117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:12 compute-0 sudo[313115]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:12 compute-0 sudo[313193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otfynjidhhzyvilwsyxtxxioagsizwqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898211.9399934-1093-208190596469167/AnsiballZ_file.py'
Dec 05 01:30:13 compute-0 sudo[313193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:13 compute-0 python3.9[313195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:13 compute-0 sudo[313193]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:13 compute-0 ceph-mon[192914]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:13 compute-0 sudo[313345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blknnvviqvdebpbmgsssikpsumfgxkmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898213.4721138-1105-209229018066542/AnsiballZ_stat.py'
Dec 05 01:30:13 compute-0 sudo[313345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:14 compute-0 python3.9[313347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:14 compute-0 sudo[313345]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:14 compute-0 sudo[313423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbitizrisurezikkofncpmmoradlgfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898213.4721138-1105-209229018066542/AnsiballZ_file.py'
Dec 05 01:30:14 compute-0 sudo[313423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:14 compute-0 python3.9[313425]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.dlok90ob recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:14 compute-0 sudo[313423]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:15 compute-0 sudo[313575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyaibwqethcvpromqxezurghuquujnhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898215.1988099-1117-167663480712064/AnsiballZ_stat.py'
Dec 05 01:30:15 compute-0 sudo[313575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:15 compute-0 ceph-mon[192914]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:15 compute-0 python3.9[313577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:15 compute-0 sudo[313575]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:30:16
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images']
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:16 compute-0 sudo[313653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-halkvbtvguepclibdipnvgiugsgcnmcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898215.1988099-1117-167663480712064/AnsiballZ_file.py'
Dec 05 01:30:16 compute-0 sudo[313653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:30:16 compute-0 python3.9[313655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:16 compute-0 sudo[313653]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:16 compute-0 ceph-mon[192914]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:17 compute-0 podman[313779]: 2025-12-05 01:30:17.696120029 +0000 UTC m=+0.113492795 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:30:18 compute-0 sudo[313824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnslgqiezmcvkhwxtessjudaifulhble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898216.9627578-1130-218260100709478/AnsiballZ_command.py'
Dec 05 01:30:18 compute-0 sudo[313824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:18 compute-0 python3.9[313826]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
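
`nft -j list ruleset`, which the task above runs, dumps the entire ruleset as JSON; that is what lets the role compare desired and live state programmatically instead of scraping text output. Consuming the same output (requires root and the nft binary):

```python
import json
import subprocess

out = subprocess.run(
    ["nft", "-j", "list", "ruleset"],
    check=True, capture_output=True, text=True,
).stdout

# Top level is {"nftables": [...]}; each entry wraps one object:
# {"metainfo": ...}, {"table": ...}, {"chain": ...}, {"rule": ...}, ...
for item in json.loads(out)["nftables"]:
    if "table" in item:
        print(item["table"]["family"], item["table"]["name"])
```
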
Dec 05 01:30:18 compute-0 sudo[313824]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:19 compute-0 sudo[313994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpdhgqurdwzgkniwphyvkbhedxgqgbe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898218.8263862-1138-182088904617602/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:30:19 compute-0 podman[313951]: 2025-12-05 01:30:19.557523686 +0000 UTC m=+0.131862159 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:30:19 compute-0 sudo[313994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:19 compute-0 ceph-mon[192914]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:19 compute-0 python3[314003]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
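
No module source ships with this log, but given src=/var/lib/edpm-config/firewall and the YAML files written just before, edpm_nftables_from_files plausibly loads every rule file in that directory and concatenates the entries. A hypothetical reconstruction (the file glob, schema, and PyYAML dependency are all assumptions; the real module may differ):

```python
from pathlib import Path

import yaml  # PyYAML -- assumed; the real module may bundle its own loader

def load_rules(src="/var/lib/edpm-config/firewall"):
    """Concatenate rule entries from every *.yaml file under src, in name order."""
    rules = []
    for path in sorted(Path(src).glob("*.yaml")):
        data = yaml.safe_load(path.read_text()) or []
        rules.extend(data)
    return rules
```
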
Dec 05 01:30:19 compute-0 sudo[313994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:21 compute-0 sudo[314153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxwbpfbvysuidixdfeebtjuxaydjfsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898220.5081427-1146-84175262939368/AnsiballZ_stat.py'
Dec 05 01:30:21 compute-0 sudo[314153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:21 compute-0 python3.9[314155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:21 compute-0 sudo[314153]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:21 compute-0 ceph-mon[192914]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:21 compute-0 sudo[314231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjfzpbqqnijzrpbklbrffxfzfrlyiuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898220.5081427-1146-84175262939368/AnsiballZ_file.py'
Dec 05 01:30:21 compute-0 sudo[314231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:22 compute-0 python3.9[314233]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:22 compute-0 sudo[314231]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:22 compute-0 sudo[314383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffnmjwjkrhmuyetldthkyhtacrrklrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898222.3344927-1158-279588010463439/AnsiballZ_stat.py'
Dec 05 01:30:22 compute-0 sudo[314383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:23 compute-0 python3.9[314385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:23 compute-0 sudo[314383]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:23 compute-0 sudo[314461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scegmcjyzsilglsqxmtcaecrfyxdcoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898222.3344927-1158-279588010463439/AnsiballZ_file.py'
Dec 05 01:30:23 compute-0 sudo[314461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:23 compute-0 ceph-mon[192914]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:23 compute-0 python3.9[314463]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:23 compute-0 sudo[314461]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:24 compute-0 sudo[314613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cukfmpwzucxczshizkvtugggtxqfpxlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898224.2188978-1170-265374349581787/AnsiballZ_stat.py'
Dec 05 01:30:24 compute-0 sudo[314613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:24 compute-0 python3.9[314615]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:25 compute-0 sudo[314613]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:25 compute-0 sudo[314691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrbenyprwxuxpegxmynescmswxpiwgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898224.2188978-1170-265374349581787/AnsiballZ_file.py'
Dec 05 01:30:25 compute-0 sudo[314691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:25 compute-0 python3.9[314693]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:25 compute-0 sudo[314691]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:25 compute-0 ceph-mon[192914]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
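
The autoscaler figures above are internally consistent: each "pg target" is the pool's usage fraction times its bias times roughly 300, which matches the default mon_target_pg_per_osd of 100 on this 3-OSD cluster (the 300 multiplier is inferred, not logged). The raw target is then quantized, which is why a target of 0.0006 still shows as 16 or 32 PGs. A worked check against two of the lines:

```python
# Inferred: mon_target_pg_per_osd (default 100) x 3 OSDs in this cluster.
TARGET_PGS = 100 * 3

def pg_target(usage_fraction, bias):
    return usage_fraction * bias * TARGET_PGS

# Pool '.mgr': using 7.185749983720779e-06 of space, bias 1.0
print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337, as logged
# Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07 of space, bias 4.0
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635, as logged
```
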
Dec 05 01:30:26 compute-0 sudo[314843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwkmeqdvdrvltmafqdfgfvgipqtexix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898225.9614093-1182-130021312254251/AnsiballZ_stat.py'
Dec 05 01:30:26 compute-0 sudo[314843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:26 compute-0 python3.9[314845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:26 compute-0 sudo[314843]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:26 compute-0 ceph-mon[192914]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:27 compute-0 sudo[314929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfgxpwpcynsjpftnlykbqcwvmrvtbsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898225.9614093-1182-130021312254251/AnsiballZ_file.py'
Dec 05 01:30:27 compute-0 sudo[314929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:27 compute-0 sudo[314911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:27 compute-0 sudo[314911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:27 compute-0 sudo[314911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.301252) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227301306, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1795, "num_deletes": 250, "total_data_size": 3041035, "memory_usage": 3083944, "flush_reason": "Manual Compaction"}
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227316795, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1720154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11729, "largest_seqno": 13523, "table_properties": {"data_size": 1714302, "index_size": 2927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14541, "raw_average_key_size": 20, "raw_value_size": 1701411, "raw_average_value_size": 2346, "num_data_blocks": 136, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898023, "oldest_key_time": 1764898023, "file_creation_time": 1764898227, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
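
Everything rocksdb prints after an `EVENT_LOG_v1 ` marker, like the table_file_creation record above, is a single JSON object, so these lines can be decoded directly rather than grepped:

```python
import json

MARKER = "EVENT_LOG_v1 "

def rocksdb_event(line):
    """Decode the JSON payload of a rocksdb EVENT_LOG_v1 journal line, else None."""
    idx = line.find(MARKER)
    return json.loads(line[idx + len(MARKER):]) if idx != -1 else None

ev = rocksdb_event('rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227301306, '
                   '"job": 11, "event": "flush_started"}')
print(ev["event"])  # flush_started
```
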
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15690 microseconds, and 8345 cpu microseconds.
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.316887) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1720154 bytes OK
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.316969) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319714) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319735) EVENT_LOG_v1 {"time_micros": 1764898227319728, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3033452, prev total WAL file size 3033452, number of live WAL files 2.
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.321469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1679KB)], [29(7640KB)]
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227321525, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9544233, "oldest_snapshot_seqno": -1}
Dec 05 01:30:27 compute-0 sudo[314949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3995 keys, 7506489 bytes, temperature: kUnknown
Dec 05 01:30:27 compute-0 sudo[314949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227363137, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7506489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7478069, "index_size": 17302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 94986, "raw_average_key_size": 23, "raw_value_size": 7404293, "raw_average_value_size": 1853, "num_data_blocks": 755, "num_entries": 3995, "num_filter_entries": 3995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898227, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.363358) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7506489 bytes
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.365459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.0 rd, 180.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.4) OK, records in: 4411, records dropped: 416 output_compression: NoCompression
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.365478) EVENT_LOG_v1 {"time_micros": 1764898227365469, "job": 12, "event": "compaction_finished", "compaction_time_micros": 41681, "compaction_time_cpu_micros": 18252, "output_level": 6, "num_output_files": 1, "total_output_size": 7506489, "num_input_records": 4411, "num_output_records": 3995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
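
The summary figures in the "compacted to" line re-derive exactly from the event records: 9,544,233 input bytes read over 41,681 µs is the 229.0 MB/s rate (bytes per microsecond equal MB per second), and the amplification factors are ratios against the 1,720,154-byte L0 flush that triggered the job:

```python
input_bytes = 9_544_233   # input_data_size from compaction_started
output_bytes = 7_506_489  # total_output_size from compaction_finished
l0_bytes = 1_720_154      # size of freshly flushed L0 table #31
micros = 41_681           # compaction_time_micros

print(input_bytes / micros)                     # 229.0 -> "MB/sec: 229.0 rd"
print(output_bytes / micros)                    # 180.1 -> "180.1 wr"
print((input_bytes + output_bytes) / l0_bytes)  # 9.9   -> "read-write-amplify(9.9)"
print(output_bytes / l0_bytes)                  # 4.4   -> "write-amplify(4.4)"
```
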
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227365815, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227366953, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.321288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:30:27 compute-0 sudo[314949]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:27 compute-0 python3.9[314944]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:27 compute-0 sudo[314929]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:27 compute-0 sudo[314974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:27 compute-0 sudo[314974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:27 compute-0 sudo[314974]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:27 compute-0 sudo[315000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:30:27 compute-0 sudo[315000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:27 compute-0 podman[315033]: 2025-12-05 01:30:27.71261189 +0000 UTC m=+0.120582804 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:30:28 compute-0 sudo[315000]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cf29278d-c9e2-4f2a-8847-4e58fa987b5d does not exist
Dec 05 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69d778fd-78ce-465f-9011-9d82589199e4 does not exist
Dec 05 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cfd01f4e-f4c1-4e3e-848e-bd344e04b8ac does not exist
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
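
Each audit cmd=[…] entry above maps one-to-one onto a read-only CLI call, so cephadm's refresh can be reproduced by hand when debugging (assumes an admin keyring on the host; `auth get` is skipped here because it prints secrets):

```python
import json
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

print(ceph("config", "generate-minimal-conf"))
tree = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
print([n["name"] for n in tree.get("nodes", [])])  # destroyed OSDs, if any
```
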
Dec 05 01:30:28 compute-0 sudo[315234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzquqherjdnketqgqhjxxampkpnqvmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898227.804968-1194-139042382001377/AnsiballZ_stat.py'
Dec 05 01:30:28 compute-0 sudo[315234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:28 compute-0 sudo[315211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:28 compute-0 sudo[315211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:28 compute-0 sudo[315211]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:28 compute-0 sudo[315249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:30:28 compute-0 sudo[315249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:28 compute-0 sudo[315249]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:28 compute-0 python3.9[315246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:28 compute-0 sudo[315274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:28 compute-0 sudo[315274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:28 compute-0 sudo[315274]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:28 compute-0 sudo[315234]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:28 compute-0 sudo[315301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
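
That ceph-volume call is cephadm turning the default_drive_group spec into OSDs on the three pre-created LVs. When checking such a step by hand, the same batch invocation can be previewed without touching the devices via ceph-volume's --report dry-run flag:

```python
import subprocess

# Mirror the logged device list; --report previews instead of creating OSDs.
devices = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
subprocess.run(
    ["ceph-volume", "lvm", "batch", "--no-auto", *devices,
     "--report", "--format", "json"],
    check=True,
)
```
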
Dec 05 01:30:28 compute-0 sudo[315301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.294024576 +0000 UTC m=+0.067985133 container create b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:30:29 compute-0 systemd[1]: Started libpod-conmon-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope.
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.269200732 +0000 UTC m=+0.043161339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:29 compute-0 ceph-mon[192914]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.426773998 +0000 UTC m=+0.200734645 container init b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.444767062 +0000 UTC m=+0.218727649 container start b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.451381307 +0000 UTC m=+0.225341954 container attach b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:30:29 compute-0 systemd[1]: libpod-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope: Deactivated successfully.
Dec 05 01:30:29 compute-0 xenodochial_mayer[315380]: 167 167
Dec 05 01:30:29 compute-0 conmon[315380]: conmon b71e339e05db0027ddd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope/container/memory.events
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.459069872 +0000 UTC m=+0.233030469 container died b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-70137407807c0c71aa720abb61c99e7cad221592bbe4985fb92d1100314650ec-merged.mount: Deactivated successfully.
Dec 05 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.543684819 +0000 UTC m=+0.317645376 container remove b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:30:29 compute-0 systemd[1]: libpod-conmon-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope: Deactivated successfully.
Dec 05 01:30:29 compute-0 podman[158197]: time="2025-12-05T01:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec 05 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.824124193 +0000 UTC m=+0.096540531 container create 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:30:29 compute-0 sudo[315487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlrcvubvqlmvwykzxegilvfefqzoebxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898227.804968-1194-139042382001377/AnsiballZ_file.py'
Dec 05 01:30:29 compute-0 sudo[315487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:29 compute-0 systemd[1]: Started libpod-conmon-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope.
Dec 05 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.79078613 +0000 UTC m=+0.063202508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.971226288 +0000 UTC m=+0.243642616 container init 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:30 compute-0 podman[315449]: 2025-12-05 01:30:30.007736369 +0000 UTC m=+0.280152697 container start 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:30:30 compute-0 podman[315449]: 2025-12-05 01:30:30.013621064 +0000 UTC m=+0.286037392 container attach 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:30:30 compute-0 python3.9[315491]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:30 compute-0 sudo[315487]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:30 compute-0 sudo[315660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyeydvvihytciqusvwadtoqoeqspocqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898230.3989785-1207-239877998666753/AnsiballZ_command.py'
Dec 05 01:30:30 compute-0 sudo[315660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:31 compute-0 python3.9[315665]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:30:31 compute-0 sudo[315660]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:31 compute-0 distracted_mayer[315494]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:30:31 compute-0 distracted_mayer[315494]: --> relative data size: 1.0
Dec 05 01:30:31 compute-0 distracted_mayer[315494]: --> All data devices are unavailable
Dec 05 01:30:31 compute-0 systemd[1]: libpod-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Deactivated successfully.
Dec 05 01:30:31 compute-0 systemd[1]: libpod-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Consumed 1.146s CPU time.
Dec 05 01:30:31 compute-0 podman[315449]: 2025-12-05 01:30:31.213929239 +0000 UTC m=+1.486345577 container died 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86-merged.mount: Deactivated successfully.
Dec 05 01:30:31 compute-0 podman[315449]: 2025-12-05 01:30:31.297563798 +0000 UTC m=+1.569980126 container remove 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:30:31 compute-0 systemd[1]: libpod-conmon-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Deactivated successfully.
Dec 05 01:30:31 compute-0 sudo[315301]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:30:31 compute-0 sudo[315693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:31 compute-0 sudo[315693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:31 compute-0 sudo[315693]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:31 compute-0 sudo[315718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:30:31 compute-0 sshd-session[315499]: Connection reset by authenticating user root 45.135.232.92 port 42662 [preauth]
Dec 05 01:30:31 compute-0 sudo[315718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:31 compute-0 sudo[315718]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:31 compute-0 ceph-mon[192914]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:31 compute-0 sudo[315743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:31 compute-0 sudo[315743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:31 compute-0 sudo[315743]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:31 compute-0 sudo[315769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:30:31 compute-0 sudo[315769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.331190791 +0000 UTC m=+0.081259234 container create 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.280414601 +0000 UTC m=+0.030483064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:32 compute-0 systemd[1]: Started libpod-conmon-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope.
Dec 05 01:30:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.582313016 +0000 UTC m=+0.332381479 container init 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.601382129 +0000 UTC m=+0.351450612 container start 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:30:32 compute-0 friendly_allen[315923]: 167 167
Dec 05 01:30:32 compute-0 systemd[1]: libpod-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope: Deactivated successfully.
Dec 05 01:30:32 compute-0 conmon[315923]: conmon 20e64b30bf05a60e280c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope/container/memory.events
Dec 05 01:30:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.70294052 +0000 UTC m=+0.453008983 container attach 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.70328941 +0000 UTC m=+0.453357853 container died 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:32 compute-0 sudo[316011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjlboqgcwtyseoqgqcleknsaxotfdsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898232.0997279-1215-171351066308418/AnsiballZ_blockinfile.py'
Dec 05 01:30:32 compute-0 sudo[316011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:32 compute-0 ceph-mon[192914]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c621b018d240c442ac28991e7c88ce2b1332cf6c6c96ab82041bf70661333363-merged.mount: Deactivated successfully.
Dec 05 01:30:33 compute-0 podman[315907]: 2025-12-05 01:30:32.999218586 +0000 UTC m=+0.749287059 container remove 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:30:33 compute-0 systemd[1]: libpod-conmon-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope: Deactivated successfully.
Dec 05 01:30:33 compute-0 python3.9[316013]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:33 compute-0 sudo[316011]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:33 compute-0 sshd-session[315766]: Invalid user support from 45.135.232.92 port 42694
Dec 05 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.280541006 +0000 UTC m=+0.081705777 container create 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.24424754 +0000 UTC m=+0.045412381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:33 compute-0 systemd[1]: Started libpod-conmon-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope.
Dec 05 01:30:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.519095629 +0000 UTC m=+0.320260410 container init 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:30:33 compute-0 sshd-session[315766]: Connection reset by invalid user support 45.135.232.92 port 42694 [preauth]
Dec 05 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.532436402 +0000 UTC m=+0.333601133 container start 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.58134985 +0000 UTC m=+0.382514611 container attach 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:30:33 compute-0 sudo[316195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgckrjxuyunelrtitgxqnzcddgrbdslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898233.4229956-1224-259352081751953/AnsiballZ_command.py'
Dec 05 01:30:33 compute-0 sudo[316195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:34 compute-0 python3.9[316197]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:30:34 compute-0 sudo[316195]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:34 compute-0 busy_driscoll[316082]: {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     "0": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "devices": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "/dev/loop3"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             ],
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_name": "ceph_lv0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_size": "21470642176",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "name": "ceph_lv0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "tags": {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_name": "ceph",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.crush_device_class": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.encrypted": "0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_id": "0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.vdo": "0"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             },
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "vg_name": "ceph_vg0"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         }
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     ],
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     "1": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "devices": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "/dev/loop4"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             ],
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_name": "ceph_lv1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_size": "21470642176",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "name": "ceph_lv1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "tags": {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_name": "ceph",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.crush_device_class": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.encrypted": "0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_id": "1",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.vdo": "0"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             },
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "vg_name": "ceph_vg1"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         }
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     ],
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     "2": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "devices": [
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "/dev/loop5"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             ],
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_name": "ceph_lv2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_size": "21470642176",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "name": "ceph_lv2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "tags": {
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.cluster_name": "ceph",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.crush_device_class": "",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.encrypted": "0",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osd_id": "2",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:                 "ceph.vdo": "0"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             },
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "type": "block",
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:             "vg_name": "ceph_vg2"
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:         }
Dec 05 01:30:34 compute-0 busy_driscoll[316082]:     ]
Dec 05 01:30:34 compute-0 busy_driscoll[316082]: }
Dec 05 01:30:34 compute-0 systemd[1]: libpod-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope: Deactivated successfully.
Dec 05 01:30:34 compute-0 podman[316048]: 2025-12-05 01:30:34.370655309 +0000 UTC m=+1.171820110 container died 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48-merged.mount: Deactivated successfully.
Dec 05 01:30:34 compute-0 podman[316048]: 2025-12-05 01:30:34.49582905 +0000 UTC m=+1.296993781 container remove 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:30:34 compute-0 systemd[1]: libpod-conmon-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope: Deactivated successfully.
Dec 05 01:30:34 compute-0 sudo[315769]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:34 compute-0 sudo[316297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:34 compute-0 sudo[316297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:34 compute-0 sudo[316297]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:34 compute-0 sudo[316353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:30:34 compute-0 sudo[316353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:34 compute-0 sudo[316353]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:34 compute-0 sudo[316434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhzafbeqqxofdpfyprakdefiikektxvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898234.3878043-1232-231212201810358/AnsiballZ_stat.py'
Dec 05 01:30:34 compute-0 sudo[316434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:34 compute-0 sudo[316398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:34 compute-0 sudo[316398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:34 compute-0 sudo[316398]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:34 compute-0 sudo[316444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:30:34 compute-0 sudo[316444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:35 compute-0 python3.9[316441]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:30:35 compute-0 sudo[316434]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.456595875 +0000 UTC m=+0.077915311 container create 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.42102742 +0000 UTC m=+0.042346896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:35 compute-0 systemd[1]: Started libpod-conmon-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope.
Dec 05 01:30:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.590066338 +0000 UTC m=+0.211385824 container init 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.603135524 +0000 UTC m=+0.224454940 container start 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.609367398 +0000 UTC m=+0.230686894 container attach 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:30:35 compute-0 vibrant_hellman[316601]: 167 167
Dec 05 01:30:35 compute-0 systemd[1]: libpod-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope: Deactivated successfully.
Dec 05 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.613567725 +0000 UTC m=+0.234887131 container died 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:30:35 compute-0 sudo[316721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyswuiqgvjqkajghejteqcdtqngxhjis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898235.368012-1241-75194500818055/AnsiballZ_file.py'
Dec 05 01:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e4f89b541516f3292e48723b1fbf8dc1b75ce3ee96835bfea7fdeb52c11c20e-merged.mount: Deactivated successfully.
Dec 05 01:30:35 compute-0 sudo[316721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:36 compute-0 ceph-mon[192914]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:36 compute-0 podman[316558]: 2025-12-05 01:30:36.03200866 +0000 UTC m=+0.653328046 container remove 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:30:36 compute-0 python3.9[316724]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:36 compute-0 systemd[1]: libpod-conmon-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope: Deactivated successfully.
Dec 05 01:30:36 compute-0 podman[316669]: 2025-12-05 01:30:36.121154204 +0000 UTC m=+0.346456422 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 05 01:30:36 compute-0 podman[316666]: 2025-12-05 01:30:36.128126629 +0000 UTC m=+0.357656526 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:30:36 compute-0 podman[316668]: 2025-12-05 01:30:36.130169106 +0000 UTC m=+0.359907338 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:30:36 compute-0 sudo[316721]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:36 compute-0 podman[316723]: 2025-12-05 01:30:36.188435916 +0000 UTC m=+0.295101346 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.276212941 +0000 UTC m=+0.059222617 container create df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:30:36 compute-0 systemd[1]: Started libpod-conmon-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope.
Dec 05 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.256922182 +0000 UTC m=+0.039931898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:30:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.41417812 +0000 UTC m=+0.197187886 container init df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.439150459 +0000 UTC m=+0.222160175 container start df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.445112436 +0000 UTC m=+0.228122122 container attach df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:30:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:36 compute-0 sudo[316954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huwuzaymlxrtmzugwenzvuulhqhsaeka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898236.4198906-1249-186880169612966/AnsiballZ_stat.py'
Dec 05 01:30:36 compute-0 sudo[316954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:37 compute-0 ceph-mon[192914]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:37 compute-0 python3.9[316956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:37 compute-0 sudo[316954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:37 compute-0 sudo[317058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wialqngrmkbfshxbnbuanwytrtkpcror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898236.4198906-1249-186880169612966/AnsiballZ_file.py'
Dec 05 01:30:37 compute-0 sudo[317058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:37 compute-0 serene_albattani[316837]: {
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_id": 0,
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "type": "bluestore"
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     },
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_id": 1,
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "type": "bluestore"
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     },
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_id": 2,
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:30:37 compute-0 serene_albattani[316837]:         "type": "bluestore"
Dec 05 01:30:37 compute-0 serene_albattani[316837]:     }
Dec 05 01:30:37 compute-0 serene_albattani[316837]: }
Dec 05 01:30:37 compute-0 sshd-session[316157]: Invalid user admin from 45.135.232.92 port 42700
Dec 05 01:30:37 compute-0 systemd[1]: libpod-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Deactivated successfully.
Dec 05 01:30:37 compute-0 systemd[1]: libpod-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Consumed 1.181s CPU time.
Dec 05 01:30:37 compute-0 podman[316790]: 2025-12-05 01:30:37.622867329 +0000 UTC m=+1.405877005 container died df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6-merged.mount: Deactivated successfully.
Dec 05 01:30:37 compute-0 podman[316790]: 2025-12-05 01:30:37.710214802 +0000 UTC m=+1.493224488 container remove df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:30:37 compute-0 systemd[1]: libpod-conmon-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Deactivated successfully.
Dec 05 01:30:37 compute-0 sudo[316444]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:30:37 compute-0 python3.9[317060]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:30:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1b6fc748-2bc6-4708-b635-3cbeae3bc37d does not exist
Dec 05 01:30:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 480e9877-b788-443d-8d38-f61e007658b5 does not exist
Dec 05 01:30:37 compute-0 sudo[317058]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:37 compute-0 sudo[317074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:30:37 compute-0 sudo[317074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:37 compute-0 sudo[317074]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:37 compute-0 sshd-session[316157]: Connection reset by invalid user admin 45.135.232.92 port 42700 [preauth]
Dec 05 01:30:37 compute-0 sudo[317122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:30:37 compute-0 sudo[317122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:30:37 compute-0 sudo[317122]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:38 compute-0 sudo[317275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxrzbzdypbmmbglhhauqpswogoxlzdfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898238.0149004-1261-174542953468318/AnsiballZ_stat.py'
Dec 05 01:30:38 compute-0 sudo[317275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:38 compute-0 python3.9[317277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:30:38 compute-0 ceph-mon[192914]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:38 compute-0 sudo[317275]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:39 compute-0 sshd-session[317197]: Connection reset by authenticating user root 45.135.232.92 port 25796 [preauth]
Dec 05 01:30:39 compute-0 sudo[317353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzlvxbmiscgosqscygqtlhewxtlrnyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898238.0149004-1261-174542953468318/AnsiballZ_file.py'
Dec 05 01:30:39 compute-0 sudo[317353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:39 compute-0 python3.9[317355]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:39 compute-0 sudo[317353]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:41 compute-0 sudo[317507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljhkopmqubxfdatoqtdtvekvbpeexutq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898240.0470574-1273-83992884323651/AnsiballZ_stat.py'
Dec 05 01:30:41 compute-0 sudo[317507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:41 compute-0 python3.9[317509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:30:41 compute-0 sudo[317507]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:41 compute-0 ceph-mon[192914]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:41 compute-0 podman[317515]: 2025-12-05 01:30:41.748128611 +0000 UTC m=+0.149559005 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec 05 01:30:41 compute-0 sudo[317605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgqybrfrymmyfinxvauxdchetmvssqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898240.0470574-1273-83992884323651/AnsiballZ_file.py'
Dec 05 01:30:41 compute-0 sudo[317605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:42 compute-0 python3.9[317607]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:30:42 compute-0 sudo[317605]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.546 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.547 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
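
The discovery/skip pairs above show the per-pollster decision: run the named discovery method, and if it yields no resources this cycle, skip the pollster entirely. A compact sketch of that control flow, with illustrative function names rather than ceilometer's own:

    def discover(method_name, cache):
        # the real AgentManager.discover calls the compute driver; here the
        # cache already holds the empty instance list seen in the log
        return cache.get(method_name, [])

    def run_pollster(name, method_name, cache):
        resources = discover(method_name, cache)
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return []  # otherwise: poll each resource and publish samples

    run_pollster('network.incoming.packets.drop', 'local_instances',
                 {'local_instances': []})
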
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:30:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:42 compute-0 sshd-session[317356]: Connection reset by authenticating user root 45.135.232.92 port 25800 [preauth]
Dec 05 01:30:43 compute-0 sshd-session[287755]: Connection closed by 192.168.122.30 port 60202
Dec 05 01:30:43 compute-0 sshd-session[287740]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:30:43 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 05 01:30:43 compute-0 systemd[1]: session-55.scope: Consumed 2min 52.570s CPU time.
Dec 05 01:30:43 compute-0 systemd-logind[792]: Session 55 logged out. Waiting for processes to exit.
Dec 05 01:30:43 compute-0 systemd-logind[792]: Removed session 55.
Dec 05 01:30:43 compute-0 ceph-mon[192914]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:45 compute-0 ceph-mon[192914]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
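
The ceph-mgr/ceph-mon pgmap lines recur every second or two with a fixed shape, which makes them easy to extract for trending. A small parser written against this report format (an editorial sketch, not part of any tooling in the log):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, "
            "148 MiB used, 60 GiB / 60 GiB avail")
    print(PGMAP_RE.search(line).groupdict())
    # {'version': '633', 'pgs': '321', 'data': '456 KiB',
    #  'used': '148 MiB', 'avail': '60 GiB', 'total': '60 GiB'}
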
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:30:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:47 compute-0 ceph-mon[192914]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:48 compute-0 podman[317633]: 2025-12-05 01:30:48.711290693 +0000 UTC m=+0.110418650 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec 05 01:30:48 compute-0 ceph-mon[192914]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:49 compute-0 sshd-session[317653]: Accepted publickey for zuul from 192.168.122.30 port 40830 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:30:49 compute-0 systemd-logind[792]: New session 56 of user zuul.
Dec 05 01:30:49 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec 05 01:30:49 compute-0 sshd-session[317653]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:30:50 compute-0 podman[317781]: 2025-12-05 01:30:50.505778929 +0000 UTC m=+0.101756118 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:30:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:50 compute-0 python3.9[317822]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:30:51 compute-0 ceph-mon[192914]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:53 compute-0 python3.9[317985]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:30:53 compute-0 network[318002]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec 05 01:30:53 compute-0 network[318003]: 'network-scripts' will be removed from the distribution in the near future.
Dec 05 01:30:53 compute-0 network[318004]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:30:53 compute-0 ceph-mon[192914]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:55 compute-0 ceph-mon[192914]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.158 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
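
The three lockutils lines above are the standard oslo.concurrency pattern: acquire a named lock, record how long the caller waited, run the guarded method, and log how long the lock was held. A stdlib-only sketch of the same bookkeeping, with a hypothetical synchronized() decorator standing in for oslo's:

    import threading, time

    _locks = {}

    def synchronized(name):               # illustrative stand-in for lockutils
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    print(f'Lock "{name}" acquired :: waited '
                          f'{time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" released :: held '
                              f'{time.monotonic() - t1:.3f}s')
            return inner
        return wrap

    @synchronized("_check_child_processes")
    def _check_child_processes():
        pass                              # the real monitor inspects child daemons

    _check_child_processes()
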
Dec 05 01:30:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:56 compute-0 ceph-mon[192914]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:30:58 compute-0 sudo[318284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlhmfgxthifqqzivyxeiotmddyrzbrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898257.9824789-47-244314264808002/AnsiballZ_setup.py'
Dec 05 01:30:58 compute-0 sudo[318284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:58 compute-0 podman[318247]: 2025-12-05 01:30:58.507326117 +0000 UTC m=+0.103527647 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 01:30:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:58 compute-0 python3.9[318294]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:30:59 compute-0 sudo[318284]: pam_unix(sudo:session): session closed for user root
Dec 05 01:30:59 compute-0 sudo[318376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghnxpfdbvtqvoqzezarbhoraxhencpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898257.9824789-47-244314264808002/AnsiballZ_dnf.py'
Dec 05 01:30:59 compute-0 sudo[318376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:30:59 compute-0 ceph-mon[192914]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:30:59 compute-0 podman[158197]: time="2025-12-05T01:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Dec 05 01:30:59 compute-0 python3.9[318378]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:31:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:00 compute-0 ceph-mon[192914]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:01 compute-0 sudo[318376]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
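
The exporter errors above mean no control sockets (*.ctl files) were found for the daemons it tried to query; on a compute node that runs ovn-controller but not ovn-northd, the northd lookups are expected to fail. A quick way to see which control sockets actually exist, assuming the conventional runtime directories (paths are defaults, not taken from this log):

    import glob

    # conventional control-socket locations; adjust if the host overrides them
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
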
Dec 05 01:31:02 compute-0 sudo[318529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aydglvqwimbhldmuifxnezewkqhsswbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898261.5864587-59-1933159131050/AnsiballZ_stat.py'
Dec 05 01:31:02 compute-0 sudo[318529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:02 compute-0 python3.9[318531]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:31:02 compute-0 sudo[318529]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:02 compute-0 ceph-mon[192914]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:03 compute-0 sudo[318681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anicqmnzozkxqeybodccsibbscrclslk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898263.2116091-69-245734163508270/AnsiballZ_command.py'
Dec 05 01:31:03 compute-0 sudo[318681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:04 compute-0 python3.9[318683]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:31:04 compute-0 sudo[318681]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:05 compute-0 sudo[318834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvcfzohjkboxmxzykvjgxoeymggyayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898264.509853-79-99208193694564/AnsiballZ_stat.py'
Dec 05 01:31:05 compute-0 sudo[318834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:05 compute-0 ceph-mon[192914]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:05 compute-0 python3.9[318836]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:31:05 compute-0 sudo[318834]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:06 compute-0 sudo[319047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzuazloooxgntxwsqsohyfcwmabrtpyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898266.1902783-87-165683054225013/AnsiballZ_command.py'
Dec 05 01:31:06 compute-0 podman[318960]: 2025-12-05 01:31:06.715692289 +0000 UTC m=+0.103590178 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 01:31:06 compute-0 sudo[319047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:06 compute-0 podman[318961]: 2025-12-05 01:31:06.721457661 +0000 UTC m=+0.105629566 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:31:06 compute-0 podman[318962]: 2025-12-05 01:31:06.729512566 +0000 UTC m=+0.107143248 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 05 01:31:06 compute-0 podman[318963]: 2025-12-05 01:31:06.786017666 +0000 UTC m=+0.162140906 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 01:31:06 compute-0 python3.9[319063]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:31:06 compute-0 sudo[319047]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:06 compute-0 ceph-mon[192914]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:07 compute-0 sudo[319223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdblxmxbdybdjocukfshxmmdkqwfczj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898267.1762218-95-200585234844741/AnsiballZ_stat.py'
Dec 05 01:31:07 compute-0 sudo[319223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:07 compute-0 python3.9[319225]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:07 compute-0 sudo[319223]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:08 compute-0 sudo[319346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqwxnnkehbvgidoqawyfqrubnngnnhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898267.1762218-95-200585234844741/AnsiballZ_copy.py'
Dec 05 01:31:08 compute-0 sudo[319346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:08 compute-0 python3.9[319348]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898267.1762218-95-200585234844741/.source.iscsi _original_basename=.6wij8x2y follow=False checksum=01b6663853d932e08cc55f332ece7cb3fc654e0a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:08 compute-0 sudo[319346]: pam_unix(sudo:session): session closed for user root
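
The last few tasks generate a fresh IQN with /usr/sbin/iscsi-iname and install it as /etc/iscsi/initiatorname.iscsi with mode 0644. A sketch of the equivalent steps, writing to a scratch path rather than /etc:

    import pathlib, subprocess

    iqn = subprocess.run(["/usr/sbin/iscsi-iname"], capture_output=True,
                         text=True, check=True).stdout.strip()
    target = pathlib.Path("/tmp/initiatorname.iscsi")  # /etc/iscsi/... in the play
    target.write_text(f"InitiatorName={iqn}\n")
    target.chmod(0o644)
    print(target.read_text(), end="")
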
Dec 05 01:31:09 compute-0 ceph-mon[192914]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:09 compute-0 sudo[319498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubmikxheallblbtumbtbxfamxhmmpqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898269.200066-110-186987566183048/AnsiballZ_file.py'
Dec 05 01:31:09 compute-0 sudo[319498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:10 compute-0 python3.9[319500]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:10 compute-0 sudo[319498]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:11 compute-0 sudo[319650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvxtrevtvildpmarraeddztcaaoubip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898270.329585-118-39358119976746/AnsiballZ_lineinfile.py'
Dec 05 01:31:11 compute-0 sudo[319650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:11 compute-0 python3.9[319652]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:11 compute-0 sudo[319650]: pam_unix(sudo:session): session closed for user root
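
The lineinfile task above ensures iscsid.conf carries the chap_algs setting, replacing a matching line if one exists and otherwise inserting after the commented default. A minimal re-implementation of that regexp/insertafter behaviour, for illustration only:

    import re

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def ensure_line(text, regexp, insertafter, line):
        lines = text.splitlines()
        pat, anchor = re.compile(regexp), re.compile(insertafter)
        for i, existing in enumerate(lines):
            if pat.match(existing):
                lines[i] = line            # replace the existing setting
                break
        else:
            # insert after the last anchor match, or at EOF if none matches
            idx = max((i for i, l in enumerate(lines) if anchor.match(l)),
                      default=len(lines) - 1)
            lines.insert(idx + 1, line)
        return "\n".join(lines) + "\n"

    conf = ("#node.session.auth.chap.algs = SHA3-256,SHA256,SHA1,MD5\n"
            "node.session.timeo.replacement_timeout = 120\n")
    print(ensure_line(conf, r"^node.session.auth.chap_algs",
                      r"^#node.session.auth.chap.algs", LINE))
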
Dec 05 01:31:11 compute-0 ceph-mon[192914]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:12 compute-0 sudo[319819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpxqsdbeeboonxxhbvaqsxcoheuclmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898271.5714853-127-155074208364852/AnsiballZ_systemd_service.py'
Dec 05 01:31:12 compute-0 sudo[319819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:12 compute-0 podman[319776]: 2025-12-05 01:31:12.421607706 +0000 UTC m=+0.116294105 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, distribution-scope=public, release-0.7.12=)
Dec 05 01:31:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:12 compute-0 python3.9[319824]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:31:12 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 05 01:31:12 compute-0 sudo[319819]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:13 compute-0 sudo[319978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxswuofiobqkcznuqbkdqwtolznvciyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898273.1497834-135-201723316065445/AnsiballZ_systemd_service.py'
Dec 05 01:31:13 compute-0 sudo[319978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:13 compute-0 ceph-mon[192914]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:14 compute-0 python3.9[319980]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:31:14 compute-0 systemd[1]: Reloading.
Dec 05 01:31:14 compute-0 systemd-rc-local-generator[320007]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:31:14 compute-0 systemd-sysv-generator[320011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec 05 01:31:14 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 05 01:31:14 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 05 01:31:14 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 05 01:31:14 compute-0 systemd[1]: Started Open-iSCSI.
Dec 05 01:31:14 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec 05 01:31:14 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec 05 01:31:14 compute-0 sudo[319978]: pam_unix(sudo:session): session closed for user root
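
The two systemd_service tasks amount to enabling and starting iscsid.socket and then iscsid itself, which is what triggered the unit activity above. The same effect from a script (requires root; `--now` folds the separate start step into the enable):

    import subprocess

    for unit in ("iscsid.socket", "iscsid.service"):
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)
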
Dec 05 01:31:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:15 compute-0 ceph-mon[192914]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:31:16
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:16 compute-0 sudo[320179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rznlhshrhorplmtpqhigdlucjcjxlkon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898275.136752-146-16428391244811/AnsiballZ_service_facts.py'
Dec 05 01:31:16 compute-0 sudo[320179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:31:16 compute-0 python3.9[320181]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:31:16 compute-0 network[320198]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec 05 01:31:16 compute-0 network[320199]: 'network-scripts' will be removed from the distribution in the near future.
Dec 05 01:31:16 compute-0 network[320200]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:31:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:16 compute-0 ceph-mon[192914]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:18 compute-0 podman[320236]: 2025-12-05 01:31:18.89400788 +0000 UTC m=+0.123597500 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter)
Dec 05 01:31:19 compute-0 ceph-mon[192914]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:20 compute-0 podman[320307]: 2025-12-05 01:31:20.680315507 +0000 UTC m=+0.114443804 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:31:21 compute-0 sudo[320179]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:21 compute-0 ceph-mon[192914]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:22 compute-0 sudo[320515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjgbzgoeckddrjfquxfvyrhlsiscuhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898282.1367798-156-236649422903372/AnsiballZ_file.py'
Dec 05 01:31:22 compute-0 sudo[320515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:22 compute-0 ceph-mon[192914]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:22 compute-0 python3.9[320517]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 01:31:22 compute-0 sudo[320515]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:23 compute-0 sudo[320667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yanfzueeicosfusidhnxqfjzhonmtozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898283.2094908-164-151303451940595/AnsiballZ_modprobe.py'
Dec 05 01:31:23 compute-0 sudo[320667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:24 compute-0 python3.9[320669]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 05 01:31:24 compute-0 sudo[320667]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:24 compute-0 sudo[320823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smmezcxigoewjsptugxggolrurmqawsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898284.4031699-172-104956794601973/AnsiballZ_stat.py'
Dec 05 01:31:24 compute-0 sudo[320823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:25 compute-0 python3.9[320825]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:25 compute-0 sudo[320823]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:25 compute-0 sudo[320946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjsbmlofzoopomjhxnuarltcidgwvgbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898284.4031699-172-104956794601973/AnsiballZ_copy.py'
Dec 05 01:31:25 compute-0 sudo[320946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:25 compute-0 ceph-mon[192914]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:25 compute-0 python3.9[320948]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898284.4031699-172-104956794601973/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:25 compute-0 sudo[320946]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:31:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:26 compute-0 sudo[321098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgghpvlfwjqmjnhdelxvhecycntdmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898286.2560737-188-156699792841125/AnsiballZ_lineinfile.py'
Dec 05 01:31:26 compute-0 sudo[321098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:27 compute-0 python3.9[321100]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:27 compute-0 sudo[321098]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:27 compute-0 ceph-mon[192914]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:28 compute-0 sudo[321269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjuvahgjqxtgqhjxvuljgaszpjbuowtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898287.3501325-196-36854318756617/AnsiballZ_systemd.py'
Dec 05 01:31:28 compute-0 sudo[321269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:28 compute-0 podman[321223]: 2025-12-05 01:31:28.699380671 +0000 UTC m=+0.108074687 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 01:31:29 compute-0 python3.9[321271]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:31:29 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 01:31:29 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 05 01:31:29 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 05 01:31:29 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 01:31:29 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 01:31:29 compute-0 sudo[321269]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:29 compute-0 podman[158197]: time="2025-12-05T01:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:31:29 compute-0 ceph-mon[192914]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec 05 01:31:29 compute-0 sudo[321425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvdmqsutzfdwbwpedcfoydewevgkudxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898289.4240496-204-136932376998114/AnsiballZ_file.py'
Dec 05 01:31:29 compute-0 sudo[321425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:30 compute-0 python3.9[321427]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:31:30 compute-0 sudo[321425]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:30 compute-0 ceph-mon[192914]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:31:31 compute-0 sudo[321577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyoxlhqhksiecrmglpvktkuosemmhghs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898290.4531293-213-155253769887324/AnsiballZ_stat.py'
Dec 05 01:31:31 compute-0 sudo[321577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:32 compute-0 python3.9[321579]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:31:32 compute-0 sudo[321577]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:32 compute-0 sudo[321729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deaxaemrsatmblsgqlycnvqrtexcfmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898292.3424203-222-169645745731339/AnsiballZ_stat.py'
Dec 05 01:31:32 compute-0 sudo[321729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:32 compute-0 ceph-mon[192914]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:33 compute-0 python3.9[321731]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:31:33 compute-0 sudo[321729]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:33 compute-0 sudo[321881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egzxhcefuoxswygxmeaemsuglfyqrtti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898293.319895-230-199805032067491/AnsiballZ_stat.py'
Dec 05 01:31:33 compute-0 sudo[321881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:34 compute-0 python3.9[321883]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:34 compute-0 sudo[321881]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:34 compute-0 sudo[322004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdbwaxvcjvqmhxdcfbzhxodjrqmeeifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898293.319895-230-199805032067491/AnsiballZ_copy.py'
Dec 05 01:31:34 compute-0 sudo[322004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:34 compute-0 python3.9[322006]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898293.319895-230-199805032067491/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:34 compute-0 sudo[322004]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:35 compute-0 sudo[322156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atbdacglmwpvldlugfizuneimtwgcvty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898295.1294794-245-197365782360972/AnsiballZ_command.py'
Dec 05 01:31:35 compute-0 sudo[322156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:35 compute-0 ceph-mon[192914]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:35 compute-0 python3.9[322158]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:31:35 compute-0 sudo[322156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:36 compute-0 sudo[322309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbucpjqjhjkqnekxxfnvodlieqamrldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898296.1720257-253-214016973758786/AnsiballZ_lineinfile.py'
Dec 05 01:31:36 compute-0 sudo[322309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:36 compute-0 python3.9[322311]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:36 compute-0 sudo[322309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:37 compute-0 podman[322411]: 2025-12-05 01:31:37.676622742 +0000 UTC m=+0.081137175 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:31:37 compute-0 podman[322416]: 2025-12-05 01:31:37.698078341 +0000 UTC m=+0.091883015 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:31:37 compute-0 podman[322425]: 2025-12-05 01:31:37.723281074 +0000 UTC m=+0.109535257 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:31:37 compute-0 podman[322414]: 2025-12-05 01:31:37.734331712 +0000 UTC m=+0.120124353 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:31:37 compute-0 sudo[322545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygesaiukzknxrezuarlgrozpsgcfwdsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898297.143856-261-160328554786250/AnsiballZ_replace.py'
Dec 05 01:31:37 compute-0 sudo[322545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:37 compute-0 ceph-mon[192914]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:37 compute-0 python3.9[322547]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:37 compute-0 sudo[322545]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 sudo[322554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:38 compute-0 sudo[322554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:38 compute-0 sudo[322554]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 sudo[322597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:31:38 compute-0 sudo[322597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:38 compute-0 sudo[322597]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 sudo[322643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:38 compute-0 sudo[322643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:38 compute-0 sudo[322643]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 sudo[322690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:31:38 compute-0 sudo[322690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:38 compute-0 sudo[322811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abmwrbheglnxxvocgxgelssecarhgyrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898298.2070515-269-221076620086254/AnsiballZ_replace.py'
Dec 05 01:31:38 compute-0 sudo[322811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:38 compute-0 sudo[322690]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:31:38 compute-0 python3.9[322814]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 22df35af-1fff-4771-9b37-811be718a10b does not exist
Dec 05 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb682c7b-c0b1-48cc-8b74-599dc176ddb4 does not exist
Dec 05 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e62674e8-92ae-49b8-a2bc-16ae455fa6d7 does not exist
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:31:38 compute-0 sudo[322811]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:38 compute-0 sudo[322832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:38 compute-0 sudo[322832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:39 compute-0 sudo[322832]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:39 compute-0 sudo[322880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:31:39 compute-0 sudo[322880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:39 compute-0 sudo[322880]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:39 compute-0 sudo[322910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:39 compute-0 sudo[322910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:39 compute-0 sudo[322910]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:39 compute-0 sudo[322961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:31:39 compute-0 sudo[322961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:39 compute-0 sudo[323106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbwdfklyjucgrmzstsxazcnyqbkiwjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898299.1812963-278-130829616814023/AnsiballZ_lineinfile.py'
Dec 05 01:31:39 compute-0 sudo[323106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:39 compute-0 ceph-mon[192914]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:31:39 compute-0 python3.9[323110]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.829211732 +0000 UTC m=+0.113183070 container create d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:31:39 compute-0 sudo[323106]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.76535851 +0000 UTC m=+0.049329888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:39 compute-0 systemd[1]: Started libpod-conmon-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope.
Dec 05 01:31:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.975152044 +0000 UTC m=+0.259123392 container init d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.992398255 +0000 UTC m=+0.276369613 container start d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.999111893 +0000 UTC m=+0.283083281 container attach d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:40 compute-0 focused_matsumoto[323136]: 167 167
Dec 05 01:31:40 compute-0 systemd[1]: libpod-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope: Deactivated successfully.
Dec 05 01:31:40 compute-0 podman[323122]: 2025-12-05 01:31:40.005164452 +0000 UTC m=+0.289135760 container died d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc365e79b3b9aaf465676cc4688fef67a4d3a88d7b030465c185b84873f94f5-merged.mount: Deactivated successfully.
Dec 05 01:31:40 compute-0 podman[323122]: 2025-12-05 01:31:40.079392963 +0000 UTC m=+0.363364281 container remove d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:40 compute-0 systemd[1]: libpod-conmon-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope: Deactivated successfully.
Dec 05 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.354055168 +0000 UTC m=+0.091052852 container create 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.321216401 +0000 UTC m=+0.058214135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:40 compute-0 systemd[1]: Started libpod-conmon-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope.
Dec 05 01:31:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.501761739 +0000 UTC m=+0.238759413 container init 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.521346876 +0000 UTC m=+0.258344570 container start 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.527857198 +0000 UTC m=+0.264854942 container attach 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:40 compute-0 ceph-mon[192914]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:41 compute-0 sudo[323334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqnbmsmgpeyqjuthfqzswakmjxuefslb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898300.673697-278-81590452076586/AnsiballZ_lineinfile.py'
Dec 05 01:31:41 compute-0 sudo[323334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:41 compute-0 python3.9[323339]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:41 compute-0 sudo[323334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:41 compute-0 trusting_ishizaka[323175]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:31:41 compute-0 trusting_ishizaka[323175]: --> relative data size: 1.0
Dec 05 01:31:41 compute-0 trusting_ishizaka[323175]: --> All data devices are unavailable
Dec 05 01:31:41 compute-0 systemd[1]: libpod-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Deactivated successfully.
Dec 05 01:31:41 compute-0 systemd[1]: libpod-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Consumed 1.046s CPU time.
Dec 05 01:31:41 compute-0 podman[323159]: 2025-12-05 01:31:41.652941683 +0000 UTC m=+1.389939357 container died 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3-merged.mount: Deactivated successfully.
Dec 05 01:31:41 compute-0 podman[323159]: 2025-12-05 01:31:41.736204266 +0000 UTC m=+1.473201920 container remove 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:31:41 compute-0 systemd[1]: libpod-conmon-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Deactivated successfully.
Dec 05 01:31:41 compute-0 sudo[322961]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:41 compute-0 sudo[323441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:41 compute-0 sudo[323441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:41 compute-0 sudo[323441]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:41 compute-0 sudo[323492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:31:41 compute-0 sudo[323492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:41 compute-0 sudo[323492]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:42 compute-0 sudo[323517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:42 compute-0 sudo[323517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:42 compute-0 sudo[323517]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:42 compute-0 sudo[323542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:31:42 compute-0 sudo[323542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:42 compute-0 podman[323591]: 2025-12-05 01:31:42.724202147 +0000 UTC m=+0.126086020 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.766442996 +0000 UTC m=+0.081914387 container create 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:31:42 compute-0 sudo[323686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxuvaspwbkftppwmhdhfmkfrnlmggjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898301.7077742-278-244292647974666/AnsiballZ_lineinfile.py'
Dec 05 01:31:42 compute-0 sudo[323686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:42 compute-0 systemd[1]: Started libpod-conmon-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope.
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.736746577 +0000 UTC m=+0.052217998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.869552213 +0000 UTC m=+0.185023624 container init 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.89023878 +0000 UTC m=+0.205710171 container start 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.895225719 +0000 UTC m=+0.210697110 container attach 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:31:42 compute-0 infallible_feistel[323691]: 167 167
Dec 05 01:31:42 compute-0 systemd[1]: libpod-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope: Deactivated successfully.
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.899058976 +0000 UTC m=+0.214530367 container died 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd857a46606b17196855c1e6a9a97182b19a75a37a2844d2b4cad29b6dde7c8-merged.mount: Deactivated successfully.
Dec 05 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.94900394 +0000 UTC m=+0.264475331 container remove 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:31:42 compute-0 systemd[1]: libpod-conmon-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope: Deactivated successfully.
Dec 05 01:31:43 compute-0 python3.9[323690]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:43 compute-0 sudo[323686]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.182825435 +0000 UTC m=+0.075284782 container create 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.153833716 +0000 UTC m=+0.046293133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:43 compute-0 systemd[1]: Started libpod-conmon-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope.
Dec 05 01:31:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.329454437 +0000 UTC m=+0.221913874 container init 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.363101586 +0000 UTC m=+0.255560943 container start 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.368857157 +0000 UTC m=+0.261316594 container attach 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:31:43 compute-0 sudo[323885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmnigbzgvqqipuudlqqowrparmrqnhhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898303.2640429-278-233731621363520/AnsiballZ_lineinfile.py'
Dec 05 01:31:43 compute-0 sudo[323885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:43 compute-0 ceph-mon[192914]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:43 compute-0 python3.9[323887]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:43 compute-0 sudo[323885]: pam_unix(sudo:session): session closed for user root
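[annotation] The two ansible-ansible.builtin.lineinfile invocations above (pids 323690 and 323887) idempotently ensure "skip_kpartx yes" and "user_friendly_names no" exist under the defaults stanza of /etc/multipath.conf. A minimal Python sketch of what such an edit amounts to, assuming the file already contains a "defaults" line; ensure_option is a hypothetical helper written for illustration, not part of the deployment tooling:

    import re

    def ensure_option(path, regexp, line, insert_after=r"^defaults"):
        # Rough approximation of lineinfile with regexp + insertafter:
        # replace the first line matching `regexp`, else insert `line`
        # right after the first line matching `insert_after`.
        with open(path) as f:
            lines = f.read().splitlines()
        pat = re.compile(regexp)
        for i, text in enumerate(lines):
            if pat.search(text):
                lines[i] = line
                break
        else:
            anchor = re.compile(insert_after)
            for i, text in enumerate(lines):
                if anchor.search(text):
                    lines.insert(i + 1, line)
                    break
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    # the two edits seen in the log:
    ensure_option("/etc/multipath.conf", r"^\s+skip_kpartx", "        skip_kpartx yes")
    ensure_option("/etc/multipath.conf", r"^\s+user_friendly_names", "        user_friendly_names no")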
Dec 05 01:31:44 compute-0 awesome_euclid[323778]: {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     "0": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "devices": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "/dev/loop3"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             ],
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_name": "ceph_lv0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_size": "21470642176",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "name": "ceph_lv0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "tags": {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_name": "ceph",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.crush_device_class": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.encrypted": "0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_id": "0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.vdo": "0"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             },
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "vg_name": "ceph_vg0"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         }
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     ],
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     "1": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "devices": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "/dev/loop4"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             ],
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_name": "ceph_lv1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_size": "21470642176",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "name": "ceph_lv1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "tags": {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_name": "ceph",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.crush_device_class": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.encrypted": "0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_id": "1",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.vdo": "0"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             },
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "vg_name": "ceph_vg1"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         }
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     ],
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     "2": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "devices": [
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "/dev/loop5"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             ],
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_name": "ceph_lv2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_size": "21470642176",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "name": "ceph_lv2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "tags": {
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.cluster_name": "ceph",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.crush_device_class": "",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.encrypted": "0",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osd_id": "2",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:                 "ceph.vdo": "0"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             },
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "type": "block",
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:             "vg_name": "ceph_vg2"
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:         }
Dec 05 01:31:44 compute-0 awesome_euclid[323778]:     ]
Dec 05 01:31:44 compute-0 awesome_euclid[323778]: }
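[annotation] The JSON block above is the stdout of the awesome_euclid container, i.e. the "ceph-volume ... lvm list --format json" run that cephadm launched at 01:31:42: three bluestore OSDs (ids 0-2) on LVs ceph_lv0..ceph_lv2, each backed by a loop device. A sketch of extracting an osd_id -> device map from that report; "lvm_list.json" is a hypothetical capture file standing in for the block above:

    import json

    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        report = json.load(f)

    # The report is keyed by OSD id; each value is a list of LV records.
    osd_devices = {
        int(osd_id): {
            "lv_path": lv["lv_path"],
            "backing": lv["devices"],              # e.g. ["/dev/loop3"]
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in report.items()
        for lv in lvs
    }
    # -> {0: {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'backing': ['/dev/loop3'], ...}, ...}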
Dec 05 01:31:44 compute-0 systemd[1]: libpod-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope: Deactivated successfully.
Dec 05 01:31:44 compute-0 podman[323722]: 2025-12-05 01:31:44.286476103 +0000 UTC m=+1.178935470 container died 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de-merged.mount: Deactivated successfully.
Dec 05 01:31:44 compute-0 podman[323722]: 2025-12-05 01:31:44.385535777 +0000 UTC m=+1.277995134 container remove 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:31:44 compute-0 systemd[1]: libpod-conmon-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope: Deactivated successfully.
Dec 05 01:31:44 compute-0 sudo[323542]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:44 compute-0 sudo[324003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:44 compute-0 sudo[324003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:44 compute-0 sudo[324003]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:44 compute-0 sudo[324052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:31:44 compute-0 sudo[324052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:44 compute-0 sudo[324052]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:44 compute-0 sudo[324101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgzizjbyyiectyqgmtvidkwecyjtsekb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898304.1958995-307-270090479909347/AnsiballZ_stat.py'
Dec 05 01:31:44 compute-0 sudo[324101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:44 compute-0 sudo[324106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:44 compute-0 sudo[324106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:44 compute-0 sudo[324106]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:44 compute-0 python3.9[324105]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:31:44 compute-0 sudo[324131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:31:44 compute-0 sudo[324131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:44 compute-0 sudo[324101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.435923938 +0000 UTC m=+0.065378556 container create 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:31:45 compute-0 systemd[1]: Started libpod-conmon-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope.
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.409641814 +0000 UTC m=+0.039096452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.564863546 +0000 UTC m=+0.194318234 container init 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.582189539 +0000 UTC m=+0.211644137 container start 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:31:45 compute-0 serene_allen[324333]: 167 167
Dec 05 01:31:45 compute-0 systemd[1]: libpod-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope: Deactivated successfully.
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.597571539 +0000 UTC m=+0.227026237 container attach 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.598604457 +0000 UTC m=+0.228059105 container died 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 05 01:31:45 compute-0 sudo[324365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgazaelpjzrunbyvoejkaozywnlumoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898305.1517782-315-163993488136768/AnsiballZ_file.py'
Dec 05 01:31:45 compute-0 sudo[324365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d192f9b1267d9c79f1de8257e095fbdf243866bf88f8ee0e454573dfa54ca0-merged.mount: Deactivated successfully.
Dec 05 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.66608216 +0000 UTC m=+0.295536768 container remove 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:31:45 compute-0 systemd[1]: libpod-conmon-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope: Deactivated successfully.
Dec 05 01:31:45 compute-0 ceph-mon[192914]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:45 compute-0 python3.9[324373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:45 compute-0 sudo[324365]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:45 compute-0 podman[324384]: 2025-12-05 01:31:45.929376748 +0000 UTC m=+0.074346566 container create e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:31:45 compute-0 podman[324384]: 2025-12-05 01:31:45.897754985 +0000 UTC m=+0.042724873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:31:45 compute-0 systemd[1]: Started libpod-conmon-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope.
Dec 05 01:31:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.076802812 +0000 UTC m=+0.221772710 container init e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.0878653 +0000 UTC m=+0.232835138 container start e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.095768291 +0000 UTC m=+0.240738109 container attach e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:31:46 compute-0 sudo[324553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedqxhlinrqnavvocwzsxskcqkmajtuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898306.138294-324-249670001519789/AnsiballZ_file.py'
Dec 05 01:31:46 compute-0 sudo[324553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:46 compute-0 python3.9[324555]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:31:46 compute-0 sudo[324553]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]: {
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_id": 0,
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "type": "bluestore"
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     },
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_id": 1,
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "type": "bluestore"
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     },
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_id": 2,
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:         "type": "bluestore"
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]:     }
Dec 05 01:31:47 compute-0 friendly_varahamihira[324423]: }
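[annotation] This second JSON block is the stdout of friendly_varahamihira, the "ceph-volume ... raw list --format json" run from 01:31:44: the same three OSDs, now keyed by osd_uuid with their device-mapper paths. A sketch that walks the report and cross-prints id, device and cluster fsid; "raw_list.json" is again a hypothetical capture file:

    import json

    with open("raw_list.json") as f:   # hypothetical capture of the JSON above
        raw = json.load(f)

    for rec in sorted(raw.values(), key=lambda r: r["osd_id"]):
        assert rec["type"] == "bluestore"
        print(f"osd.{rec['osd_id']}: {rec['device']} (cluster {rec['ceph_fsid']})")
    # osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (cluster cbd280d3-cbd8-528b-ace6-2b3a887cdcee)
    # ...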
Dec 05 01:31:47 compute-0 systemd[1]: libpod-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Deactivated successfully.
Dec 05 01:31:47 compute-0 systemd[1]: libpod-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Consumed 1.097s CPU time.
Dec 05 01:31:47 compute-0 podman[324384]: 2025-12-05 01:31:47.187078884 +0000 UTC m=+1.332048692 container died e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d-merged.mount: Deactivated successfully.
Dec 05 01:31:47 compute-0 podman[324384]: 2025-12-05 01:31:47.279185215 +0000 UTC m=+1.424155023 container remove e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:31:47 compute-0 systemd[1]: libpod-conmon-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Deactivated successfully.
Dec 05 01:31:47 compute-0 sudo[324131]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:31:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:31:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 05456b62-3519-4b28-86ee-d2a718ba5e1a does not exist
Dec 05 01:31:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6c08dc9f-db9a-4ee6-9a0d-6a83bee8dca8 does not exist
Dec 05 01:31:47 compute-0 sudo[324702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:31:47 compute-0 sudo[324702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:47 compute-0 sudo[324702]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:47 compute-0 sudo[324792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shwawoejlkkdvlarnloysopjovxzmibl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898307.091039-332-272004488278107/AnsiballZ_stat.py'
Dec 05 01:31:47 compute-0 sudo[324792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:47 compute-0 sudo[324755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:31:47 compute-0 sudo[324755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:31:47 compute-0 sudo[324755]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:47 compute-0 python3.9[324797]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:47 compute-0 ceph-mon[192914]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:31:47 compute-0 sudo[324792]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:48 compute-0 sudo[324874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhpgpagxidlkfulukwxvsgyfvabjjui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898307.091039-332-272004488278107/AnsiballZ_file.py'
Dec 05 01:31:48 compute-0 sudo[324874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:48 compute-0 python3.9[324876]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:31:48 compute-0 sudo[324874]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:48 compute-0 ceph-mon[192914]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:49 compute-0 podman[325000]: 2025-12-05 01:31:49.189236375 +0000 UTC m=+0.133371093 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Dec 05 01:31:49 compute-0 sudo[325043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufaroawuotcnwgbybqzvogzqommzqjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898308.6468174-332-47942852167076/AnsiballZ_stat.py'
Dec 05 01:31:49 compute-0 sudo[325043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:49 compute-0 python3.9[325048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:49 compute-0 sudo[325043]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:49 compute-0 sudo[325125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watlglyvnogtxfzlfeegwemfdfddkqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898308.6468174-332-47942852167076/AnsiballZ_file.py'
Dec 05 01:31:49 compute-0 sudo[325125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:50 compute-0 python3.9[325127]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:31:50 compute-0 sudo[325125]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:50 compute-0 sudo[325291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhsztznqjncjcpnfutvysvbddpkwojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898310.4163465-355-43106148910892/AnsiballZ_file.py'
Dec 05 01:31:50 compute-0 sudo[325291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:50 compute-0 podman[325251]: 2025-12-05 01:31:50.969527534 +0000 UTC m=+0.115780581 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
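[annotation] The node_exporter healthcheck above reports healthy with the container publishing host port 9100 ('ports': ['9100:9100'], 'net': 'host'). A quick liveness probe against that endpoint; plain HTTP is an assumption here, since the config also passes --web.config.file and mounts TLS material, so a real deployment may require https:

    import urllib.request

    # Fetch the metrics page from the port published in the config above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        print(resp.status, len(resp.read()), "bytes of metrics")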
Dec 05 01:31:51 compute-0 python3.9[325296]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:51 compute-0 sudo[325291]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:51 compute-0 ceph-mon[192914]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:51 compute-0 sudo[325451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wslekqoeqylvmctppfqkjciuqnwdufjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898311.4352548-363-115709638229058/AnsiballZ_stat.py'
Dec 05 01:31:51 compute-0 sudo[325451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:52 compute-0 python3.9[325453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:52 compute-0 sudo[325451]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:52 compute-0 sudo[325529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwblhohaubsfryydesxloocefeglueod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898311.4352548-363-115709638229058/AnsiballZ_file.py'
Dec 05 01:31:52 compute-0 sudo[325529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:52 compute-0 python3.9[325531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:52 compute-0 sudo[325529]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:53 compute-0 sudo[325681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkgmshxluvtyalsybqtijeqqpdqfxxvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898313.1181967-375-99806648769878/AnsiballZ_stat.py'
Dec 05 01:31:53 compute-0 sudo[325681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:53 compute-0 ceph-mon[192914]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:53 compute-0 python3.9[325683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:53 compute-0 sudo[325681]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:54 compute-0 sudo[325759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apitwbymadaxaybtjcwqlwulvlzjlzel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898313.1181967-375-99806648769878/AnsiballZ_file.py'
Dec 05 01:31:54 compute-0 sudo[325759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:54 compute-0 python3.9[325761]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:54 compute-0 sudo[325759]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:55 compute-0 sudo[325911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnyfmxyeirzfnuhbzllyybdbltlumkax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898314.9276676-387-1319112968789/AnsiballZ_systemd.py'
Dec 05 01:31:55 compute-0 sudo[325911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:55 compute-0 python3.9[325913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:31:55 compute-0 ceph-mon[192914]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:55 compute-0 systemd[1]: Reloading.
Dec 05 01:31:55 compute-0 systemd-rc-local-generator[325935]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:31:55 compute-0 systemd-sysv-generator[325943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.158 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.159 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.159 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:31:56 compute-0 sudo[325911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:56 compute-0 ceph-mon[192914]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:56 compute-0 sudo[326101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzyclcmgjjqqzubcvzvarltkgiusxvgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898316.539859-395-230031120076242/AnsiballZ_stat.py'
Dec 05 01:31:56 compute-0 sudo[326101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:57 compute-0 python3.9[326103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:57 compute-0 sudo[326101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.337957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317337989, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1213, "num_deletes": 507, "total_data_size": 1379934, "memory_usage": 1414848, "flush_reason": "Manual Compaction"}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317349113, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1356189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13524, "largest_seqno": 14736, "table_properties": {"data_size": 1350782, "index_size": 2355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 13913, "raw_average_key_size": 17, "raw_value_size": 1338030, "raw_average_value_size": 1719, "num_data_blocks": 108, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898228, "oldest_key_time": 1764898228, "file_creation_time": 1764898317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11230 microseconds, and 5382 cpu microseconds.
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.349172) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1356189 bytes OK
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.349206) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351454) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351469) EVENT_LOG_v1 {"time_micros": 1764898317351465, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1373312, prev total WAL file size 1373312, number of live WAL files 2.
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.352377) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1324KB)], [32(7330KB)]
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317352424, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8862678, "oldest_snapshot_seqno": -1}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3746 keys, 6957720 bytes, temperature: kUnknown
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317400993, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6957720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6931066, "index_size": 16177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91922, "raw_average_key_size": 24, "raw_value_size": 6861561, "raw_average_value_size": 1831, "num_data_blocks": 686, "num_entries": 3746, "num_filter_entries": 3746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.401322) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6957720 bytes
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.403167) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 142.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.7) write-amplify(5.1) OK, records in: 4773, records dropped: 1027 output_compression: NoCompression
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.403189) EVENT_LOG_v1 {"time_micros": 1764898317403177, "job": 14, "event": "compaction_finished", "compaction_time_micros": 48673, "compaction_time_cpu_micros": 20817, "output_level": 6, "num_output_files": 1, "total_output_size": 6957720, "num_input_records": 4773, "num_output_records": 3746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317403699, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317405941, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.352241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:31:57 compute-0 sudo[326179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyoddjikiwzxslgtffdqzvqxfbkaojcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898316.539859-395-230031120076242/AnsiballZ_file.py'
Dec 05 01:31:57 compute-0 sudo[326179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:57 compute-0 python3.9[326181]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:57 compute-0 sudo[326179]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:58 compute-0 sudo[326331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqaphovmtnuvtnyiphmdtxgasakjwutw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898318.1206415-407-103395148114213/AnsiballZ_stat.py'
Dec 05 01:31:58 compute-0 sudo[326331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:58 compute-0 python3.9[326333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:31:58 compute-0 sudo[326331]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:59 compute-0 sudo[326425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbsjqkyvrdsnwpyymntpinzebwthzffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898318.1206415-407-103395148114213/AnsiballZ_file.py'
Dec 05 01:31:59 compute-0 sudo[326425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:31:59 compute-0 podman[326383]: 2025-12-05 01:31:59.375606757 +0000 UTC m=+0.110469643 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:31:59 compute-0 python3.9[326429]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:31:59 compute-0 sudo[326425]: pam_unix(sudo:session): session closed for user root
Dec 05 01:31:59 compute-0 podman[158197]: time="2025-12-05T01:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec 05 01:31:59 compute-0 ceph-mon[192914]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7284 "" "Go-http-client/1.1"
Dec 05 01:32:00 compute-0 sudo[326579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muzqkuoohuykrfauqqaesavmcteayfpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898319.8634658-419-52392671034546/AnsiballZ_systemd.py'
Dec 05 01:32:00 compute-0 sudo[326579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:00 compute-0 python3.9[326581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:32:00 compute-0 systemd[1]: Reloading.
Dec 05 01:32:00 compute-0 systemd-sysv-generator[326611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:00 compute-0 systemd-rc-local-generator[326608]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:01 compute-0 systemd[1]: Starting Create netns directory...
Dec 05 01:32:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 05 01:32:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 05 01:32:01 compute-0 systemd[1]: Finished Create netns directory.
Dec 05 01:32:01 compute-0 sudo[326579]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:32:01 compute-0 ceph-mon[192914]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:02 compute-0 sudo[326773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bplncptcmtdfebmogijyissbegjcvkqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898321.6360276-429-1111933420409/AnsiballZ_file.py'
Dec 05 01:32:02 compute-0 sudo[326773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:02 compute-0 python3.9[326775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:32:02 compute-0 sudo[326773]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:03 compute-0 sudo[326925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivljuwthfxiiqlngngzoqeqjmlqtgtag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898322.6056979-437-111670610304198/AnsiballZ_stat.py'
Dec 05 01:32:03 compute-0 sudo[326925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:03 compute-0 python3.9[326927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:32:03 compute-0 sudo[326925]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:03 compute-0 sudo[327048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkvkvraywjpymzicragideggssgxowyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898322.6056979-437-111670610304198/AnsiballZ_copy.py'
Dec 05 01:32:03 compute-0 sudo[327048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:03 compute-0 ceph-mon[192914]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:03 compute-0 python3.9[327050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898322.6056979-437-111670610304198/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:32:04 compute-0 sudo[327048]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:04 compute-0 sudo[327200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnadunxykrxhgbvxyixkmcaeoynmzznm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898324.4075766-454-31713427209310/AnsiballZ_file.py'
Dec 05 01:32:04 compute-0 sudo[327200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:04 compute-0 ceph-mon[192914]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:05 compute-0 python3.9[327202]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:32:05 compute-0 sudo[327200]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:05 compute-0 sudo[327352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjdghdadsmkgcisienklefhqgifndxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898325.3932378-462-219182495185765/AnsiballZ_stat.py'
Dec 05 01:32:05 compute-0 sudo[327352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:06 compute-0 python3.9[327354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:32:06 compute-0 sudo[327352]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:06 compute-0 sudo[327475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfznibcyeoibynmqzuubagllnmdpueij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898325.3932378-462-219182495185765/AnsiballZ_copy.py'
Dec 05 01:32:06 compute-0 sudo[327475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:06 compute-0 python3.9[327477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898325.3932378-462-219182495185765/.source.json _original_basename=.vz_b2g86 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:06 compute-0 sudo[327475]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 01:32:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:07 compute-0 sudo[327628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrowivnwkdbutwnouvmpxelgvrwgfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898327.177702-477-177352581162674/AnsiballZ_file.py'
Dec 05 01:32:07 compute-0 sudo[327628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:07 compute-0 ceph-mon[192914]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:07 compute-0 python3.9[327630]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:07 compute-0 sudo[327628]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:08 compute-0 podman[327680]: 2025-12-05 01:32:08.683377283 +0000 UTC m=+0.089556180 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 05 01:32:08 compute-0 podman[327679]: 2025-12-05 01:32:08.68971212 +0000 UTC m=+0.098118769 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:32:08 compute-0 podman[327678]: 2025-12-05 01:32:08.699772551 +0000 UTC m=+0.107300676 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:32:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:08 compute-0 podman[327681]: 2025-12-05 01:32:08.734042187 +0000 UTC m=+0.135276106 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:32:08 compute-0 ceph-mon[192914]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:09 compute-0 sudo[327860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielqdabpueircpysaimozsbfvwzguklg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898328.1772144-485-154990497762/AnsiballZ_stat.py'
Dec 05 01:32:09 compute-0 sudo[327860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:09 compute-0 sudo[327860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:10 compute-0 sudo[327983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbnxfvdbenbuguyjwapfoieopzjatrif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898328.1772144-485-154990497762/AnsiballZ_copy.py'
Dec 05 01:32:10 compute-0 sudo[327983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:10 compute-0 sudo[327983]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:11 compute-0 ceph-mon[192914]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:12 compute-0 sudo[328135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikbcexwlpwgjjevysxkfdwyxbavkyhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898331.6332433-502-246407896681506/AnsiballZ_container_config_data.py'
Dec 05 01:32:12 compute-0 sudo[328135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:12 compute-0 python3.9[328137]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 05 01:32:12 compute-0 sudo[328135]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:13 compute-0 sudo[328300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eezgcnlmzuthbxyhywipzpeavplwztzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898332.7176545-511-197150022615111/AnsiballZ_container_config_hash.py'
Dec 05 01:32:13 compute-0 sudo[328300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:13 compute-0 podman[328261]: 2025-12-05 01:32:13.409301729 +0000 UTC m=+0.139876344 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Dec 05 01:32:13 compute-0 python3.9[328308]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:32:13 compute-0 sudo[328300]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:13 compute-0 ceph-mon[192914]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:14 compute-0 sudo[328458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjbmpvcgnanaeikuhwftultibrlrwmhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898333.933565-520-126009390663116/AnsiballZ_podman_container_info.py'
Dec 05 01:32:14 compute-0 sudo[328458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:14 compute-0 python3.9[328460]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 05 01:32:15 compute-0 sudo[328458]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:15 compute-0 ceph-mon[192914]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:32:16
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr']
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:32:16 compute-0 sudo[328636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kchnbepbcxrurkefdtwwbzlkcnjdjbcw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898335.9923427-533-238256943067371/AnsiballZ_edpm_container_manage.py'
Dec 05 01:32:16 compute-0 sudo[328636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:16 compute-0 ceph-mon[192914]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:16 compute-0 python3[328638]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:32:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:18 compute-0 podman[328650]: 2025-12-05 01:32:18.586182823 +0000 UTC m=+1.486691128 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 01:32:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:18 compute-0 podman[328704]: 2025-12-05 01:32:18.868312546 +0000 UTC m=+0.101541315 container create 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:32:18 compute-0 podman[328704]: 2025-12-05 01:32:18.823114705 +0000 UTC m=+0.056343524 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 01:32:18 compute-0 python3[328638]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 05 01:32:19 compute-0 sudo[328636]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:19 compute-0 ceph-mon[192914]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:20 compute-0 podman[328768]: 2025-12-05 01:32:20.039860689 +0000 UTC m=+0.102816510 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7)
Dec 05 01:32:20 compute-0 sudo[328911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqazqbvchwuadvenvkimewolwexdvkmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898339.9805813-541-203887453810329/AnsiballZ_stat.py'
Dec 05 01:32:20 compute-0 sudo[328911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:20 compute-0 python3.9[328913]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:32:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:20 compute-0 sudo[328911]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:20 compute-0 ceph-mon[192914]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:21 compute-0 podman[329039]: 2025-12-05 01:32:21.522631145 +0000 UTC m=+0.098630763 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:32:21 compute-0 sudo[329078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yajehhktqfevkawsluugfhsfkrbvmlte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898341.0430415-550-268122612330829/AnsiballZ_file.py'
Dec 05 01:32:21 compute-0 sudo[329078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:21 compute-0 python3.9[329088]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:21 compute-0 sudo[329078]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:22 compute-0 sudo[329162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hydmqewofjbetiqfuiuoqtpgteiungwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898341.0430415-550-268122612330829/AnsiballZ_stat.py'
Dec 05 01:32:22 compute-0 sudo[329162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:22 compute-0 python3.9[329164]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:32:22 compute-0 sudo[329162]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:23 compute-0 sudo[329313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxxtozuzovpqdiyvqwxpxdtzsnqdbko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898342.4308462-550-247819364566459/AnsiballZ_copy.py'
Dec 05 01:32:23 compute-0 sudo[329313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:23 compute-0 python3.9[329315]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898342.4308462-550-247819364566459/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:23 compute-0 sudo[329313]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:23 compute-0 sudo[329389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtetvknresdrtgecfgzkooozcjjfuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898342.4308462-550-247819364566459/AnsiballZ_systemd.py'
Dec 05 01:32:23 compute-0 sudo[329389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:23 compute-0 ceph-mon[192914]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:24 compute-0 python3.9[329391]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:32:24 compute-0 systemd[1]: Reloading.
Dec 05 01:32:24 compute-0 systemd-rc-local-generator[329417]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:24 compute-0 systemd-sysv-generator[329422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:24 compute-0 sudo[329389]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:24 compute-0 ceph-mon[192914]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:25 compute-0 sudo[329500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdyadotuljgyfpuwtfwmhhbniuximqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898342.4308462-550-247819364566459/AnsiballZ_systemd.py'
Dec 05 01:32:25 compute-0 sudo[329500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:25 compute-0 python3.9[329502]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:32:25 compute-0 systemd[1]: Reloading.
Dec 05 01:32:25 compute-0 systemd-rc-local-generator[329527]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:25 compute-0 systemd-sysv-generator[329530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:25 compute-0 systemd[1]: Starting multipathd container...
Dec 05 01:32:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:26 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec 05 01:32:26 compute-0 podman[329543]: 2025-12-05 01:32:26.024662926 +0000 UTC m=+0.180378845 container init 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd)
Dec 05 01:32:26 compute-0 multipathd[329558]: + sudo -E kolla_set_configs
Dec 05 01:32:26 compute-0 sudo[329564]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:32:26 compute-0 podman[329543]: 2025-12-05 01:32:26.070770942 +0000 UTC m=+0.226486881 container start 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 01:32:26 compute-0 sudo[329564]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:32:26 compute-0 sudo[329564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 01:32:26 compute-0 podman[329543]: multipathd
Dec 05 01:32:26 compute-0 systemd[1]: Started multipathd container.
Dec 05 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Validating config file
Dec 05 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Writing out command to execute
Dec 05 01:32:26 compute-0 sudo[329564]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:26 compute-0 multipathd[329558]: ++ cat /run_command
Dec 05 01:32:26 compute-0 multipathd[329558]: + CMD='/usr/sbin/multipathd -d'
Dec 05 01:32:26 compute-0 multipathd[329558]: + ARGS=
Dec 05 01:32:26 compute-0 multipathd[329558]: + sudo kolla_copy_cacerts
Dec 05 01:32:26 compute-0 sudo[329500]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:26 compute-0 sudo[329578]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:32:26 compute-0 sudo[329578]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:32:26 compute-0 sudo[329578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 01:32:26 compute-0 sudo[329578]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:26 compute-0 multipathd[329558]: + [[ ! -n '' ]]
Dec 05 01:32:26 compute-0 multipathd[329558]: + . kolla_extend_start
Dec 05 01:32:26 compute-0 multipathd[329558]: Running command: '/usr/sbin/multipathd -d'
Dec 05 01:32:26 compute-0 multipathd[329558]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 01:32:26 compute-0 multipathd[329558]: + umask 0022
Dec 05 01:32:26 compute-0 multipathd[329558]: + exec /usr/sbin/multipathd -d
Dec 05 01:32:26 compute-0 multipathd[329558]: 4433.331495 | --------start up--------
Dec 05 01:32:26 compute-0 multipathd[329558]: 4433.331534 | read /etc/multipath.conf
Dec 05 01:32:26 compute-0 multipathd[329558]: 4433.343690 | path checkers start up
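The trace above is the standard kolla bootstrap: kolla_set_configs consumes /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS), kolla_copy_cacerts installs the mounted CA material, and the entrypoint finally execs the command read from /run_command. A sketch of inspecting that file from the host, with the expected shape reconstructed from the trace (the command string is confirmed by the log; the config_files stanza is an assumption based on the usual kolla schema):

    # the file is bind-mounted from the host, per the volume list above
    cat /var/lib/kolla/config_files/multipathd.json
    # expected shape (config_files entries are assumed, not taken from this log):
    # {
    #   "command": "/usr/sbin/multipathd -d",
    #   "config_files": [{"source": "...", "dest": "...", "owner": "root", "perm": "0600"}]
    # }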
Dec 05 01:32:26 compute-0 podman[329565]: 2025-12-05 01:32:26.231872028 +0000 UTC m=+0.141093708 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:32:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Cumulative writes: 3306 writes, 14K keys, 3306 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 3306 writes, 3306 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1280 writes, 5807 keys, 1280 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                            Interval WAL: 1280 writes, 1280 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.7      0.12              0.06         7    0.017       0      0       0.0       0.0
                                              L6      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    149.0    122.8      0.33              0.16         6    0.055     24K   3205       0.0       0.0
                                             Sum      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    109.3    124.1      0.45              0.21        13    0.034     24K   3205       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    135.5    136.2      0.25              0.12         8    0.031     17K   2471       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    149.0    122.8      0.33              0.16         6    0.055     24K   3205       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.8      0.11              0.06         6    0.019       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.015, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 1.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000117 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(99,1.32 MB,0.429292%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,145.05 KB,0.0459894%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
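ceph-mon emits these RocksDB stats dumps on a fixed cadence (Uptime: 1200.0 total, 600.0 interval), so the Interval rows cover only the window since the previous dump; the cumulative ingest of 0.02 GB over 1200 s is consistent with the reported 0.02 MB/s. To pull just these dumps out of the journal, something like the following works (journalctl's -g/--grep needs a PCRE2-enabled build, which EL9 ships):

    journalctl --no-pager -g 'DUMPING STATS'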
Dec 05 01:32:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:27 compute-0 python3.9[329744]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:32:27 compute-0 ceph-mon[192914]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:28 compute-0 sudo[329896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhriowwnsqeeoejbkhrnukatnizuvnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898347.9378386-586-130875340821790/AnsiballZ_command.py'
Dec 05 01:32:28 compute-0 sudo[329896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:28 compute-0 python3.9[329898]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:32:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:28 compute-0 sudo[329896]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:29 compute-0 ceph-mon[192914]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:29 compute-0 podman[329956]: 2025-12-05 01:32:29.729240043 +0000 UTC m=+0.131630464 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 01:32:29 compute-0 podman[158197]: time="2025-12-05T01:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38323 "" "Go-http-client/1.1"
Dec 05 01:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7701 "" "Go-http-client/1.1"
Dec 05 01:32:30 compute-0 sudo[330078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbsvpzhhywyycsxmfmfxscuzxkoqvswt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898349.5582244-594-47838008979774/AnsiballZ_systemd.py'
Dec 05 01:32:30 compute-0 sudo[330078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:30 compute-0 python3.9[330080]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:32:30 compute-0 systemd[1]: Stopping multipathd container...
Dec 05 01:32:30 compute-0 multipathd[329558]: 4437.751355 | exit (signal)
Dec 05 01:32:30 compute-0 multipathd[329558]: 4437.751616 | --------shut down-------
Dec 05 01:32:30 compute-0 systemd[1]: libpod-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec 05 01:32:30 compute-0 conmon[329558]: conmon 4b650b296b7a2b28da70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope/container/memory.events
Dec 05 01:32:30 compute-0 podman[330084]: 2025-12-05 01:32:30.644537565 +0000 UTC m=+0.110113564 container died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:32:30 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-70675f2a8c31aaaf.timer: Deactivated successfully.
Dec 05 01:32:30 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec 05 01:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-userdata-shm.mount: Deactivated successfully.
Dec 05 01:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260-merged.mount: Deactivated successfully.
Dec 05 01:32:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:30 compute-0 podman[330084]: 2025-12-05 01:32:30.727073238 +0000 UTC m=+0.192649247 container cleanup 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 01:32:30 compute-0 podman[330084]: multipathd
Dec 05 01:32:30 compute-0 podman[330109]: multipathd
Dec 05 01:32:30 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 05 01:32:30 compute-0 systemd[1]: Stopped multipathd container.
Dec 05 01:32:30 compute-0 systemd[1]: Starting multipathd container...
Dec 05 01:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
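The two kernel warnings above mean the underlying xfs filesystem was created without the bigtime feature, so its inode timestamps cap out in 2038 (0x7fffffff seconds). A quick check for the feature (xfs_info against the mount point; bigtime=1 on filesystems made with newer xfsprogs avoids this warning):

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'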
Dec 05 01:32:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec 05 01:32:31 compute-0 podman[330122]: 2025-12-05 01:32:31.092178735 +0000 UTC m=+0.212176900 container init 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:32:31 compute-0 multipathd[330136]: + sudo -E kolla_set_configs
Dec 05 01:32:31 compute-0 sudo[330142]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:32:31 compute-0 sudo[330142]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:32:31 compute-0 sudo[330142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 01:32:31 compute-0 podman[330122]: 2025-12-05 01:32:31.131096361 +0000 UTC m=+0.251094436 container start 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:32:31 compute-0 podman[330122]: multipathd
Dec 05 01:32:31 compute-0 systemd[1]: Started multipathd container.
Dec 05 01:32:31 compute-0 sudo[330078]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Validating config file
Dec 05 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Writing out command to execute
Dec 05 01:32:31 compute-0 sudo[330142]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:31 compute-0 multipathd[330136]: ++ cat /run_command
Dec 05 01:32:31 compute-0 multipathd[330136]: + CMD='/usr/sbin/multipathd -d'
Dec 05 01:32:31 compute-0 multipathd[330136]: + ARGS=
Dec 05 01:32:31 compute-0 multipathd[330136]: + sudo kolla_copy_cacerts
Dec 05 01:32:31 compute-0 sudo[330159]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:32:31 compute-0 sudo[330159]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:32:31 compute-0 sudo[330159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 05 01:32:31 compute-0 sudo[330159]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:31 compute-0 multipathd[330136]: + [[ ! -n '' ]]
Dec 05 01:32:31 compute-0 multipathd[330136]: + . kolla_extend_start
Dec 05 01:32:31 compute-0 multipathd[330136]: Running command: '/usr/sbin/multipathd -d'
Dec 05 01:32:31 compute-0 multipathd[330136]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 05 01:32:31 compute-0 multipathd[330136]: + umask 0022
Dec 05 01:32:31 compute-0 multipathd[330136]: + exec /usr/sbin/multipathd -d
Dec 05 01:32:31 compute-0 podman[330143]: 2025-12-05 01:32:31.268071354 +0000 UTC m=+0.111467582 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:32:31 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-684d6b443d7fa374.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:32:31 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-684d6b443d7fa374.service: Failed with result 'exit-code'.
Dec 05 01:32:31 compute-0 multipathd[330136]: 4438.428700 | --------start up--------
Dec 05 01:32:31 compute-0 multipathd[330136]: 4438.428745 | read /etc/multipath.conf
Dec 05 01:32:31 compute-0 multipathd[330136]: 4438.438180 | path checkers start up
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:32:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:32:31 compute-0 ceph-mon[192914]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:31 compute-0 sudo[330326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhpxngmeromxehfrtxxisrnuffhuypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898351.4739265-602-87032906926080/AnsiballZ_file.py'
Dec 05 01:32:31 compute-0 sudo[330326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:32 compute-0 python3.9[330328]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:32 compute-0 sudo[330326]: pam_unix(sudo:session): session closed for user root
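The stat on /etc/multipath/.multipath_restart_required at 01:32:27, the edpm_multipathd restart at 01:32:30, and the file removal just above form a restart-marker pattern: the service is bounced only if an earlier task left the marker behind, and the marker is cleared once the restart has happened. A condensed sketch of the same logic (hypothetical standalone script; the role expresses this as separate Ansible tasks):

    #!/bin/sh
    # restart multipathd only when a config change left a marker behind
    marker=/etc/multipath/.multipath_restart_required
    if [ -e "$marker" ]; then
        systemctl restart edpm_multipathd.service
        rm -f "$marker"
    fi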
Dec 05 01:32:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:33 compute-0 sudo[330478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmxrpdiqslqvlqpdzohtrdyqgfhkfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898352.6007183-614-145249913226601/AnsiballZ_file.py'
Dec 05 01:32:33 compute-0 sudo[330478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:33 compute-0 python3.9[330480]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 05 01:32:33 compute-0 sudo[330478]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:33 compute-0 ceph-mon[192914]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:34 compute-0 sudo[330630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snnevewslibpscwgdmnwnomqeipogzgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898353.5049875-622-90469251305745/AnsiballZ_modprobe.py'
Dec 05 01:32:34 compute-0 sudo[330630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:34 compute-0 python3.9[330632]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 05 01:32:34 compute-0 kernel: Key type psk registered
Dec 05 01:32:34 compute-0 sudo[330630]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:35 compute-0 sudo[330793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfzimgjlehelcybmyaycsstrccutclkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898354.64073-630-179403272687463/AnsiballZ_stat.py'
Dec 05 01:32:35 compute-0 sudo[330793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:35 compute-0 python3.9[330795]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:32:35 compute-0 sudo[330793]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:35 compute-0 ceph-mon[192914]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:36 compute-0 sudo[330916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpocpowkyiprthxklppzppdhgdiihpnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898354.64073-630-179403272687463/AnsiballZ_copy.py'
Dec 05 01:32:36 compute-0 sudo[330916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:36 compute-0 python3.9[330918]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898354.64073-630-179403272687463/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:36 compute-0 sudo[330916]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:36 compute-0 ceph-mon[192914]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:37 compute-0 sudo[331068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhcgnnplqchyoxqrdicnvkkxuhommktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898356.6507955-646-255866590281689/AnsiballZ_lineinfile.py'
Dec 05 01:32:37 compute-0 sudo[331068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:37 compute-0 python3.9[331070]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:37 compute-0 sudo[331068]: pam_unix(sudo:session): session closed for user root
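The modprobe, copy, and lineinfile tasks above load nvme-fabrics immediately and persist it through both the systemd modules-load.d mechanism and the legacy /etc/modules file (the modprobe module ran with persistent=disabled, so persistence is handled entirely by the two file tasks). An equivalent manual sequence, as a sketch assuming root:

    modprobe nvme-fabrics                                       # immediate load; 'Key type psk registered' follows in the log
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf   # picked up by systemd-modules-load at boot
    grep -qxF nvme-fabrics /etc/modules || echo nvme-fabrics >> /etc/modules   # legacy path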
Dec 05 01:32:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:39 compute-0 podman[331195]: 2025-12-05 01:32:39.227995097 +0000 UTC m=+0.110532775 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:32:39 compute-0 sudo[331275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nciyefpsihrhvxofeuxfdsumjablyygy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898357.7168677-654-135753973484888/AnsiballZ_systemd.py'
Dec 05 01:32:39 compute-0 sudo[331275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:39 compute-0 podman[331196]: 2025-12-05 01:32:39.24243586 +0000 UTC m=+0.115427122 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 01:32:39 compute-0 podman[331194]: 2025-12-05 01:32:39.258526379 +0000 UTC m=+0.136545681 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:32:39 compute-0 podman[331197]: 2025-12-05 01:32:39.277732945 +0000 UTC m=+0.153939997 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
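
The four health_status=healthy events above are podman's periodic healthchecks firing for the EDPM-managed containers; each runs the configured /openstack/healthcheck test inside its container. The same check can be triggered on demand, since podman healthcheck run exits 0 when healthy and 1 otherwise:

    import subprocess

    def is_healthy(name: str) -> bool:
        return subprocess.run(
            ["podman", "healthcheck", "run", name]).returncode == 0

    for c in ("podman_exporter", "ceilometer_agent_ipmi",
              "ceilometer_agent_compute", "ovn_controller"):
        print(c, "healthy" if is_healthy(c) else "unhealthy")
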
Dec 05 01:32:39 compute-0 python3.9[331299]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:32:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 05 01:32:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 05 01:32:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 05 01:32:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 05 01:32:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 05 01:32:39 compute-0 sudo[331275]: pam_unix(sudo:session): session closed for user root
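
The systemd task above restarts systemd-modules-load.service so the new nvme-fabrics drop-in is honored immediately rather than at next boot; the Stopped/Starting/Finished lines confirm a clean cycle. A sketch of the same restart plus a load check, noting that sysfs spells the module name with an underscore:

    import os
    import subprocess

    subprocess.run(
        ["systemctl", "restart", "systemd-modules-load.service"], check=True)
    # The module registers as nvme_fabrics in sysfs once loaded.
    print("nvme-fabrics loaded:", os.path.isdir("/sys/module/nvme_fabrics"))
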
Dec 05 01:32:39 compute-0 ceph-mon[192914]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:40 compute-0 sudo[331457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzrgmvjgdbeochgfoqrvaumghqkqgimm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898359.9961455-662-139327517029431/AnsiballZ_dnf.py'
Dec 05 01:32:40 compute-0 sudo[331457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:40 compute-0 python3.9[331459]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
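
The dnf task above ensures the nvme-cli package is present (state=present, weak dependencies allowed). A hedged shell-out equivalent that stays idempotent by querying rpm first:

    import subprocess

    # rpm -q exits 0 only when the package is already installed.
    if subprocess.run(["rpm", "-q", "nvme-cli"],
                      stdout=subprocess.DEVNULL).returncode != 0:
        subprocess.run(["dnf", "-y", "install", "nvme-cli"], check=True)
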
Dec 05 01:32:41 compute-0 ceph-mon[192914]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.547 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:32:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:43 compute-0 systemd[1]: Reloading.
Dec 05 01:32:43 compute-0 systemd-rc-local-generator[331490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:43 compute-0 systemd-sysv-generator[331496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:43 compute-0 systemd[1]: Reloading.
Dec 05 01:32:43 compute-0 ceph-mon[192914]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:43 compute-0 systemd-rc-local-generator[331547]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:43 compute-0 systemd-sysv-generator[331550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:43 compute-0 podman[331502]: 2025-12-05 01:32:43.915287427 +0000 UTC m=+0.154491312 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9)
Dec 05 01:32:44 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 05 01:32:44 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 05 01:32:44 compute-0 lvm[331595]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 01:32:44 compute-0 lvm[331595]: VG ceph_vg0 finished
Dec 05 01:32:44 compute-0 lvm[331596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 01:32:44 compute-0 lvm[331596]: VG ceph_vg2 finished
Dec 05 01:32:44 compute-0 lvm[331597]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 01:32:44 compute-0 lvm[331597]: VG ceph_vg1 finished
Dec 05 01:32:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 05 01:32:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 05 01:32:44 compute-0 systemd[1]: Reloading.
Dec 05 01:32:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:44 compute-0 systemd-rc-local-generator[331641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:44 compute-0 systemd-sysv-generator[331649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:44 compute-0 ceph-mon[192914]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 05 01:32:45 compute-0 sudo[331457]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:32:46 compute-0 sudo[332935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpmdrpxnoyugejusxnjshwhbhvafhsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898365.8294704-670-106609068725204/AnsiballZ_systemd_service.py'
Dec 05 01:32:46 compute-0 sudo[332935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 05 01:32:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 05 01:32:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.112s CPU time.
Dec 05 01:32:46 compute-0 systemd[1]: run-re83b8ce83f334bbe83c861ce623b1d48.service: Deactivated successfully.
Dec 05 01:32:46 compute-0 python3.9[332937]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:32:46 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 05 01:32:46 compute-0 iscsid[320020]: iscsid shutting down.
Dec 05 01:32:46 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 05 01:32:46 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 05 01:32:46 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 05 01:32:46 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 05 01:32:46 compute-0 systemd[1]: Started Open-iSCSI.
Dec 05 01:32:46 compute-0 sudo[332935]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:47 compute-0 sudo[333093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:47 compute-0 sudo[333093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:47 compute-0 sudo[333093]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:47 compute-0 python3.9[333092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:32:47 compute-0 ceph-mon[192914]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:47 compute-0 sudo[333118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:32:47 compute-0 sudo[333118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:47 compute-0 sudo[333118]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:47 compute-0 sudo[333147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:47 compute-0 sudo[333147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:47 compute-0 sudo[333147]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:48 compute-0 sudo[333172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:32:48 compute-0 sudo[333172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:48 compute-0 sudo[333172]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 23fa31b3-2c1f-4352-9e82-6a35106beb16 does not exist
Dec 05 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7ff44f00-bec6-4e3a-b7a8-fedae456313f does not exist
Dec 05 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d585d513-c540-4802-85dc-7130e56b05ca does not exist
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:32:48 compute-0 sudo[333344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:48 compute-0 sudo[333344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:48 compute-0 sudo[333344]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:48 compute-0 sudo[333408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovndqfreooozvaghirywqswaowglpfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898368.3080275-688-240212667142047/AnsiballZ_file.py'
Dec 05 01:32:48 compute-0 sudo[333408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:48 compute-0 sudo[333399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:32:48 compute-0 sudo[333399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:48 compute-0 sudo[333399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:49 compute-0 sudo[333430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:49 compute-0 sudo[333430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:49 compute-0 sudo[333430]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:49 compute-0 sudo[333455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:32:49 compute-0 sudo[333455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:49 compute-0 python3.9[333425]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:32:49 compute-0 sudo[333408]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.622612591 +0000 UTC m=+0.054074880 container create 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:32:49 compute-0 systemd[1]: Started libpod-conmon-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope.
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.599865816 +0000 UTC m=+0.031328125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.74441259 +0000 UTC m=+0.175874899 container init 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.756456646 +0000 UTC m=+0.187918935 container start 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.760521689 +0000 UTC m=+0.191984028 container attach 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:32:49 compute-0 friendly_brown[333607]: 167 167
Dec 05 01:32:49 compute-0 systemd[1]: libpod-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope: Deactivated successfully.
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.766443275 +0000 UTC m=+0.197905554 container died 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9b757b32deab524aed848b607f7b06f4cbba903c45227e10caa1022a7afe6c-merged.mount: Deactivated successfully.
Dec 05 01:32:49 compute-0 ceph-mon[192914]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.832719304 +0000 UTC m=+0.264181593 container remove 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:32:49 compute-0 systemd[1]: libpod-conmon-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope: Deactivated successfully.
Dec 05 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.097707669 +0000 UTC m=+0.106713899 container create 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.061013255 +0000 UTC m=+0.070019565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:50 compute-0 systemd[1]: Started libpod-conmon-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope.
Dec 05 01:32:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.26867803 +0000 UTC m=+0.277684300 container init 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.290783187 +0000 UTC m=+0.299789427 container start 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.296600259 +0000 UTC m=+0.305606499 container attach 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:32:50 compute-0 podman[333670]: 2025-12-05 01:32:50.308063409 +0000 UTC m=+0.149601076 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec 05 01:32:50 compute-0 sudo[333749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdfvhjtabntglicksfezsjomndclgmvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898369.6146555-699-80154744060156/AnsiballZ_systemd_service.py'
Dec 05 01:32:50 compute-0 sudo[333749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:50 compute-0 ceph-mon[192914]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:50 compute-0 python3.9[333751]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:32:51 compute-0 systemd[1]: Reloading.
Dec 05 01:32:51 compute-0 systemd-rc-local-generator[333783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:32:51 compute-0 systemd-sysv-generator[333787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:32:51 compute-0 sudo[333749]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:51 compute-0 silly_chaum[333684]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:32:51 compute-0 silly_chaum[333684]: --> relative data size: 1.0
Dec 05 01:32:51 compute-0 silly_chaum[333684]: --> All data devices are unavailable
Dec 05 01:32:51 compute-0 podman[333806]: 2025-12-05 01:32:51.693197182 +0000 UTC m=+0.101575176 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:32:51 compute-0 systemd[1]: libpod-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Deactivated successfully.
Dec 05 01:32:51 compute-0 systemd[1]: libpod-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Consumed 1.287s CPU time.
Dec 05 01:32:51 compute-0 podman[333658]: 2025-12-05 01:32:51.714566298 +0000 UTC m=+1.723572558 container died 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5-merged.mount: Deactivated successfully.
Dec 05 01:32:51 compute-0 podman[333658]: 2025-12-05 01:32:51.813291813 +0000 UTC m=+1.822298033 container remove 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:32:51 compute-0 systemd[1]: libpod-conmon-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Deactivated successfully.
Dec 05 01:32:51 compute-0 sudo[333455]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:51 compute-0 sudo[333912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:51 compute-0 sudo[333912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:51 compute-0 sudo[333912]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:52 compute-0 sudo[333952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:32:52 compute-0 sudo[333952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:52 compute-0 sudo[333952]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:52 compute-0 sudo[333994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:52 compute-0 sudo[333994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:52 compute-0 sudo[333994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:52 compute-0 sudo[334019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:32:52 compute-0 sudo[334019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:52 compute-0 podman[334133]: 2025-12-05 01:32:52.861335979 +0000 UTC m=+0.074913022 container create 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:32:52 compute-0 podman[334133]: 2025-12-05 01:32:52.842220025 +0000 UTC m=+0.055797068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:52 compute-0 systemd[1]: Started libpod-conmon-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope.
Dec 05 01:32:52 compute-0 python3.9[334135]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:32:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.01904439 +0000 UTC m=+0.232621523 container init 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.034552742 +0000 UTC m=+0.248129825 container start 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.041026523 +0000 UTC m=+0.254603606 container attach 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:53 compute-0 intelligent_gagarin[334150]: 167 167
Dec 05 01:32:53 compute-0 systemd[1]: libpod-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope: Deactivated successfully.
Dec 05 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.045463727 +0000 UTC m=+0.259040830 container died 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ba86f5c6760b8e0b025ada95d926efc7f09d17ed3300615802288cebe94133-merged.mount: Deactivated successfully.
Dec 05 01:32:53 compute-0 network[334181]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.112026874 +0000 UTC m=+0.325603927 container remove 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:32:53 compute-0 network[334183]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:32:53 compute-0 network[334184]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:32:53 compute-0 systemd[1]: libpod-conmon-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope: Deactivated successfully.
Dec 05 01:32:53 compute-0 podman[334197]: 2025-12-05 01:32:53.379005354 +0000 UTC m=+0.088257203 container create aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:32:53 compute-0 podman[334197]: 2025-12-05 01:32:53.356446075 +0000 UTC m=+0.065697954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:53 compute-0 ceph-mon[192914]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:54 compute-0 systemd[1]: Started libpod-conmon-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope.
Dec 05 01:32:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.302573047 +0000 UTC m=+1.011824956 container init aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.317531894 +0000 UTC m=+1.026783743 container start aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.323174282 +0000 UTC m=+1.032426141 container attach aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]: {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     "0": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "devices": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "/dev/loop3"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             ],
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_name": "ceph_lv0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_size": "21470642176",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "name": "ceph_lv0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "tags": {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_name": "ceph",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.crush_device_class": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.encrypted": "0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_id": "0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.vdo": "0"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             },
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "vg_name": "ceph_vg0"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         }
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     ],
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     "1": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "devices": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "/dev/loop4"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             ],
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_name": "ceph_lv1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_size": "21470642176",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "name": "ceph_lv1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "tags": {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_name": "ceph",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.crush_device_class": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.encrypted": "0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_id": "1",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.vdo": "0"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             },
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "vg_name": "ceph_vg1"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         }
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     ],
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     "2": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "devices": [
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "/dev/loop5"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             ],
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_name": "ceph_lv2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_size": "21470642176",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "name": "ceph_lv2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "tags": {
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.cluster_name": "ceph",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.crush_device_class": "",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.encrypted": "0",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osd_id": "2",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:                 "ceph.vdo": "0"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             },
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "type": "block",
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:             "vg_name": "ceph_vg2"
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:         }
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]:     ]
Dec 05 01:32:55 compute-0 festive_mccarthy[334215]: }
Dec 05 01:32:55 compute-0 systemd[1]: libpod-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope: Deactivated successfully.
Dec 05 01:32:55 compute-0 podman[334197]: 2025-12-05 01:32:55.132391224 +0000 UTC m=+1.841643083 container died aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f-merged.mount: Deactivated successfully.
Dec 05 01:32:55 compute-0 podman[334197]: 2025-12-05 01:32:55.21718515 +0000 UTC m=+1.926437009 container remove aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:32:55 compute-0 systemd[1]: libpod-conmon-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope: Deactivated successfully.
Dec 05 01:32:55 compute-0 sudo[334019]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:55 compute-0 sudo[334269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:55 compute-0 sudo[334269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:55 compute-0 sudo[334269]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:55 compute-0 sudo[334298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:32:55 compute-0 sudo[334298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:55 compute-0 sudo[334298]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:55 compute-0 sudo[334326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:55 compute-0 sudo[334326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:55 compute-0 sudo[334326]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:55 compute-0 sudo[334355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:32:55 compute-0 sudo[334355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:55 compute-0 ceph-mon[192914]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.160 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.256875782 +0000 UTC m=+0.074526851 container create dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:32:56 compute-0 systemd[1]: Started libpod-conmon-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope.
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.228537581 +0000 UTC m=+0.046188690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.396401195 +0000 UTC m=+0.214052304 container init dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.407925027 +0000 UTC m=+0.225576086 container start dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.41554332 +0000 UTC m=+0.233194409 container attach dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:32:56 compute-0 zen_moser[334457]: 167 167
Dec 05 01:32:56 compute-0 systemd[1]: libpod-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope: Deactivated successfully.
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.421239099 +0000 UTC m=+0.238890188 container died dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e19c2a5b7a1688e9d31e1380026204a75f36484a494c6c434733d934c524ec0-merged.mount: Deactivated successfully.
Dec 05 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.488591968 +0000 UTC m=+0.306243037 container remove dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:32:56 compute-0 systemd[1]: libpod-conmon-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope: Deactivated successfully.
Dec 05 01:32:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.762730268 +0000 UTC m=+0.082510284 container create 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.724709707 +0000 UTC m=+0.044489823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:32:56 compute-0 systemd[1]: Started libpod-conmon-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope.
Dec 05 01:32:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.886573574 +0000 UTC m=+0.206353640 container init 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.912813906 +0000 UTC m=+0.232593932 container start 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.919280617 +0000 UTC m=+0.239060663 container attach 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:32:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:32:57 compute-0 ceph-mon[192914]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]: {
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_id": 0,
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "type": "bluestore"
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     },
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_id": 1,
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "type": "bluestore"
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     },
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_id": 2,
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:         "type": "bluestore"
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]:     }
Dec 05 01:32:57 compute-0 quizzical_swanson[334513]: }
Dec 05 01:32:58 compute-0 systemd[1]: libpod-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Deactivated successfully.
Dec 05 01:32:58 compute-0 systemd[1]: libpod-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Consumed 1.101s CPU time.
Dec 05 01:32:58 compute-0 podman[334492]: 2025-12-05 01:32:58.028803688 +0000 UTC m=+1.348583744 container died 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9-merged.mount: Deactivated successfully.
Dec 05 01:32:58 compute-0 podman[334492]: 2025-12-05 01:32:58.116844925 +0000 UTC m=+1.436624941 container remove 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:32:58 compute-0 systemd[1]: libpod-conmon-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Deactivated successfully.
Dec 05 01:32:58 compute-0 sudo[334355]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:32:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:32:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 46d09c64-8068-48b6-9331-00162b20c71c does not exist
Dec 05 01:32:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 96b5dabe-4790-44c6-ac6f-86f987475bc4 does not exist
Dec 05 01:32:58 compute-0 sudo[334647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:32:58 compute-0 sudo[334647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:58 compute-0 sudo[334647]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:58 compute-0 sudo[334697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:32:58 compute-0 sudo[334697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:32:58 compute-0 sudo[334697]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:58 compute-0 sudo[334792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmcycjwddqsghhdfhkgrjzlaejtxqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898378.1616514-718-190566628271670/AnsiballZ_systemd_service.py'
Dec 05 01:32:58 compute-0 sudo[334792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:59 compute-0 python3.9[334794]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:32:59 compute-0 sudo[334792]: pam_unix(sudo:session): session closed for user root
Dec 05 01:32:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:32:59 compute-0 ceph-mon[192914]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:32:59 compute-0 podman[158197]: time="2025-12-05T01:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38322 "" "Go-http-client/1.1"
Dec 05 01:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7683 "" "Go-http-client/1.1"
Dec 05 01:32:59 compute-0 sudo[334945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzdsazntwswdcvtdknkvxwussujqprix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898379.2929177-718-70285037619102/AnsiballZ_systemd_service.py'
Dec 05 01:32:59 compute-0 sudo[334945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:32:59 compute-0 podman[334947]: 2025-12-05 01:32:59.876269992 +0000 UTC m=+0.083507212 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:33:00 compute-0 python3.9[334948]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:00 compute-0 sudo[334945]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:00 compute-0 sudo[335118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euurwaybwpitykthoukqxxaczmwsweuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898380.4238346-718-73928019972937/AnsiballZ_systemd_service.py'
Dec 05 01:33:00 compute-0 sudo[335118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:01 compute-0 python3.9[335120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:01 compute-0 sudo[335118]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:33:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:33:01 compute-0 podman[335122]: 2025-12-05 01:33:01.430429901 +0000 UTC m=+0.087664817 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:33:01 compute-0 ceph-mon[192914]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:02 compute-0 sudo[335290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvhheweqmdibeltcyaqwknjimsgklkrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898382.3170195-718-151704960221205/AnsiballZ_systemd_service.py'
Dec 05 01:33:02 compute-0 sudo[335290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:03 compute-0 python3.9[335292]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:03 compute-0 sudo[335290]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:03 compute-0 ceph-mon[192914]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:04 compute-0 sudo[335443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paqdtrbbpcheewklurzaksoipdjuhtsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898383.3535461-718-254414760392097/AnsiballZ_systemd_service.py'
Dec 05 01:33:04 compute-0 sudo[335443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:04 compute-0 ceph-mon[192914]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:04 compute-0 python3.9[335445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:05 compute-0 sudo[335443]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:05 compute-0 sudo[335596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhggqomcgmtkmqwkyfjuimibcwobgee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898385.2687056-718-180724411213803/AnsiballZ_systemd_service.py'
Dec 05 01:33:05 compute-0 sudo[335596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:06 compute-0 python3.9[335598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:06 compute-0 sudo[335596]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:06 compute-0 sudo[335749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhrawtoarsyqlzhrtspxguewfdkpitku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898386.2968333-718-47049889109950/AnsiballZ_systemd_service.py'
Dec 05 01:33:06 compute-0 sudo[335749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:07 compute-0 python3.9[335751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:07 compute-0 sudo[335749]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:07 compute-0 ceph-mon[192914]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:08 compute-0 sudo[335902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itcszqbpsftjtlaesgqdkcaytmlsjaac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898387.591347-718-200567100232060/AnsiballZ_systemd_service.py'
Dec 05 01:33:08 compute-0 sudo[335902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:08 compute-0 python3.9[335904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:33:08 compute-0 sudo[335902]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:09 compute-0 sudo[336055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxzdocmcsesvgebnuemdnueuwkcfrxmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898388.8904247-777-102256374283160/AnsiballZ_file.py'
Dec 05 01:33:09 compute-0 sudo[336055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:09 compute-0 podman[336059]: 2025-12-05 01:33:09.483829994 +0000 UTC m=+0.097361238 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:33:09 compute-0 podman[336057]: 2025-12-05 01:33:09.486748035 +0000 UTC m=+0.103964082 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:33:09 compute-0 podman[336058]: 2025-12-05 01:33:09.506874297 +0000 UTC m=+0.123160648 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:33:09 compute-0 podman[336060]: 2025-12-05 01:33:09.54531641 +0000 UTC m=+0.156684274 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:33:09 compute-0 python3.9[336071]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:09 compute-0 sudo[336055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:09 compute-0 ceph-mon[192914]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:10 compute-0 sudo[336293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihtvgrvcuciskexyfancpadmfpkuxsar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898389.8500192-777-117088427633126/AnsiballZ_file.py'
Dec 05 01:33:10 compute-0 sudo[336293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:10 compute-0 python3.9[336295]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:10 compute-0 sudo[336293]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:10 compute-0 ceph-mon[192914]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:11 compute-0 sudo[336445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boebnfklqozirffsjnliilyteahkswze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898390.9109845-777-72159285276191/AnsiballZ_file.py'
Dec 05 01:33:11 compute-0 sudo[336445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:11 compute-0 python3.9[336447]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:11 compute-0 sudo[336445]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:13 compute-0 sudo[336597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsngzmylbcljhuivuixnukmqfsmtczxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898391.9130464-777-42591699285975/AnsiballZ_file.py'
Dec 05 01:33:13 compute-0 sudo[336597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:13 compute-0 python3.9[336599]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:13 compute-0 sudo[336597]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:13 compute-0 ceph-mon[192914]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:14 compute-0 sudo[336749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmippkmrqqtqmcrmyhwxxadgajgtyvwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898393.5752635-777-42107016110579/AnsiballZ_file.py'
Dec 05 01:33:14 compute-0 sudo[336749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:14 compute-0 python3.9[336751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:14 compute-0 sudo[336749]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:14 compute-0 podman[336752]: 2025-12-05 01:33:14.707670155 +0000 UTC m=+0.118007954 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc.)
Dec 05 01:33:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:15 compute-0 sudo[336920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtvpojtmvntvivimblygavveylwwyvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898395.1148243-777-106639344263424/AnsiballZ_file.py'
Dec 05 01:33:15 compute-0 sudo[336920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:15 compute-0 ceph-mon[192914]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:15 compute-0 python3.9[336922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:15 compute-0 sudo[336920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:33:16
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'images']
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:33:16 compute-0 sudo[337072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wokudzpckwrcqzfndntqgzymogeamkmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898396.089637-777-161348973259557/AnsiballZ_file.py'
Dec 05 01:33:16 compute-0 sudo[337072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:16 compute-0 python3.9[337074]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:16 compute-0 sudo[337072]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:16 compute-0 ceph-mon[192914]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:17 compute-0 sudo[337224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzrkgxxsojbckgwluwpnofcfgvbjjsgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898397.0083916-777-277438451845324/AnsiballZ_file.py'
Dec 05 01:33:17 compute-0 sudo[337224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:17 compute-0 python3.9[337226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:17 compute-0 sudo[337224]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:18 compute-0 sudo[337376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pakycsknewlatmaggvbxubhbqjiqxofp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898398.1924572-834-61551306107262/AnsiballZ_file.py'
Dec 05 01:33:18 compute-0 sudo[337376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:18 compute-0 python3.9[337378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:18 compute-0 sudo[337376]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:19 compute-0 sudo[337529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlrgxfsxadixajqfcugxfunirmlthyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898399.0994084-834-256936601114733/AnsiballZ_file.py'
Dec 05 01:33:19 compute-0 sudo[337529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:19 compute-0 python3.9[337531]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:19 compute-0 sudo[337529]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:19 compute-0 ceph-mon[192914]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:20 compute-0 sudo[337697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yklmuwtqwxdhygssuqkmnjwtcmjximnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898400.0115368-834-242459777895033/AnsiballZ_file.py'
Dec 05 01:33:20 compute-0 sudo[337697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:20 compute-0 podman[337655]: 2025-12-05 01:33:20.511838533 +0000 UTC m=+0.122963693 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:33:20 compute-0 python3.9[337704]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:20 compute-0 sudo[337697]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:21 compute-0 sudo[337854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgclhzkbilkztfbgvtxfquioyrvacqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898400.9564407-834-156082382444247/AnsiballZ_file.py'
Dec 05 01:33:21 compute-0 sudo[337854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:21 compute-0 python3.9[337856]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:21 compute-0 sudo[337854]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:21 compute-0 ceph-mon[192914]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:22 compute-0 podman[337980]: 2025-12-05 01:33:22.418095887 +0000 UTC m=+0.093426308 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:33:22 compute-0 sudo[338023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pddidmcsjzewnpgqngdigbggkcqxkxpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898401.887475-834-33354011559797/AnsiballZ_file.py'
Dec 05 01:33:22 compute-0 sudo[338023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:22 compute-0 python3.9[338032]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:22 compute-0 sudo[338023]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:22 compute-0 ceph-mon[192914]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:23 compute-0 sudo[338182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcaxxisqgqqtyvojbrqtsnwozeafwgnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898402.8449137-834-71243170646963/AnsiballZ_file.py'
Dec 05 01:33:23 compute-0 sudo[338182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:23 compute-0 python3.9[338184]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:23 compute-0 sudo[338182]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:25 compute-0 sudo[338334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfrvyruumjwjipfnkfxagnognbccaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898404.6386914-834-60845967135053/AnsiballZ_file.py'
Dec 05 01:33:25 compute-0 sudo[338334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:25 compute-0 python3.9[338336]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:25 compute-0 sudo[338334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:25 compute-0 ceph-mon[192914]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:33:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:26 compute-0 sudo[338486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aynkismhgdtyhqurmbkgjzlzvxyxyate ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898405.5505238-834-8564675485998/AnsiballZ_file.py'
Dec 05 01:33:26 compute-0 sudo[338486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:27 compute-0 python3.9[338488]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:33:27 compute-0 sudo[338486]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:27 compute-0 ceph-mon[192914]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:27 compute-0 sudo[338638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsermjofhwkoyabcgeahsyspgvbfjknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898407.446481-892-257580097331084/AnsiballZ_command.py'
Dec 05 01:33:27 compute-0 sudo[338638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:28 compute-0 python3.9[338640]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:28 compute-0 sudo[338638]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:28 compute-0 ceph-mon[192914]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:29 compute-0 python3.9[338792]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:33:29 compute-0 podman[158197]: time="2025-12-05T01:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec 05 01:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7698 "" "Go-http-client/1.1"
Dec 05 01:33:30 compute-0 sudo[338954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzpvpokceifzmhhywancztjwxpehxgzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898409.8349397-910-88329647216356/AnsiballZ_systemd_service.py'
Dec 05 01:33:30 compute-0 podman[338916]: 2025-12-05 01:33:30.395688594 +0000 UTC m=+0.101520544 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 01:33:30 compute-0 sudo[338954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:30 compute-0 python3.9[338960]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:33:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:30 compute-0 systemd[1]: Reloading.
Dec 05 01:33:30 compute-0 systemd-rc-local-generator[338978]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:33:30 compute-0 systemd-sysv-generator[338982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:33:31 compute-0 sudo[338954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:33:31 compute-0 podman[339044]: 2025-12-05 01:33:31.6777316 +0000 UTC m=+0.083732298 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:33:31 compute-0 ceph-mon[192914]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:31 compute-0 sudo[339164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvqbaogdvztgekwxqnwouzjkhufuggq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898411.5411534-918-187602998342717/AnsiballZ_command.py'
Dec 05 01:33:32 compute-0 sudo[339164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:32 compute-0 python3.9[339166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:32 compute-0 sudo[339164]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:33 compute-0 sudo[339317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipvlfoojsqqwwyieikuqlbzcymbmsszb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898412.550381-918-54595434627141/AnsiballZ_command.py'
Dec 05 01:33:33 compute-0 sudo[339317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:33 compute-0 python3.9[339319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:33 compute-0 sudo[339317]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:33 compute-0 ceph-mon[192914]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:34 compute-0 sudo[339470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkfewershuljquxziuiiwanuehkmsnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898413.5219297-918-264081183851307/AnsiballZ_command.py'
Dec 05 01:33:34 compute-0 sudo[339470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:34 compute-0 python3.9[339472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:34 compute-0 sudo[339470]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:34 compute-0 ceph-mon[192914]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:34 compute-0 sudo[339623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwkgzodsayrlvmdquccvpwszfyxydza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898414.467627-918-32654872805691/AnsiballZ_command.py'
Dec 05 01:33:34 compute-0 sudo[339623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:35 compute-0 python3.9[339625]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:35 compute-0 sudo[339623]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:35 compute-0 sudo[339776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybuljyvqwwamzjjikesovkfvcskfdjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898415.4022648-918-64603883319569/AnsiballZ_command.py'
Dec 05 01:33:35 compute-0 sudo[339776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:36 compute-0 python3.9[339778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:36 compute-0 sudo[339776]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:37 compute-0 sudo[339929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luotuarrtmsxepclspniyxvfhuhtvkxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898417.0720031-918-73430633037253/AnsiballZ_command.py'
Dec 05 01:33:37 compute-0 sudo[339929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:37 compute-0 python3.9[339931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:37 compute-0 ceph-mon[192914]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:37 compute-0 sudo[339929]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:38 compute-0 sudo[340082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvaiuitykgmfpgjjpxqdvlxptthdcpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898418.0874908-918-204613852391764/AnsiballZ_command.py'
Dec 05 01:33:38 compute-0 sudo[340082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:38 compute-0 python3.9[340084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:38 compute-0 sudo[340082]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:39 compute-0 podman[340111]: 2025-12-05 01:33:39.676702012 +0000 UTC m=+0.082706239 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 05 01:33:39 compute-0 podman[340116]: 2025-12-05 01:33:39.694050146 +0000 UTC m=+0.094611651 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:33:39 compute-0 podman[340118]: 2025-12-05 01:33:39.713493508 +0000 UTC m=+0.119001211 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:33:39 compute-0 podman[340120]: 2025-12-05 01:33:39.722483399 +0000 UTC m=+0.118831877 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 01:33:39 compute-0 ceph-mon[192914]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:40 compute-0 sudo[340317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpxzulhmujamwqhvoqhhmbfuzadsxao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898419.614763-918-237703371426556/AnsiballZ_command.py'
Dec 05 01:33:40 compute-0 sudo[340317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:40 compute-0 python3.9[340319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:33:40 compute-0 sudo[340317]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:40 compute-0 ceph-mon[192914]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:42 compute-0 sudo[340470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdsbvrnwsboguaivfgiqrlzwnmrdlzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898421.888151-997-237264439411585/AnsiballZ_file.py'
Dec 05 01:33:42 compute-0 sudo[340470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:42 compute-0 python3.9[340472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:42 compute-0 sudo[340470]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:43 compute-0 sudo[340622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpzbsiupeyvavljvpxomomxnfoabmrcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898422.794607-997-203488623848437/AnsiballZ_file.py'
Dec 05 01:33:43 compute-0 sudo[340622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:43 compute-0 python3.9[340624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:43 compute-0 sudo[340622]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:43 compute-0 ceph-mon[192914]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:44 compute-0 sudo[340774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwmprbczsirwbjfxjgwxfcsqtyhzbxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898423.8362584-997-246786808822613/AnsiballZ_file.py'
Dec 05 01:33:44 compute-0 sudo[340774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:44 compute-0 python3.9[340776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:44 compute-0 sudo[340774]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:44 compute-0 podman[340805]: 2025-12-05 01:33:44.844676276 +0000 UTC m=+0.098112939 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, container_name=kepler)
Dec 05 01:33:45 compute-0 sudo[340943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqontfkxyvhbehosipqzoxybglcgyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898424.7702854-1019-104783641859340/AnsiballZ_file.py'
Dec 05 01:33:45 compute-0 sudo[340943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:45 compute-0 python3.9[340945]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:45 compute-0 sudo[340943]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:45 compute-0 ceph-mon[192914]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:46 compute-0 sudo[341095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqzzxkbpomazzqnldcbjmuyykzojbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898425.6746018-1019-167549427127396/AnsiballZ_file.py'
Dec 05 01:33:46 compute-0 sudo[341095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:33:46 compute-0 python3.9[341097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:46 compute-0 sudo[341095]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:46 compute-0 ceph-mon[192914]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:47 compute-0 sudo[341247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkomqefupjrmldrbylpyrbtloakxwygt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898426.6418-1019-21514148524268/AnsiballZ_file.py'
Dec 05 01:33:47 compute-0 sudo[341247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:47 compute-0 python3.9[341249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:47 compute-0 sudo[341247]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:48 compute-0 sudo[341399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrlagxknzfevqkqemlfwscndczzkhan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898427.5928497-1019-3934839431505/AnsiballZ_file.py'
Dec 05 01:33:48 compute-0 sudo[341399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:48 compute-0 python3.9[341401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:48 compute-0 sudo[341399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:49 compute-0 ceph-mon[192914]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:49 compute-0 sudo[341551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqelvqljssculcmtdwimoreztixtotvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898428.6094887-1019-19795030170812/AnsiballZ_file.py'
Dec 05 01:33:49 compute-0 sudo[341551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:49 compute-0 python3.9[341553]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:49 compute-0 sudo[341551]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:50 compute-0 sudo[341704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cttozhrsotdbcufwkhetsfhgbebdaasd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898429.6311865-1019-173066270765092/AnsiballZ_file.py'
Dec 05 01:33:50 compute-0 sudo[341704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:50 compute-0 python3.9[341706]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:50 compute-0 sudo[341704]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:50 compute-0 podman[341731]: 2025-12-05 01:33:50.74232628 +0000 UTC m=+0.150257054 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:33:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:51 compute-0 ceph-mon[192914]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:51 compute-0 sudo[341876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpfnkfdzddfvbpkhfsmqdgchrfdqctff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898430.6862233-1019-61520921390221/AnsiballZ_file.py'
Dec 05 01:33:51 compute-0 sudo[341876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:52 compute-0 python3.9[341878]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:33:52 compute-0 sudo[341876]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:52 compute-0 podman[341903]: 2025-12-05 01:33:52.693040725 +0000 UTC m=+0.098391586 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
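The node_exporter container above publishes port 9100 and loads a web.config.file. A quick smoke test of that endpoint could look like the following sketch; plain HTTP is an assumption, since the mounted node_exporter.yaml may enforce TLS instead:

    import urllib.request

    # Hypothetical probe; switch to https and supply certificates if
    # node_exporter.yaml enables TLS on the listener.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()

    # Print the first node_* sample as a sanity check.
    for line in body.splitlines():
        if line.startswith("node_"):
            print(line)
            break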
Dec 05 01:33:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:53 compute-0 ceph-mon[192914]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:54 compute-0 ceph-mon[192914]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
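The three oslo_concurrency DEBUG lines above are one acquire/run/release cycle of an in-process lock. A minimal sketch of that pattern with the real oslo.concurrency API; the function body is a placeholder, not neutron's ProcessMonitor code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes() -> None:
        # neutron's ProcessMonitor would inspect its monitored child
        # processes here; this body is a placeholder.
        pass

    # Emits the acquire/release DEBUG lines when oslo logging is at DEBUG.
    _check_child_processes()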
Dec 05 01:33:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:33:57 compute-0 ceph-mon[192914]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:58 compute-0 sudo[342050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-girovlwwdwaeezujytoipkaxvtcwnkgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898437.731631-1208-65743887975260/AnsiballZ_getent.py'
Dec 05 01:33:58 compute-0 sudo[342050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:58 compute-0 sudo[342053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:33:58 compute-0 sudo[342053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:58 compute-0 sudo[342053]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:58 compute-0 python3.9[342052]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 05 01:33:58 compute-0 sudo[342050]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:58 compute-0 sudo[342078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:33:58 compute-0 sudo[342078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:58 compute-0 sudo[342078]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:58 compute-0 sudo[342107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:33:58 compute-0 sudo[342107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:58 compute-0 sudo[342107]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:58 compute-0 sudo[342153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:33:58 compute-0 sudo[342153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:59 compute-0 sudo[342153]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a9e95ca3-4e8d-4c4e-8cfb-86c1c4928c86 does not exist
Dec 05 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 560a3136-509a-428a-b58b-803e3b17fc03 does not exist
Dec 05 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fe101afd-b09c-47a1-8dcb-e63129a8984b does not exist
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
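The mon_command dispatches above ("config generate-minimal-conf", "auth get", "osd tree") are issued by the cephadm mgr module. The same monitor commands can be sent from Python through the rados binding, a sketch assuming a readable local /etc/ceph/ceph.conf and client.admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        if ret == 0:
            print(out.decode())      # the minimal ceph.conf text
        else:
            print(f"mon_command failed: {ret} {errs}")
    finally:
        cluster.shutdown()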
Dec 05 01:33:59 compute-0 sudo[342334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwqndhizkwtfvtvfhohespfrgigzdhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898438.8602982-1216-149893589944254/AnsiballZ_group.py'
Dec 05 01:33:59 compute-0 sudo[342334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:33:59 compute-0 sudo[342336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:33:59 compute-0 sudo[342336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:59 compute-0 sudo[342336]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:59 compute-0 podman[158197]: time="2025-12-05T01:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
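The GET /v4.9.3/libpod/containers/json request above is a client hitting the podman REST API over its unix socket. A sketch of the same call using requests-unixsocket; the socket path /run/podman/podman.sock is an assumption, as the log does not show where this API service is listening:

    import requests_unixsocket  # pip install requests-unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock/v4.9.3/libpod/containers/json",
        params={"all": "true", "external": "false"},
    )
    resp.raise_for_status()
    for ctr in resp.json():
        print(ctr["Names"], ctr["State"])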
Dec 05 01:33:59 compute-0 sudo[342362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:33:59 compute-0 sudo[342362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:59 compute-0 sudo[342362]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:59 compute-0 python3.9[342341]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 05 01:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7709 "" "Go-http-client/1.1"
Dec 05 01:33:59 compute-0 groupadd[342388]: group added to /etc/group: name=nova, GID=42436
Dec 05 01:33:59 compute-0 groupadd[342388]: group added to /etc/gshadow: name=nova
Dec 05 01:33:59 compute-0 groupadd[342388]: new group: name=nova, GID=42436
Dec 05 01:33:59 compute-0 ceph-mon[192914]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:33:59 compute-0 sudo[342387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:33:59 compute-0 sudo[342387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:33:59 compute-0 sudo[342387]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:59 compute-0 sudo[342334]: pam_unix(sudo:session): session closed for user root
Dec 05 01:33:59 compute-0 sudo[342418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
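The cephadm command above batches three pre-built LVs into OSDs via ceph-volume inside the ceph container. A sketch driving the same invocation from Python; the fsid, image digest, and LV paths are copied from the log line, while the config-json payload (which cephadm pipes in on stdin) is a placeholder:

    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

    # Placeholder: the real payload carries the minimal ceph.conf and the
    # bootstrap-osd keyring; its content is not shown in the log.
    config_json = '{"config": "...", "keyring": "..."}'

    subprocess.run(
        ["python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto", *LVS, "--yes", "--no-systemd"],
        input=config_json.encode(), check=True)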
Dec 05 01:33:59 compute-0 sudo[342418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.462458803 +0000 UTC m=+0.097641126 container create d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.42187085 +0000 UTC m=+0.057053243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:00 compute-0 systemd[1]: Started libpod-conmon-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope.
Dec 05 01:34:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.641534319 +0000 UTC m=+0.276716732 container init d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:34:00 compute-0 podman[342575]: 2025-12-05 01:34:00.644051399 +0000 UTC m=+0.114988809 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.657423183 +0000 UTC m=+0.292605536 container start d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.664555692 +0000 UTC m=+0.299738045 container attach d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:34:00 compute-0 nostalgic_chaplygin[342606]: 167 167
Dec 05 01:34:00 compute-0 systemd[1]: libpod-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope: Deactivated successfully.
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.66985678 +0000 UTC m=+0.305039133 container died d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-013c64b8af0236e9f2ea180c4e79df7b7840720f09776e6a9b74f6bf372d0c60-merged.mount: Deactivated successfully.
Dec 05 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.72181495 +0000 UTC m=+0.356997263 container remove d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:34:00 compute-0 systemd[1]: libpod-conmon-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope: Deactivated successfully.
Dec 05 01:34:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:00 compute-0 sudo[342684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzorbnlfavkoexnabmqgpokufcmykcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898440.1426423-1224-190217424652276/AnsiballZ_user.py'
Dec 05 01:34:00 compute-0 sudo[342684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:00 compute-0 ceph-mon[192914]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:00 compute-0 podman[342692]: 2025-12-05 01:34:00.940620056 +0000 UTC m=+0.066373183 container create 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:00.908774887 +0000 UTC m=+0.034528054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:01 compute-0 systemd[1]: Started libpod-conmon-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope.
Dec 05 01:34:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:01 compute-0 python3.9[342686]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 05 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.103077719 +0000 UTC m=+0.228830866 container init 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:34:01 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:34:01 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.119381934 +0000 UTC m=+0.245135041 container start 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.126716939 +0000 UTC m=+0.252470046 container attach 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:34:01 compute-0 useradd[342713]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 05 01:34:01 compute-0 useradd[342713]: add 'nova' to group 'libvirt'
Dec 05 01:34:01 compute-0 useradd[342713]: add 'nova' to shadow group 'libvirt'
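The groupadd/useradd events above realize the logged ansible group and user tasks. An equivalent direct sketch with shadow-utils, using the UID/GID, shell, and supplementary group from the log (it assumes the libvirt group already exists, as it does on this host):

    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    run("groupadd", "--gid", "42436", "nova")
    run("useradd",
        "--uid", "42436",
        "--gid", "nova",
        "--groups", "libvirt",    # "add 'nova' to group 'libvirt'" in the log
        "--shell", "/bin/sh",
        "--comment", "nova user",
        "--create-home",
        "nova")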
Dec 05 01:34:01 compute-0 sudo[342684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:34:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:34:02 compute-0 sshd-session[342757]: Accepted publickey for zuul from 192.168.122.30 port 39270 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:34:02 compute-0 systemd-logind[792]: New session 57 of user zuul.
Dec 05 01:34:02 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec 05 01:34:02 compute-0 sshd-session[342757]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:34:02 compute-0 podman[342763]: 2025-12-05 01:34:02.282475512 +0000 UTC m=+0.119344302 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 05 01:34:02 compute-0 jovial_elion[342709]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:34:02 compute-0 jovial_elion[342709]: --> relative data size: 1.0
Dec 05 01:34:02 compute-0 jovial_elion[342709]: --> All data devices are unavailable
Dec 05 01:34:02 compute-0 sshd-session[342775]: Received disconnect from 192.168.122.30 port 39270:11: disconnected by user
Dec 05 01:34:02 compute-0 sshd-session[342775]: Disconnected from user zuul 192.168.122.30 port 39270
Dec 05 01:34:02 compute-0 sshd-session[342757]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:34:02 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec 05 01:34:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:02 compute-0 systemd-logind[792]: Session 57 logged out. Waiting for processes to exit.
Dec 05 01:34:02 compute-0 systemd-logind[792]: Removed session 57.
Dec 05 01:34:02 compute-0 systemd[1]: libpod-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Deactivated successfully.
Dec 05 01:34:02 compute-0 systemd[1]: libpod-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Consumed 1.185s CPU time.
Dec 05 01:34:02 compute-0 podman[342815]: 2025-12-05 01:34:02.456456777 +0000 UTC m=+0.050201182 container died 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6-merged.mount: Deactivated successfully.
Dec 05 01:34:02 compute-0 podman[342815]: 2025-12-05 01:34:02.542173609 +0000 UTC m=+0.135918054 container remove 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:34:02 compute-0 systemd[1]: libpod-conmon-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Deactivated successfully.
Dec 05 01:34:02 compute-0 sudo[342418]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:02 compute-0 sudo[342859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:34:02 compute-0 sudo[342859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:02 compute-0 sudo[342859]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:02 compute-0 sudo[342906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:34:02 compute-0 sudo[342906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:02 compute-0 sudo[342906]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:02 compute-0 sudo[342954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:34:02 compute-0 sudo[342954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:02 compute-0 sudo[342954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:03 compute-0 sudo[343004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:34:03 compute-0 sudo[343004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:03 compute-0 python3.9[343052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.637962757 +0000 UTC m=+0.088921612 container create c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.601730496 +0000 UTC m=+0.052689371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:03 compute-0 systemd[1]: Started libpod-conmon-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope.
Dec 05 01:34:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.764848238 +0000 UTC m=+0.215807133 container init c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.781486862 +0000 UTC m=+0.232445707 container start c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.787736567 +0000 UTC m=+0.238695472 container attach c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:34:03 compute-0 recursing_buck[343130]: 167 167
Dec 05 01:34:03 compute-0 systemd[1]: libpod-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope: Deactivated successfully.
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.792648164 +0000 UTC m=+0.243607009 container died c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aceff54755a3c2fab8f208344b978e53436005de3c62489df5fb6602c306136a-merged.mount: Deactivated successfully.
Dec 05 01:34:03 compute-0 ceph-mon[192914]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.861531196 +0000 UTC m=+0.312490041 container remove c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:34:03 compute-0 systemd[1]: libpod-conmon-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope: Deactivated successfully.
Dec 05 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.100610507 +0000 UTC m=+0.075917119 container create 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:34:04 compute-0 systemd[1]: Started libpod-conmon-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope.
Dec 05 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.075178988 +0000 UTC m=+0.050485600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.246090736 +0000 UTC m=+0.221397328 container init 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.265606781 +0000 UTC m=+0.240913403 container start 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.271198157 +0000 UTC m=+0.246504759 container attach 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:34:04 compute-0 python3.9[343271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898442.5957577-1249-269042340164892/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:05 compute-0 determined_greider[343216]: {
Dec 05 01:34:05 compute-0 determined_greider[343216]:     "0": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:         {
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "devices": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "/dev/loop3"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             ],
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_name": "ceph_lv0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_size": "21470642176",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "name": "ceph_lv0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "tags": {
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_name": "ceph",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.crush_device_class": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.encrypted": "0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_id": "0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.vdo": "0"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             },
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "vg_name": "ceph_vg0"
Dec 05 01:34:05 compute-0 determined_greider[343216]:         }
Dec 05 01:34:05 compute-0 determined_greider[343216]:     ],
Dec 05 01:34:05 compute-0 determined_greider[343216]:     "1": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:         {
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "devices": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "/dev/loop4"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             ],
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_name": "ceph_lv1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_size": "21470642176",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "name": "ceph_lv1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "tags": {
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_name": "ceph",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.crush_device_class": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.encrypted": "0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_id": "1",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.vdo": "0"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             },
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "vg_name": "ceph_vg1"
Dec 05 01:34:05 compute-0 determined_greider[343216]:         }
Dec 05 01:34:05 compute-0 determined_greider[343216]:     ],
Dec 05 01:34:05 compute-0 determined_greider[343216]:     "2": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:         {
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "devices": [
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "/dev/loop5"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             ],
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_name": "ceph_lv2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_size": "21470642176",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "name": "ceph_lv2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "tags": {
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.cluster_name": "ceph",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.crush_device_class": "",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.encrypted": "0",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osd_id": "2",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:                 "ceph.vdo": "0"
Dec 05 01:34:05 compute-0 determined_greider[343216]:             },
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "type": "block",
Dec 05 01:34:05 compute-0 determined_greider[343216]:             "vg_name": "ceph_vg2"
Dec 05 01:34:05 compute-0 determined_greider[343216]:         }
Dec 05 01:34:05 compute-0 determined_greider[343216]:     ]
Dec 05 01:34:05 compute-0 determined_greider[343216]: }
Dec 05 01:34:05 compute-0 systemd[1]: libpod-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope: Deactivated successfully.
Dec 05 01:34:05 compute-0 podman[343173]: 2025-12-05 01:34:05.254358512 +0000 UTC m=+1.229665134 container died 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9-merged.mount: Deactivated successfully.
Dec 05 01:34:05 compute-0 podman[343173]: 2025-12-05 01:34:05.352503371 +0000 UTC m=+1.327809943 container remove 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec 05 01:34:05 compute-0 systemd[1]: libpod-conmon-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope: Deactivated successfully.
Dec 05 01:34:05 compute-0 sudo[343004]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:05 compute-0 python3.9[343426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:05 compute-0 sudo[343439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:34:05 compute-0 sudo[343439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:05 compute-0 sudo[343439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:05 compute-0 sudo[343466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:34:05 compute-0 sudo[343466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:05 compute-0 sudo[343466]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:05 compute-0 sudo[343491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:34:05 compute-0 sudo[343491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:05 compute-0 sudo[343491]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:05 compute-0 sudo[343516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:34:05 compute-0 sudo[343516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:05 compute-0 ceph-mon[192914]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.245178171 +0000 UTC m=+0.068209084 container create 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.210765201 +0000 UTC m=+0.033796194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:06 compute-0 systemd[1]: Started libpod-conmon-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope.
Dec 05 01:34:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.352714242 +0000 UTC m=+0.175745145 container init 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.365386946 +0000 UTC m=+0.188417849 container start 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.369293775 +0000 UTC m=+0.192324728 container attach 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 01:34:06 compute-0 happy_swartz[343613]: 167 167
Dec 05 01:34:06 compute-0 systemd[1]: libpod-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope: Deactivated successfully.
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.374634674 +0000 UTC m=+0.197665607 container died 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-58e80a61d960c5587b574c88d14efcada466dbf8e5eb80648ca5db737bcf632f-merged.mount: Deactivated successfully.
Dec 05 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.427069477 +0000 UTC m=+0.250100380 container remove 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:34:06 compute-0 systemd[1]: libpod-conmon-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope: Deactivated successfully.
Dec 05 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.668083193 +0000 UTC m=+0.074322635 container create c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:34:06 compute-0 python3.9[343685]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.641977194 +0000 UTC m=+0.048216646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:34:06 compute-0 systemd[1]: Started libpod-conmon-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope.
Dec 05 01:34:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.832291755 +0000 UTC m=+0.238531267 container init c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.848834777 +0000 UTC m=+0.255074229 container start c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.855483162 +0000 UTC m=+0.261722614 container attach c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:34:06 compute-0 ceph-mon[192914]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:07 compute-0 python3.9[343861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:08 compute-0 exciting_kilby[343707]: {
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_id": 0,
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "type": "bluestore"
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     },
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_id": 1,
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "type": "bluestore"
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     },
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_id": 2,
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:         "type": "bluestore"
Dec 05 01:34:08 compute-0 exciting_kilby[343707]:     }
Dec 05 01:34:08 compute-0 exciting_kilby[343707]: }
Dec 05 01:34:08 compute-0 systemd[1]: libpod-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Deactivated successfully.
Dec 05 01:34:08 compute-0 systemd[1]: libpod-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Consumed 1.223s CPU time.
Dec 05 01:34:08 compute-0 podman[343962]: 2025-12-05 01:34:08.172604686 +0000 UTC m=+0.070032015 container died c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d-merged.mount: Deactivated successfully.
Dec 05 01:34:08 compute-0 podman[343962]: 2025-12-05 01:34:08.28454834 +0000 UTC m=+0.181975679 container remove c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:34:08 compute-0 systemd[1]: libpod-conmon-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Deactivated successfully.
Dec 05 01:34:08 compute-0 sudo[343516]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:34:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:34:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:34:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 35bb95c6-2613-4809-b8d6-4f8732e32256 does not exist
Dec 05 01:34:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fbfb9c7f-41eb-4f93-8145-2a4fd4c3fae1 does not exist
Dec 05 01:34:08 compute-0 sudo[344022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:34:08 compute-0 sudo[344022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:08 compute-0 python3.9[344021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898446.9884043-1249-164230920362177/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:08 compute-0 sudo[344022]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:08 compute-0 sudo[344047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:34:08 compute-0 sudo[344047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:34:08 compute-0 sudo[344047]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:34:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:34:09 compute-0 ceph-mon[192914]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:09 compute-0 python3.9[344221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:10 compute-0 podman[344316]: 2025-12-05 01:34:10.206578736 +0000 UTC m=+0.087233266 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:34:10 compute-0 podman[344318]: 2025-12-05 01:34:10.222545851 +0000 UTC m=+0.095965109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 01:34:10 compute-0 podman[344317]: 2025-12-05 01:34:10.22859322 +0000 UTC m=+0.096967277 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:34:10 compute-0 podman[344319]: 2025-12-05 01:34:10.268085382 +0000 UTC m=+0.130837922 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:34:10 compute-0 python3.9[344407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898448.7559974-1249-86805877696747/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:11 compute-0 python3.9[344575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:11 compute-0 ceph-mon[192914]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:12 compute-0 python3.9[344696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898450.6688058-1249-88355885668195/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:12 compute-0 python3.9[344846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:13 compute-0 python3.9[344967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898452.2692113-1249-228933640642297/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:13 compute-0 ceph-mon[192914]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:14 compute-0 sudo[345117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgrovmptgijrzljcpvaekakqczfetusf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898453.996472-1332-75465804823142/AnsiballZ_file.py'
Dec 05 01:34:14 compute-0 sudo[345117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:14 compute-0 python3.9[345119]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:34:14 compute-0 sudo[345117]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:14 compute-0 ceph-mon[192914]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:15 compute-0 sudo[345281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxojgrruspwogxdahzyxssbljprhjqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898455.0517752-1340-173101857134075/AnsiballZ_copy.py'
Dec 05 01:34:15 compute-0 sudo[345281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:15 compute-0 podman[345243]: 2025-12-05 01:34:15.59535958 +0000 UTC m=+0.123671302 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 05 01:34:15 compute-0 python3.9[345287]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:34:15 compute-0 sudo[345281]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:34:16
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:34:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:17 compute-0 sudo[345439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbpxojhdichymvdpzqajoykwoldhwdol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898456.674917-1348-105143997710358/AnsiballZ_stat.py'
Dec 05 01:34:17 compute-0 sudo[345439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:17 compute-0 python3.9[345441]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:34:17 compute-0 sudo[345439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:17 compute-0 ceph-mon[192914]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:19 compute-0 sudo[345591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwmrafojlivvlnccdzdgjcmnbhggpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898457.6766553-1356-153094691212899/AnsiballZ_stat.py'
Dec 05 01:34:19 compute-0 sudo[345591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:19 compute-0 python3.9[345593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:19 compute-0 sudo[345591]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:19 compute-0 ceph-mon[192914]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:19 compute-0 sudo[345715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqjtfznrlgfbvpxnmatnjnkvdaiedrwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898457.6766553-1356-153094691212899/AnsiballZ_copy.py'
Dec 05 01:34:19 compute-0 sudo[345715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:20 compute-0 python3.9[345717]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764898457.6766553-1356-153094691212899/.source _original_basename=.3knihjcz follow=False checksum=837b301b2ce47747228e2c392556c83935f6fd48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 05 01:34:20 compute-0 sudo[345715]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:20 compute-0 ceph-mon[192914]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:21 compute-0 podman[345843]: 2025-12-05 01:34:21.158424879 +0000 UTC m=+0.098621873 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec 05 01:34:21 compute-0 python3.9[345885]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:34:22 compute-0 python3.9[346041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:22 compute-0 podman[346136]: 2025-12-05 01:34:22.889397352 +0000 UTC m=+0.101367290 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5667 writes, 23K keys, 5667 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5667 writes, 889 syncs, 6.37 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 05 01:34:23 compute-0 python3.9[346174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898461.6264431-1382-128199124472342/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:23 compute-0 ceph-mon[192914]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:24 compute-0 python3.9[346333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:34:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:24 compute-0 python3.9[346454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898463.3931947-1397-254673578141208/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:34:25 compute-0 sudo[346604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyulkvcvjuyhqvudwgdctlgreqjbpanf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898465.3134959-1414-20886611642407/AnsiballZ_container_config_data.py'
Dec 05 01:34:25 compute-0 sudo[346604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:25 compute-0 ceph-mon[192914]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:26 compute-0 python3.9[346606]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 05 01:34:26 compute-0 sudo[346604]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:34:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:26 compute-0 ceph-mon[192914]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:26 compute-0 sudo[346756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxrumwrphetzczgrxhitagqfkxztrzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898466.418515-1423-38124157312203/AnsiballZ_container_config_hash.py'
Dec 05 01:34:26 compute-0 sudo[346756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:27 compute-0 python3.9[346758]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:34:27 compute-0 sudo[346756]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:28 compute-0 sudo[346908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzenrblxerdcufdpzhnmjqfmmujyxyrv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898467.4745896-1433-76791495565931/AnsiballZ_edpm_container_manage.py'
Dec 05 01:34:28 compute-0 sudo[346908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:28 compute-0 python3[346910]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:34:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Cumulative writes: 7007 writes, 28K keys, 7007 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 7007 writes, 1237 syncs, 5.66 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670a430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.2 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
Dec 05 01:34:29 compute-0 podman[158197]: time="2025-12-05T01:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec 05 01:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7692 "" "Go-http-client/1.1"
Dec 05 01:34:29 compute-0 ceph-mon[192914]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:30 compute-0 ceph-mon[192914]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:34:31 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:34:31 compute-0 podman[346948]: 2025-12-05 01:34:31.674132402 +0000 UTC m=+0.089811638 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 01:34:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:33 compute-0 ceph-mon[192914]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:34 compute-0 ceph-mon[192914]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Cumulative writes: 5737 writes, 24K keys, 5737 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                            Cumulative WAL: 5737 writes, 931 syncs, 6.16 writes per sync, written: 0.02 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
                                            
                                            ** Compaction Stats [m-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-0] **
                                            
                                            ** Compaction Stats [m-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-1] **
                                            
                                            ** Compaction Stats [m-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [m-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [m-2] **
                                            
                                            ** Compaction Stats [p-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-0] **
                                            
                                            ** Compaction Stats [p-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-1] **
                                            
                                            ** Compaction Stats [p-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [p-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [p-2] **
                                            
                                            ** Compaction Stats [O-0] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-0] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-0] **
                                            
                                            ** Compaction Stats [O-1] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-1] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-1] **
                                            
                                            ** Compaction Stats [O-2] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [O-2] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575e430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [O-2] **
                                            
                                            ** Compaction Stats [L] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [L] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [L] **
                                            
                                            ** Compaction Stats [P] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            
                                            ** Compaction Stats [P] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1200.1 total, 600.0 interval
                                            Flush(GB): cumulative 0.000, interval 0.000
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [P] **
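The block ending here is ceph-osd's embedded RocksDB printing its periodic statistics; the "600.0 interval" matches RocksDB's default stats_dump_period_sec of 600 seconds. The repeated [m-*], [p-*], [O-*], [L] and [P] sections are BlueStore's sharded RocksDB column families, all essentially idle on this OSD, and the occupancy value 18446744073709551615 is simply 2^64-1, which reads as a counter Ceph's BinnedLRUCache does not track rather than a real occupancy. A small sketch of pulling the write/sync counters out of the DB Stats header, using the line layout shown above (the regexes are an assumption about that text format, not a RocksDB API):

import re

# Two lines copied from the "** DB Stats **" section above.
SAMPLE = (
    "Cumulative writes: 5737 writes, 24K keys, 5737 commit groups, "
    "1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s\n"
    "Cumulative WAL: 5737 writes, 931 syncs, 6.16 writes per sync, "
    "written: 0.02 GB, 0.02 MB/s\n"
)

def wal_pressure(stats: str) -> dict:
    # Matches the layout printed by the stats dump; a format change in a
    # future RocksDB release would break these patterns.
    writes = int(re.search(r"Cumulative writes: (\d+) writes", stats).group(1))
    syncs = int(re.search(r"Cumulative WAL: \d+ writes, (\d+) syncs", stats).group(1))
    return {"writes": writes, "syncs": syncs,
            "writes_per_sync": round(writes / syncs, 2)}

print(wal_pressure(SAMPLE))
# {'writes': 5737, 'syncs': 931, 'writes_per_sync': 6.16}

The two counters appear once per dump; the per-column-family tables that follow repeat the same schema for each shard, which is why the dump looks so long while carrying almost no activity.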
Dec 05 01:34:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:37 compute-0 ceph-mon[192914]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:37 compute-0 podman[346981]: 2025-12-05 01:34:37.51274498 +0000 UTC m=+5.121480687 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:34:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 01:34:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:40 compute-0 ceph-mon[192914]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:41 compute-0 ceph-mon[192914]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
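The recurring _set_new_cache_sizes lines appear to be the monitor's periodic memory autotuning dividing its memory target among its caches; the raw byte counts are easier to read in MiB. A quick conversion of the figures from the line above (labels copied from the log, the MiB framing is ours):

# Byte counts from the _set_new_cache_sizes entry above.
sizes = {
    "cache_size": 1020054731,
    "inc_alloc": 348127232,
    "full_alloc": 348127232,
    "kv_alloc": 322961408,
}
for label, n in sizes.items():
    print(f"{label:>10}: {n / 2**20:7.1f} MiB")
# cache_size ~ 972.8 MiB; inc/full_alloc = 332.0 MiB; kv_alloc = 308.0 MiB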
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.547 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.548 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
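
[Note] The burst of register_pollster_execution lines above is ceilometer's polling manager wiring each stevedore-discovered pollster extension to one shared ThreadPoolExecutor, with empty cache, history, and discovery-cache dicts on the first cycle. A minimal sketch of that registration pattern, assuming stevedore is installed and ceilometer's compute entry-point namespace is present (names are illustrative, not ceilometer's actual code):

    # Minimal sketch: load entry-point plugins with stevedore and register
    # each one against a shared thread pool, as the log lines above show.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    executor = ThreadPoolExecutor(max_workers=4)

    def register_pollster_execution(ext, cache, history, discovery_cache):
        # ceilometer emits one DEBUG line per extension at exactly this point
        print("Registering pollster [%r] via executor [%r]" % (ext, executor))

    mgr = extension.ExtensionManager('ceilometer.poll.compute')
    for ext in mgr:
        register_pollster_execution(ext, cache={}, history={}, discovery_cache={})
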
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
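
[Note] Each "Executing discovery process" / "Skip pollster" pair above is one pollster asking the AgentManager's local_instances discovery for VMs and bailing out when the list comes back empty; this node has no instances yet, so every compute meter is skipped this cycle. A conceptual simplification of that control flow (hypothetical, not ceilometer's real code):

    # Conceptual sketch of the discover-then-skip cycle seen in the log.
    class Pollster:
        def __init__(self, name):
            self.name = name
        def get_samples(self, manager, cache, resources):
            return []  # real pollsters query libvirt per discovered instance

    def discover(method):
        return []  # 'local_instances' finds no VMs on this idle compute node

    def internal_pollster_run(pollster):
        resources = discover('local_instances')
        if not resources:
            print("Skip pollster %s, no resources found this cycle" % pollster.name)
            return
        for sample in pollster.get_samples(None, {}, resources):
            print(sample)

    internal_pollster_run(Pollster('cpu'))
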
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
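
[Note] The run of "Finished processing pollster" lines is the other side of the same machinery: each registered polling task completing on the shared executor. Roughly (illustrative only):

    # Sketch: tasks completing on the shared executor produce one
    # "Finished processing pollster [...]" line each.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    names = ['cpu', 'memory.usage', 'disk.root.size']
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {pool.submit(lambda: None): n for n in names}
        for fut in as_completed(futures):
            fut.result()
            print("Finished processing pollster [%s]." % futures[fut])
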
Dec 05 01:34:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:43 compute-0 ceph-mon[192914]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
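
[Note] The interleaved ceph-mgr/ceph-mon pgmap lines are the cluster heartbeat: one pgmap epoch per tick, all 321 PGs active+clean, plus data/used/available capacity. If you need those numbers out of a journal dump, a small regex written against the exact format above does it:

    # Pull PG counts and capacity out of a pgmap journal line (format as logged).
    import re

    line = ("pgmap v753: 321 pgs: 321 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(r'pgmap v(\d+): (\d+) pgs: .*; (\S+ \w+) data, '
                  r'(\S+ \w+) used, (\S+ \w+) / (\S+ \w+) avail', line)
    if m:
        version, pgs, data, used, free, total = m.groups()
        print(version, pgs, data, used, free, total)
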
Dec 05 01:34:44 compute-0 podman[347017]: 2025-12-05 01:34:44.579554531 +0000 UTC m=+3.991445363 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:34:44 compute-0 podman[347018]: 2025-12-05 01:34:44.591627228 +0000 UTC m=+3.998738947 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:34:44 compute-0 podman[347019]: 2025-12-05 01:34:44.595667151 +0000 UTC m=+4.001364891 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:34:44 compute-0 podman[346925]: 2025-12-05 01:34:44.612182832 +0000 UTC m=+16.179576647 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 01:34:44 compute-0 podman[347020]: 2025-12-05 01:34:44.620040951 +0000 UTC m=+4.020709900 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
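
[Note] The podman health_status events above are podman running each container's configured healthcheck (the 'test' command against the mounted /openstack healthcheck script) and recording the result plus a failing streak. The current status can be read back from the host; note the Go-template field is .State.Health on recent podman and .State.Healthcheck on older releases:

    # Read a container's health from the host (container name from the log;
    # the template path varies across podman versions, see note above).
    import subprocess

    res = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ceilometer_agent_compute'],
        capture_output=True, text=True)
    print(res.stdout.strip() or res.stderr.strip())
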
Dec 05 01:34:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:44 compute-0 podman[347118]: 2025-12-05 01:34:44.829571898 +0000 UTC m=+0.098218642 container create 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:34:44 compute-0 podman[347118]: 2025-12-05 01:34:44.764636726 +0000 UTC m=+0.033283460 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 01:34:44 compute-0 python3[346910]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
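
[Note] The PODMAN-CONTAINER-DEBUG line shows what edpm_container_manage does with the config_data dict it also stores as a container label: each key is rendered to the matching podman create flag ('net' becomes --network, 'environment' becomes --env, 'volumes' becomes --volume, and so on), always with journald logging and a conmon pidfile. A simplified sketch of that mapping (not the actual edpm_ansible source):

    # Simplified config_data -> `podman create` argv mapping (illustrative).
    def to_podman_create(name, cfg):
        cmd = ['podman', 'create', '--name', name,
               '--conmon-pidfile', '/run/%s.pid' % name,
               '--log-driver', 'journald', '--log-level', 'info']
        for key, val in cfg.get('environment', {}).items():
            cmd += ['--env', '%s=%s' % (key, val)]
        if 'net' in cfg:
            cmd += ['--network', cfg['net']]
        if 'privileged' in cfg:
            cmd.append('--privileged=%s' % cfg['privileged'])
        if 'user' in cfg:
            cmd += ['--user', cfg['user']]
        for vol in cfg.get('volumes', []):
            cmd += ['--volume', vol]
        cmd.append(cfg['image'])
        return cmd

    print(' '.join(to_podman_create('nova_compute_init', {
        'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified',
        'net': 'none', 'user': 'root', 'privileged': False,
        'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id'},
        'volumes': ['/dev/log:/dev/log']})))
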
Dec 05 01:34:45 compute-0 sudo[346908]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:45 compute-0 ceph-mon[192914]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:46 compute-0 podman[347280]: 2025-12-05 01:34:46.025349657 +0000 UTC m=+0.117310385 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 01:34:46 compute-0 sudo[347323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwdrmjkxtcrvxcerdklpedusgohhcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898485.479684-1441-175766587749330/AnsiballZ_stat.py'
Dec 05 01:34:46 compute-0 sudo[347323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:46 compute-0 python3.9[347326]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
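
[Note] The zuul sudo lines show Ansible's become pattern: every privileged task runs as /bin/sh -c 'echo BECOME-SUCCESS-<random> ; python3 .../AnsiballZ_<module>.py', where the echoed token lets the controller confirm escalation succeeded before module output begins, and the AnsiballZ payload is the self-extracting module archive. The marker trick in miniature (simplified; Ansible's real token format differs):

    # Sketch of the become marker: echo a random token before the payload,
    # then scan stdout for it to confirm privilege escalation worked.
    import secrets, subprocess

    token = 'BECOME-SUCCESS-%s' % secrets.token_hex(16)
    cmd = "echo %s ; id -u" % token
    out = subprocess.run(['sh', '-c', cmd], capture_output=True, text=True)
    escalated = token in out.stdout
    print(escalated, out.stdout.splitlines()[-1])
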
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:34:46 compute-0 sudo[347323]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:46 compute-0 ceph-mon[192914]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:47 compute-0 sudo[347478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmxznrycqggalhzyydusjwntlkggrmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898486.7480037-1453-216493859035229/AnsiballZ_container_config_data.py'
Dec 05 01:34:47 compute-0 sudo[347478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:47 compute-0 python3.9[347480]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 05 01:34:47 compute-0 sudo[347478]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:48 compute-0 sudo[347630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udvwzytabqvgvkycyqnlodeqbvnekkds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898487.867687-1462-252584613281259/AnsiballZ_container_config_hash.py'
Dec 05 01:34:48 compute-0 sudo[347630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:48 compute-0 python3.9[347632]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:34:48 compute-0 sudo[347630]: pam_unix(sudo:session): session closed for user root
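
[Note] The container_config_data / container_config_hash pair above is change detection: the nova_compute.json config is read, the rendered config files are hashed, and the result is compared with what the running container was created from; a mismatch makes the edpm_container_manage task that follows recreate the container. Conceptually (an assumption about the mechanism, not the module source):

    # Conceptual sketch: hash a config tree to detect drift between runs.
    import hashlib, pathlib

    def config_hash(config_dir):
        h = hashlib.sha256()
        for p in sorted(pathlib.Path(config_dir).rglob('*')):
            if p.is_file():
                h.update(p.read_bytes())
        return h.hexdigest()

    d = pathlib.Path('/var/lib/openstack/config/containers')
    if d.is_dir():
        print(config_hash(d))
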
Dec 05 01:34:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:49 compute-0 sudo[347782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzwyypdvhmahuvmijmgoxwonpjabyxi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898488.982126-1472-96845072137677/AnsiballZ_edpm_container_manage.py'
Dec 05 01:34:49 compute-0 sudo[347782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:49 compute-0 python3[347784]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:34:49 compute-0 ceph-mon[192914]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:50 compute-0 podman[347817]: 2025-12-05 01:34:50.15552359 +0000 UTC m=+0.100069843 container create 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:34:50 compute-0 podman[347817]: 2025-12-05 01:34:50.101656557 +0000 UTC m=+0.046202860 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 05 01:34:50 compute-0 python3[347784]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
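
[Note] nova_compute itself is created privileged, with host networking and the host PID namespace, and runs kolla_start under KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: on start it copies the entries listed in /var/lib/kolla/config_files/config.json into place and then execs the configured command. The contract reduced to a sketch (kolla's real set_configs.py also handles owner, permissions, and globbing; the function is only defined here, not invoked):

    # Sketch of the kolla_start contract (simplified; not kolla's code).
    import json, os, shutil

    def kolla_start(config_json='/var/lib/kolla/config_files/config.json'):
        with open(config_json) as f:
            cfg = json.load(f)
        for item in cfg.get('config_files', []):
            shutil.copy(item['source'], item['dest'])  # COPY_ALWAYS refresh
        argv = cfg['command'].split()
        os.execvp(argv[0], argv)
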
Dec 05 01:34:50 compute-0 sudo[347782]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:50 compute-0 ceph-mon[192914]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:51 compute-0 sudo[348002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubmakkghkybwukzhxizumrakueixtthn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898490.671608-1480-279292055560367/AnsiballZ_stat.py'
Dec 05 01:34:51 compute-0 sudo[348002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:51 compute-0 python3.9[348004]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:34:51 compute-0 sudo[348002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:51 compute-0 podman[348031]: 2025-12-05 01:34:51.709673068 +0000 UTC m=+0.116497891 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 05 01:34:51 compute-0 auditd[704]: Audit daemon rotating log files
Dec 05 01:34:52 compute-0 sudo[348176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuohviwfjcdbgdepbivirertpnvrngcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898491.7228994-1489-52565373086367/AnsiballZ_file.py'
Dec 05 01:34:52 compute-0 sudo[348176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:52 compute-0 python3.9[348178]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:34:52 compute-0 sudo[348176]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:53 compute-0 sudo[348339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjotoujurzbxmnkkalnifehrlrxllat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898492.7316506-1489-225511827972435/AnsiballZ_copy.py'
Dec 05 01:34:53 compute-0 sudo[348339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:53 compute-0 podman[348301]: 2025-12-05 01:34:53.497064557 +0000 UTC m=+0.155694636 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:34:53 compute-0 python3.9[348352]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898492.7316506-1489-225511827972435/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:34:53 compute-0 sudo[348339]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:53 compute-0 ceph-mon[192914]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:54 compute-0 sudo[348426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxmvqgnmndpbaqxxlkqkhoeahdjgrwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898492.7316506-1489-225511827972435/AnsiballZ_systemd.py'
Dec 05 01:34:54 compute-0 sudo[348426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:54 compute-0 python3.9[348428]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:34:54 compute-0 systemd[1]: Reloading.
Dec 05 01:34:54 compute-0 systemd-sysv-generator[348458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:34:54 compute-0 systemd-rc-local-generator[348454]: /etc/rc.d/rc.local is not marked executable, skipping.
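[editor's note] The "Reloading." pass above, with the SysV and rc.local generators re-running, is what ansible-systemd's daemon_reload=True amounts to on the host. In miniature:

    # What daemon_reload=True does on the target host (sketch).
    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)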
Dec 05 01:34:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.897078) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494897132, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1566, "num_deletes": 251, "total_data_size": 2589134, "memory_usage": 2619608, "flush_reason": "Manual Compaction"}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 05 01:34:54 compute-0 ceph-mon[192914]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494911515, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2554353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14737, "largest_seqno": 16302, "table_properties": {"data_size": 2547042, "index_size": 4382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14509, "raw_average_key_size": 19, "raw_value_size": 2532543, "raw_average_value_size": 3422, "num_data_blocks": 200, "num_entries": 740, "num_filter_entries": 740, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898318, "oldest_key_time": 1764898318, "file_creation_time": 1764898494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14480 microseconds, and 5655 cpu microseconds.
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.911568) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2554353 bytes OK
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.911586) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914754) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914766) EVENT_LOG_v1 {"time_micros": 1764898494914763, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914779) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2582361, prev total WAL file size 2582361, number of live WAL files 2.
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.916451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2494KB)], [35(6794KB)]
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494916561, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9512073, "oldest_snapshot_seqno": -1}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3972 keys, 7756297 bytes, temperature: kUnknown
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494976439, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7756297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7727350, "index_size": 17893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97004, "raw_average_key_size": 24, "raw_value_size": 7653006, "raw_average_value_size": 1926, "num_data_blocks": 759, "num_entries": 3972, "num_filter_entries": 3972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.976850) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7756297 bytes
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.979493) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4486, records dropped: 514 output_compression: NoCompression
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.979525) EVENT_LOG_v1 {"time_micros": 1764898494979510, "job": 16, "event": "compaction_finished", "compaction_time_micros": 60125, "compaction_time_cpu_micros": 34748, "output_level": 6, "num_output_files": 1, "total_output_size": 7756297, "num_input_records": 4486, "num_output_records": 3972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494980768, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494986379, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.916130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
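[editor's note] The JOB 15/16 figures in the ceph-mon rocksdb block above are internally consistent and can be reproduced from the EVENT_LOG values alone. Note that bytes per microsecond is the same as MB per second, which is why the MB/sec figures fall out directly:

    # Sanity-check the rocksdb JOB 15/16 numbers logged above.
    # All inputs copied from the EVENT_LOG lines; bytes and microseconds.
    l0_in = 2554353       # table #37, the flushed L0 input file
    total_in = 9512073    # compaction input_data_size (L0 + L6)
    out = 7756297         # table #38, the compacted L6 output file
    t_us = 60125          # compaction_time_micros

    print(round(out / l0_in, 1))               # 3.0  -> write-amplify(3.0)
    print(round((total_in + out) / l0_in, 1))  # 6.8  -> read-write-amplify(6.8)
    print(round(total_in / t_us, 1))           # 158.2 -> "MB/sec: 158.2 rd"
    print(round(out / t_us, 1))                # 129.0 -> "129.0 wr"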
Dec 05 01:34:55 compute-0 sudo[348426]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:55 compute-0 sudo[348536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnrpaqdmfvcmzkezrzkhotosfzcaqjtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898492.7316506-1489-225511827972435/AnsiballZ_systemd.py'
Dec 05 01:34:55 compute-0 sudo[348536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:34:55 compute-0 python3.9[348538]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
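[editor's note] The ansible-systemd invocation above, with state=restarted and enabled=True, corresponds roughly to enabling the freshly installed unit and restarting it; the "Reloading." pass that follows is systemd picking up the new unit file. A sketch:

    # Rough host-side equivalent of ansible-systemd state=restarted,
    # enabled=True for the unit installed above (sketch).
    import subprocess

    unit = "edpm_nova_compute.service"
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "restart", unit], check=True)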
Dec 05 01:34:55 compute-0 systemd[1]: Reloading.
Dec 05 01:34:56 compute-0 systemd-sysv-generator[348566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:34:56 compute-0 systemd-rc-local-generator[348562]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:34:56 compute-0 systemd[1]: Starting nova_compute container...
Dec 05 01:34:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
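[editor's note] The repeated xfs warnings above are the 32-bit time_t limit: 0x7fffffff seconds after the Unix epoch is the cutoff the kernel is naming. Quick check:

    # Decode the 0x7fffffff cutoff from the xfs warnings above.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00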
Dec 05 01:34:56 compute-0 podman[348577]: 2025-12-05 01:34:56.656419659 +0000 UTC m=+0.158252437 container init 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:34:56 compute-0 podman[348577]: 2025-12-05 01:34:56.679014909 +0000 UTC m=+0.180847667 container start 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Dec 05 01:34:56 compute-0 podman[348577]: nova_compute
Dec 05 01:34:56 compute-0 nova_compute[348591]: + sudo -E kolla_set_configs
Dec 05 01:34:56 compute-0 systemd[1]: Started nova_compute container.
Dec 05 01:34:56 compute-0 sudo[348536]: pam_unix(sudo:session): session closed for user root
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Validating config file
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying service configuration files
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /etc/ceph
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Creating directory /etc/ceph
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Writing out command to execute
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
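[editor's note] The copy sequence above is kolla_set_configs executing /var/lib/kolla/config_files/config.json under KOLLA_CONFIG_STRATEGY=COPY_ALWAYS. An abridged, illustrative reconstruction of that file (field names follow kolla's documented config.json schema; the specific entries and permissions are assumptions based on the paths logged above):

    # Illustrative config.json content, shown as a Python dict
    # (the real file is JSON). One config_files entry per "Copying"
    # line above; permissions here are assumed, not logged.
    config = {
        "command": "nova-compute",   # matches /run_command read below
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            # ... one entry per remaining "Copying ..." line above ...
        ],
    }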
Dec 05 01:34:56 compute-0 nova_compute[348591]: ++ cat /run_command
Dec 05 01:34:56 compute-0 nova_compute[348591]: + CMD=nova-compute
Dec 05 01:34:56 compute-0 nova_compute[348591]: + ARGS=
Dec 05 01:34:56 compute-0 nova_compute[348591]: + sudo kolla_copy_cacerts
Dec 05 01:34:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:56 compute-0 nova_compute[348591]: + [[ ! -n '' ]]
Dec 05 01:34:56 compute-0 nova_compute[348591]: + . kolla_extend_start
Dec 05 01:34:56 compute-0 nova_compute[348591]: Running command: 'nova-compute'
Dec 05 01:34:56 compute-0 nova_compute[348591]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 01:34:56 compute-0 nova_compute[348591]: + umask 0022
Dec 05 01:34:56 compute-0 nova_compute[348591]: + exec nova-compute
Dec 05 01:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:34:58 compute-0 ceph-mon[192914]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:58 compute-0 python3.9[348753]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.272 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.273 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.274 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.274 348595 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
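[editor's note] The three "Loaded VIF plugin class" lines are os_vif discovering its plugins (via stevedore entry points) when the service initializes the library. On the nova-compute side this boils down to a single documented call:

    # What the os_vif startup lines correspond to: plugin discovery
    # happens once, at initialize() time (sketch of the documented API).
    import os_vif

    os_vif.initialize()   # loads linux_bridge, noop, ovs as logged above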
Dec 05 01:34:59 compute-0 ceph-mon[192914]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.544 348595 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.577 348595 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.578 348595 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
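[editor's note] The grep returning 1 above is a capability probe, not an error: the iscsiadm binary is searched for the node.session.scan option to decide whether manual scan is supported, and exit code 1 (string not found) means it is not. A sketch of the probe using the same oslo API the log cites, where exit code 1 is treated as an acceptable outcome:

    # Feature probe for iscsiadm manual-scan support (sketch).
    from oslo_concurrency import processutils

    # grep exits 1 when the string is absent, so accept both 0 and 1.
    out, err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])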
Dec 05 01:34:59 compute-0 podman[158197]: time="2025-12-05T01:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42588 "" "Go-http-client/1.1"
Dec 05 01:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
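[editor's note] The two GET lines above are the libpod REST API being queried over podman's unix socket (by a Go client, per the user agent). The same request can be reproduced directly; the socket path below is the conventional rootful default and is an assumption, as is the shortened query string:

    # Reproduce the libpod API call logged above over podman's socket.
    # /run/podman/podman.sock is the usual rootful default (assumption).
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)   # 200, as logged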
Dec 05 01:34:59 compute-0 python3.9[348907]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.279 348595 INFO nova.virt.driver [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.434 348595 INFO nova.compute.provider_config [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.453 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.453 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
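
Everything up to this point is the [DEFAULT] group of nova-compute's effective configuration, dumped at startup. A minimal sketch of the mechanism, assuming only that oslo.config is installed (the two options registered below are illustrative, not nova's real registration code): ConfigOpts.log_opt_values(), the cfg.py:2602 call site cited on each line above, iterates every registered option and logs "name = value" at the given level, masking options registered with secret=True as **** (which is why transport_url is starred out above).

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('report_interval', default=10),
        cfg.StrOpt('transport_url', secret=True),   # secret=True -> logged as ****
    ])

    CONF([])                                 # parse argv / config files (none here)
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits "report_interval = 10", etc.
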
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
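
Group-scoped options such as api.auth_strategy above live under an INI section named after the group in nova.conf ([api], [cache], and so on) rather than in [DEFAULT]. A small sketch of that mapping, assuming only oslo.config; the option registration and the throwaway config file below are illustrative:

    import tempfile

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('auth_strategy', default='keystone')],
                       group='api')

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[api]\nauth_strategy = keystone\n')

    CONF(['--config-file', f.name])
    print(CONF.api.auth_strategy)            # -> keystone
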
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
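
The cache.* block above is consumed through oslo.cache (dogpile.cache under the hood); cache.backend = oslo_cache.dict is an in-process dictionary backend, and the memcache_* options only take effect for memcached backends. A rough sketch of wiring a cache region from these options, assuming oslo.cache is installed; the call order and overrides are my reading of the oslo.cache usage pattern, not nova's code:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)                    # registers the cache.* options
    CONF([], project='demo')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')
    CONF.set_override('enabled', True, group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    region.set('answer', 42)                 # cache.expiration_time governs TTL
    print(region.get('answer'))              # -> 42
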
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
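
The cinder.* options above are standard keystoneauth client settings: auth_type = password selects the password auth plugin, and catalog_info = volumev3:cinderv3:internalURL means "find service type volumev3, service name cinderv3, internal endpoint" in the Keystone catalog. A hedged sketch with keystoneauth1; every URL and credential here is a placeholder, and the endpoint lookup needs a live Keystone, so it is left commented:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='http://keystone.example:5000/v3',   # placeholder
        username='nova', password='secret',           # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    sess = session.Session(auth=auth, verify=True)    # cinder.insecure = False

    # Requires a reachable Keystone; mirrors catalog_info above:
    # endpoint = sess.get_endpoint(service_type='volumev3', interface='internal')
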
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
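
The database.* values above are oslo.db settings that map more or less directly onto SQLAlchemy's connection pool (the api_database.* block that follows is the same set of knobs pointed at the API database). A sketch of an equivalent engine, assuming only SQLAlchemy; the real DSN is masked as **** in the log, so a throwaway SQLite URL stands in:

    from sqlalchemy import create_engine
    from sqlalchemy.pool import QueuePool

    engine = create_engine(
        'sqlite:///nova-demo.db',   # placeholder for database.connection (****)
        poolclass=QueuePool,
        pool_size=5,                # database.max_pool_size
        max_overflow=50,            # database.max_overflow
        pool_recycle=3600,          # database.connection_recycle_time
    )
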
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 WARNING oslo_config.cfg [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 01:35:00 compute-0 nova_compute[348591]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 01:35:00 compute-0 nova_compute[348591]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 05 01:35:00 compute-0 nova_compute[348591]: and ``live_migration_inbound_addr`` respectively.
Dec 05 01:35:00 compute-0 nova_compute[348591]: ).  Its value may be silently ignored in the future.
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_secret_uuid        = cbd280d3-cbd8-528b-ace6-2b3a887cdcee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
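The dump that ends with the asterisk rule above is oslo.config's log_opt_values() walking every registered option at DEBUG level when the service starts. A minimal sketch of the same mechanism, assuming only that the oslo.config package is installed (the option below mirrors a real one from the dump):

import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
# Register a single option mirroring os_vif_ovs.ovs_vsctl_timeout = 120 above.
CONF.register_opts([cfg.IntOpt('ovs_vsctl_timeout', default=120)],
                   group='os_vif_ovs')

CONF([])  # parse an empty command line so the config object is initialized
# Emits one DEBUG line per registered option, bracketed by '****' rules,
# in the same "group.option = value" format seen in the startup log above.
CONF.log_opt_values(LOG, logging.DEBUG)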
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.612 348595 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.634 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.654 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4b97d32190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.660 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4b97d32190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.662 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Connection event '1' reason 'None'
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.682 348595 WARNING nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.682 348595 DEBUG nova.virt.libvirt.volume.mount [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
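The ComputeHostNotFound warning above is the normal first-start case: nova-compute cannot update a service record that does not exist yet, and the record is created shortly afterwards when the node registers itself. A hedged sketch of watching for that record from the API side, assuming openstacksdk and a configured clouds.yaml entry (the cloud name 'overcloud' is a placeholder):

import openstack

conn = openstack.connect(cloud='overcloud')

# Once compute-0 has registered, its nova-compute service appears here;
# until then the listing simply lacks the host, matching the warning above.
for svc in conn.compute.services():
    print(svc.binary, svc.host, svc.state, svc.status)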
Dec 05 01:35:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:00 compute-0 python3.9[349080]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: 
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:35:01 compute-0 openstack_network_exporter[160350]: 
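The exporter errors above all reduce to one condition: no *.ctl control sockets for ovsdb-server or ovn-northd, which is expected when those daemons are not running on this node. A quick check of the same condition, assuming the default OVS runtime directory /var/run/openvswitch (the path is an assumption; adjust for the deployment):

import glob

# appctl-style tools locate daemons via their <name>.<pid>.ctl sockets.
sockets = glob.glob('/var/run/openvswitch/*.ctl')
if sockets:
    for path in sockets:
        print('found control socket:', path)
else:
    print('no OVS control sockets present, matching the exporter errors')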
Dec 05 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.767 348595 INFO nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 01:35:01 compute-0 nova_compute[348591]: 
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <host>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <uuid>6c9ead2d-8495-4e2b-9845-f862956e441e</uuid>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <cpu>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <arch>x86_64</arch>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model>EPYC-Rome-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <vendor>AMD</vendor>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <microcode version='16777317'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <signature family='23' model='49' stepping='0'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='x2apic'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='tsc-deadline'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='osxsave'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='hypervisor'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='tsc_adjust'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='spec-ctrl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='stibp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='arch-capabilities'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='cmp_legacy'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='topoext'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='virt-ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='lbrv'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='tsc-scale'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='vmcb-clean'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='pause-filter'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='pfthreshold'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='svme-addr-chk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='rdctl-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='mds-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature name='pschange-mc-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <pages unit='KiB' size='4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <pages unit='KiB' size='2048'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <pages unit='KiB' size='1048576'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </cpu>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <power_management>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <suspend_mem/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </power_management>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <iommu support='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <migration_features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <live/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <uri_transports>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <uri_transport>tcp</uri_transport>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <uri_transport>rdma</uri_transport>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </uri_transports>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </migration_features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <topology>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <cells num='1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <cell id='0'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <memory unit='KiB'>7864320</memory>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <distances>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <sibling id='0' value='10'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           </distances>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           <cpus num='8'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:           </cpus>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         </cell>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </cells>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </topology>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <cache>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </cache>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <secmodel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model>selinux</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <doi>0</doi>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </secmodel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <secmodel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model>dac</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <doi>0</doi>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </secmodel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </host>
Dec 05 01:35:01 compute-0 nova_compute[348591]: 
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <guest>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <os_type>hvm</os_type>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <arch name='i686'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <wordsize>32</wordsize>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <domain type='qemu'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <domain type='kvm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </arch>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <pae/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <nonpae/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <acpi default='on' toggle='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <apic default='on' toggle='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <cpuselection/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <deviceboot/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <disksnapshot default='on' toggle='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <externalSnapshot/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </guest>
Dec 05 01:35:01 compute-0 nova_compute[348591]: 
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <guest>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <os_type>hvm</os_type>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <arch name='x86_64'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <wordsize>64</wordsize>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <domain type='qemu'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <domain type='kvm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </arch>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <acpi default='on' toggle='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <apic default='on' toggle='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <cpuselection/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <deviceboot/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <disksnapshot default='on' toggle='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <externalSnapshot/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </guest>
Dec 05 01:35:01 compute-0 nova_compute[348591]: 
Dec 05 01:35:01 compute-0 nova_compute[348591]: </capabilities>
Dec 05 01:35:01 compute-0 nova_compute[348591]: 
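The <capabilities> document above is what libvirt reports for this host; nova parses it to learn the CPU model, topology, and supported machine types. A minimal sketch that fetches the same XML over the qemu:///system connection used above, assuming the libvirt-python bindings are present on the host:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
caps = ET.fromstring(conn.getCapabilities())
# These match the values logged above: x86_64 / EPYC-Rome-v4.
print('arch :', caps.findtext('./host/cpu/arch'))
print('model:', caps.findtext('./host/cpu/model'))
conn.close()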
Dec 05 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.779 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.831 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 01:35:01 compute-0 nova_compute[348591]: <domainCapabilities>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <domain>kvm</domain>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <arch>i686</arch>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <vcpu max='240'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <iothreads supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <os supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <enum name='firmware'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <loader supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>rom</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pflash</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='readonly'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>yes</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='secure'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </loader>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </os>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <cpu>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='maximumMigratable'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <vendor>AMD</vendor>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='succor'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='custom' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-128'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-256'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-512'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 ceph-mon[192914]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='IvyBridge'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='KnightsMill'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SierraForest'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Snowridge'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='athlon'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='athlon-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='core2duo'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='core2duo-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='coreduo'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='coreduo-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='n270'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='n270-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='phenom'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='phenom-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </cpu>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <memoryBacking supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <enum name='sourceType'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <value>file</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <value>anonymous</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <value>memfd</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </memoryBacking>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <devices>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <disk supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='diskDevice'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>disk</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>cdrom</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>floppy</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>lun</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>ide</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>fdc</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>sata</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 podman[349174]: 2025-12-05 01:35:01.933371663 +0000 UTC m=+0.128356273 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </disk>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <graphics supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vnc</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>egl-headless</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </graphics>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <video supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='modelType'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vga</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>cirrus</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>none</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>bochs</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>ramfb</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </video>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <hostdev supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='mode'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>subsystem</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='startupPolicy'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>mandatory</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>requisite</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>optional</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='subsysType'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pci</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='capsType'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='pciBackend'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </hostdev>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <rng supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>random</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>egd</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </rng>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <filesystem supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='driverType'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>path</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>handle</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>virtiofs</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </filesystem>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <tpm supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>tpm-tis</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>tpm-crb</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>emulator</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>external</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='backendVersion'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>2.0</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </tpm>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <redirdev supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </redirdev>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <channel supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </channel>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <crypto supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='model'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>qemu</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </crypto>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <interface supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='backendType'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>passt</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </interface>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <panic supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>isa</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>hyperv</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </panic>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <console supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>null</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vc</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>dev</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>file</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pipe</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>stdio</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>udp</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>tcp</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>qemu-vdagent</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </console>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </devices>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <features>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <gic supported='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <genid supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <backup supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <async-teardown supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <ps2 supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <sev supported='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <sgx supported='no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <hyperv supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='features'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>relaxed</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vapic</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>spinlocks</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vpindex</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>runtime</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>synic</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>stimer</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>reset</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>vendor_id</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>frequencies</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>reenlightenment</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>tlbflush</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>ipi</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>avic</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>emsr_bitmap</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>xmm_input</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <defaults>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </defaults>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </hyperv>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <launchSecurity supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='sectype'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>tdx</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </launchSecurity>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </features>
Dec 05 01:35:01 compute-0 nova_compute[348591]: </domainCapabilities>
Dec 05 01:35:01 compute-0 nova_compute[348591]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
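The <domainCapabilities> document above is what nova's _get_domain_capabilities helper (nova/virt/libvirt/host.py:1037, per the trailing log line) receives from libvirt's getDomainCapabilities API for each (arch, machine type) pair it probes; the next record repeats the probe for arch=i686. A minimal sketch of the same query with the libvirt-python bindings follows. The connection URI and the arch/machine arguments are assumptions for illustration, drawn from this log (the emulator path /usr/libexec/qemu-kvm appears in the dump itself), and would need adjusting for another host:

    #!/usr/bin/env python3
    # Sketch: fetch the domainCapabilities XML that nova logs above, using
    # the libvirt-python bindings. Assumes local access to qemu:///system.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',  # <path> from the dump above
        arch='x86_64',                        # assumed; the log also probes i686
        machine='q35',                        # assumed; log shows pc-q35-rhel9.8.0
        virttype='kvm',
    )
    root = ET.fromstring(caps_xml)

    # List each custom-mode CPU model and whether it is usable on this host,
    # mirroring the <mode name='custom'> section of the dump.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.get('usable'), model.text)
    conn.close()

The same document can be pulled from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm. That is a quick way to see why a model such as Skylake-Server is reported usable='no' here: the <blockers> element beneath it names the host-missing features (avx512f, pku, and so on) that rule the model out.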
Dec 05 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.843 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 01:35:01 compute-0 nova_compute[348591]: <domainCapabilities>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <domain>kvm</domain>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <arch>i686</arch>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <vcpu max='4096'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <iothreads supported='yes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <os supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <enum name='firmware'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <loader supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>rom</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>pflash</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='readonly'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>yes</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='secure'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </loader>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   </os>
Dec 05 01:35:01 compute-0 nova_compute[348591]:   <cpu>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <enum name='maximumMigratable'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <vendor>AMD</vendor>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='succor'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:01 compute-0 nova_compute[348591]:     <mode name='custom' supported='yes'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Denverton-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-v3'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='EPYC-v4'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
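In the mode name='custom' listing, a named model is marked usable='yes' only when this QEMU/KVM host can provide every feature the model requires; otherwise the matching blockers element enumerates the missing features (here mostly xsaves, erms, pcid and the AVX-512 family, consistent with the host-model block above disabling xsaves). A minimal sketch, under the same libvirt-python assumptions as the earlier example, that summarises which custom models are usable and why the rest are blocked:

    # Sketch: summarise the <mode name='custom'> section of the
    # domain-capabilities XML: usable models vs. models blocked by
    # missing host features. Assumptions as in the previous sketch.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0))
    conn.close()

    custom = caps.find(".//cpu/mode[@name='custom']")
    usable = sorted(m.text for m in custom.findall('model')
                    if m.get('usable') == 'yes')
    print('usable models:', ', '.join(usable))
    for blk in custom.findall('blockers'):
        feats = ', '.join(f.get('name') for f in blk.findall('feature'))
        print('%s blocked by: %s' % (blk.get('model'), feats))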
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-128'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-256'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx10-512'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:01 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:01 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 anacron[91608]: Job `cron.weekly' started
Dec 05 01:35:02 compute-0 anacron[91608]: Job `cron.weekly' terminated
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </cpu>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <memoryBacking supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <enum name='sourceType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>anonymous</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>memfd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </memoryBacking>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <disk supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='diskDevice'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>disk</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cdrom</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>floppy</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>lun</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>fdc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>sata</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </disk>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <graphics supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vnc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egl-headless</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </graphics>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <video supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='modelType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vga</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cirrus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>none</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>bochs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ramfb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </video>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hostdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='mode'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>subsystem</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='startupPolicy'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>mandatory</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>requisite</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>optional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='subsysType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pci</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='capsType'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='pciBackend'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hostdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <rng supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>random</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </rng>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <filesystem supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='driverType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>path</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>handle</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtiofs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </filesystem>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <tpm supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-tis</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-crb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emulator</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>external</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendVersion'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>2.0</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </tpm>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <redirdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </redirdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <channel supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </channel>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <crypto supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </crypto>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <interface supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>passt</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </interface>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <panic supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>isa</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>hyperv</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </panic>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <console supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>null</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dev</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pipe</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stdio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>udp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tcp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu-vdagent</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </console>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <features>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <gic supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <genid supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backup supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <async-teardown supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <ps2 supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sev supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sgx supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hyperv supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='features'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>relaxed</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vapic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>spinlocks</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vpindex</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>runtime</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>synic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stimer</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reset</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vendor_id</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>frequencies</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reenlightenment</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tlbflush</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ipi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>avic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emsr_bitmap</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>xmm_input</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hyperv>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <launchSecurity supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='sectype'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tdx</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </launchSecurity>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </features>
Dec 05 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec 05 01:35:02 compute-0 nova_compute[348591]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.926 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.934 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 01:35:02 compute-0 nova_compute[348591]: <domainCapabilities>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <domain>kvm</domain>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <arch>x86_64</arch>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <vcpu max='240'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <iothreads supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <os supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <enum name='firmware'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <loader supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>rom</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pflash</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='readonly'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>yes</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='secure'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </loader>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </os>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <cpu>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='maximumMigratable'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <vendor>AMD</vendor>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='succor'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='custom' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-128'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-256'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-512'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:02 compute-0 sudo[349270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyskwokawewsjcbcrmzkjvtlhfojthre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898501.3092108-1549-180925429006717/AnsiballZ_podman_container.py'
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v3'>
Dec 05 01:35:02 compute-0 sudo[349270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </cpu>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <memoryBacking supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <enum name='sourceType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>anonymous</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>memfd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </memoryBacking>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <disk supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='diskDevice'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>disk</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cdrom</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>floppy</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>lun</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ide</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>fdc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>sata</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </disk>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <graphics supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vnc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egl-headless</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </graphics>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <video supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='modelType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vga</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cirrus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>none</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>bochs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ramfb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </video>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hostdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='mode'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>subsystem</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='startupPolicy'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>mandatory</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>requisite</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>optional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='subsysType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pci</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='capsType'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='pciBackend'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hostdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <rng supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>random</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </rng>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <filesystem supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='driverType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>path</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>handle</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtiofs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </filesystem>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <tpm supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-tis</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-crb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emulator</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>external</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendVersion'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>2.0</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </tpm>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <redirdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </redirdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <channel supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </channel>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <crypto supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </crypto>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <interface supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>passt</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </interface>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <panic supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>isa</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>hyperv</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </panic>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <console supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>null</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dev</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pipe</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stdio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>udp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tcp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu-vdagent</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </console>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <features>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <gic supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <genid supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backup supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <async-teardown supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <ps2 supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sev supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sgx supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hyperv supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='features'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>relaxed</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vapic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>spinlocks</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vpindex</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>runtime</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>synic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stimer</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reset</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vendor_id</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>frequencies</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reenlightenment</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tlbflush</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ipi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>avic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emsr_bitmap</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>xmm_input</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hyperv>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <launchSecurity supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='sectype'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tdx</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </launchSecurity>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </features>
Dec 05 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec 05 01:35:02 compute-0 nova_compute[348591]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
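The XML document above is the per-emulator domainCapabilities report that nova-compute requests from libvirt when it probes the host. As a minimal illustrative sketch (not part of the captured log), the same document can be fetched with the libvirt Python bindings, assuming a local qemu:///system connection and the emulator path, arch, and machine type that the surrounding DEBUG lines report:

    # Illustrative sketch only: retrieve the domainCapabilities XML that
    # nova-compute logs above, using the libvirt Python bindings.
    import libvirt

    conn = libvirt.open("qemu:///system")          # local QEMU/KVM hypervisor
    caps_xml = conn.getDomainCapabilities(
        emulatorbin="/usr/libexec/qemu-kvm",       # <path> from the dump
        arch="x86_64",
        machine="q35",                             # machine_type from the DEBUG line
        virttype="kvm",
    )
    print(caps_xml)                                # same XML as logged here
    conn.close()

Parsing the <blockers> elements of this document is what tells a management layer which host-missing features (for example erms, pcid, or the avx512* group seen above) make a given named CPU model report usable='no'.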
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.087 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 01:35:02 compute-0 nova_compute[348591]: <domainCapabilities>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <domain>kvm</domain>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <arch>x86_64</arch>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <vcpu max='4096'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <iothreads supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <os supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <enum name='firmware'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>efi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <loader supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>rom</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pflash</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='readonly'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>yes</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='secure'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>yes</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>no</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </loader>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </os>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <cpu>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='maximumMigratable'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>on</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>off</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <vendor>AMD</vendor>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='succor'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <mode name='custom' supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Denverton-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='auto-ibrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amd-psfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='stibp-always-on'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='EPYC-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-128'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-256'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx10-512'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='prefetchiti'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Haswell-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512er'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512pf'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fma4'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tbm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xop'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='amx-tile'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-bf16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-fp16'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bitalg'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrc'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fzrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='la57'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='taa-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xfd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ifma'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cmpccxadd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fbsdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='fsrs'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ibrs-all'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mcdt-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pbrsb-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='psdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='serialize'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vaes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='hle'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='rtm'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512bw'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512cd'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512dq'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512f'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='avx512vl'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='invpcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pcid'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='pku'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='mpx'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='core-capability'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='split-lock-detect'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='cldemote'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='erms'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='gfni'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdir64b'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='movdiri'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='xsaves'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='athlon-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='core2duo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='coreduo-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='n270-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='ss'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <blockers model='phenom-v1'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnow'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <feature name='3dnowext'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </blockers>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </mode>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </cpu>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <memoryBacking supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <enum name='sourceType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>anonymous</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <value>memfd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </memoryBacking>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <disk supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='diskDevice'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>disk</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cdrom</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>floppy</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>lun</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>fdc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>sata</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </disk>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <graphics supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vnc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egl-headless</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </graphics>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <video supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='modelType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vga</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>cirrus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>none</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>bochs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ramfb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </video>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hostdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='mode'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>subsystem</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='startupPolicy'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>mandatory</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>requisite</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>optional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='subsysType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pci</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>scsi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='capsType'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='pciBackend'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hostdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <rng supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtio-non-transitional</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>random</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>egd</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </rng>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <filesystem supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='driverType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>path</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>handle</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>virtiofs</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </filesystem>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <tpm supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-tis</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tpm-crb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emulator</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>external</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendVersion'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>2.0</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </tpm>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <redirdev supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='bus'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>usb</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </redirdev>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <channel supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </channel>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <crypto supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendModel'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>builtin</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </crypto>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <interface supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='backendType'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>default</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>passt</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </interface>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <panic supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='model'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>isa</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>hyperv</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </panic>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <console supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='type'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>null</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vc</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pty</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dev</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>file</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>pipe</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stdio</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>udp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tcp</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>unix</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>qemu-vdagent</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>dbus</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </console>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </devices>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   <features>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <gic supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <genid supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <backup supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <async-teardown supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <ps2 supported='yes'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sev supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <sgx supported='no'/>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <hyperv supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='features'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>relaxed</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vapic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>spinlocks</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vpindex</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>runtime</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>synic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>stimer</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reset</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>vendor_id</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>frequencies</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>reenlightenment</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tlbflush</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>ipi</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>avic</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>emsr_bitmap</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>xmm_input</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </defaults>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </hyperv>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     <launchSecurity supported='yes'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       <enum name='sectype'>
Dec 05 01:35:02 compute-0 nova_compute[348591]:         <value>tdx</value>
Dec 05 01:35:02 compute-0 nova_compute[348591]:       </enum>
Dec 05 01:35:02 compute-0 nova_compute[348591]:     </launchSecurity>
Dec 05 01:35:02 compute-0 nova_compute[348591]:   </features>
Dec 05 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec 05 01:35:02 compute-0 nova_compute[348591]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
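The block above is the tail of libvirt's <domainCapabilities> document, which nova fetches through the libvirt API and logs at debug level in _get_domain_capabilities. A minimal sketch of requesting the same document with the libvirt-python binding; the connection URI, emulator path, and machine type below are illustrative assumptions, not values taken from this log.

    # Sketch: fetch a <domainCapabilities> document via libvirt-python.
    # URI, emulator path, and machine type are assumed for illustration.
    import libvirt

    conn = libvirt.open('qemu:///system')
    # positional args: emulatorbin, arch, machine, virttype, flags
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    print(xml)  # the same kind of <domainCapabilities> document logged above
    conn.close()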
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.218 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.219 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.219 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.220 348595 INFO nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Secure Boot support detected
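The repeated lines above are nova probing the host for Secure Boot support before concluding it is available. A sketch of one way to derive that answer from a domainCapabilities document, assuming the check reduces to the <loader> enum named 'secure' reporting 'yes' (the exact logic lives in nova's supports_secure_boot and may differ):

    # Sketch: approximate secure-boot detection from domainCapabilities XML.
    # Assumes the <os>/<loader> element carries an enum named 'secure'.
    import xml.etree.ElementTree as ET

    def supports_secure_boot(domcaps_xml: str) -> bool:
        root = ET.fromstring(domcaps_xml)
        for enum in root.findall('./os/loader/enum'):
            if enum.get('name') == 'secure':
                return any(v.text == 'yes' for v in enum.findall('value'))
        return False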
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.222 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.223 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.238 348595 DEBUG nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.341 348595 INFO nova.virt.node [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id
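Here the service resolves its stable node identity from a file rather than deriving it from the hostname. A minimal sketch of that read, using the path named in the log line; the validation shown is illustrative:

    # Sketch: read and validate the persistent compute node identity.
    import uuid
    from pathlib import Path

    def read_node_identity() -> uuid.UUID:
        text = Path('/var/lib/nova/compute_id').read_text().strip()
        return uuid.UUID(text)  # raises ValueError if the file is corrupt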
Dec 05 01:35:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.397 348595 WARNING nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Compute nodes ['acf26aa2-2fef-4a53-8a44-6cfa2eb15d17'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 05 01:35:02 compute-0 python3.9[349272]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.448 348595 INFO nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 WARNING nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG nova.compute.resource_tracker [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG oslo_concurrency.processutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:35:02 compute-0 sudo[349270]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:35:03 compute-0 ceph-mon[192914]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552625451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.305 348595 DEBUG oslo_concurrency.processutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
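The resource audit shells out to ceph (0.800s here) to size the RBD backend. A sketch of issuing the same command and reading the cluster totals; the command line is copied from the log, and the 'stats'/'total_bytes' keys follow ceph's JSON df format:

    # Sketch: run the same "ceph df" the resource tracker issues and report totals.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print('avail: %.0f GiB of %.0f GiB'
          % (stats['total_avail_bytes'] / 1024**3, stats['total_bytes'] / 1024**3))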
Dec 05 01:35:03 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 05 01:35:03 compute-0 sudo[349464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmzivsavtagodjaxhncwvcinpfzhwver ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898502.8826818-1557-1540948664849/AnsiballZ_systemd.py'
Dec 05 01:35:03 compute-0 sudo[349464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:03 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:35:03 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 05 01:35:03 compute-0 python3.9[349467]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:35:03 compute-0 systemd[1]: Stopping nova_compute container...
Dec 05 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.833 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.834 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.834 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:35:04 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec 05 01:35:04 compute-0 systemd[1]: libpod-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6.scope: Deactivated successfully.
Dec 05 01:35:04 compute-0 systemd[1]: libpod-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6.scope: Consumed 4.018s CPU time.
Dec 05 01:35:04 compute-0 podman[349493]: 2025-12-05 01:35:04.268562697 +0000 UTC m=+0.514251041 container died 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:35:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1552625451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6-userdata-shm.mount: Deactivated successfully.
Dec 05 01:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0-merged.mount: Deactivated successfully.
Dec 05 01:35:04 compute-0 podman[349493]: 2025-12-05 01:35:04.36611938 +0000 UTC m=+0.611807694 container cleanup 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3)
Dec 05 01:35:04 compute-0 podman[349493]: nova_compute
Dec 05 01:35:04 compute-0 podman[349523]: nova_compute
Dec 05 01:35:04 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 05 01:35:04 compute-0 systemd[1]: Stopped nova_compute container.
Dec 05 01:35:04 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.037s CPU time, 18.6M memory peak, read 0B from disk, written 116.0K to disk.
Dec 05 01:35:04 compute-0 systemd[1]: Starting nova_compute container...
Dec 05 01:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:04 compute-0 podman[349534]: 2025-12-05 01:35:04.672472729 +0000 UTC m=+0.171809906 container init 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:35:04 compute-0 podman[349534]: 2025-12-05 01:35:04.68540357 +0000 UTC m=+0.184740747 container start 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:35:04 compute-0 podman[349534]: nova_compute
Dec 05 01:35:04 compute-0 nova_compute[349548]: + sudo -E kolla_set_configs
Dec 05 01:35:04 compute-0 systemd[1]: Started nova_compute container.
Dec 05 01:35:04 compute-0 sudo[349464]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Validating config file
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying service configuration files
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/ceph
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Creating directory /etc/ceph
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Writing out command to execute
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
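The INFO lines above are kolla_set_configs applying the container's config.json: delete the destination, copy the source into place, reset permissions. A condensed sketch of that loop, assuming a simplified config format with source/dest/perm entries (the real tool also handles directories, ownership, and merge strategies):

    # Sketch: the delete/copy/set-permission loop logged above, for plain files
    # only, driven by an assumed, simplified config.json layout:
    #   {"config_files": [{"source": ..., "dest": ..., "perm": "0600"}]}
    import json
    import shutil
    from pathlib import Path

    def apply_config(path='/var/lib/kolla/config_files/config.json'):
        cfg = json.loads(Path(path).read_text())
        for entry in cfg.get('config_files', []):
            dest = Path(entry['dest'])
            if dest.exists():
                print(f'Deleting {dest}')
                dest.unlink()
            print(f"Copying {entry['source']} to {dest}")
            shutil.copy(entry['source'], dest)
            print(f'Setting permission for {dest}')
            dest.chmod(int(entry.get('perm', '0600'), 8))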
Dec 05 01:35:04 compute-0 nova_compute[349548]: ++ cat /run_command
Dec 05 01:35:04 compute-0 nova_compute[349548]: + CMD=nova-compute
Dec 05 01:35:04 compute-0 nova_compute[349548]: + ARGS=
Dec 05 01:35:04 compute-0 nova_compute[349548]: + sudo kolla_copy_cacerts
Dec 05 01:35:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:04 compute-0 nova_compute[349548]: + [[ ! -n '' ]]
Dec 05 01:35:04 compute-0 nova_compute[349548]: + . kolla_extend_start
Dec 05 01:35:04 compute-0 nova_compute[349548]: + echo 'Running command: '\''nova-compute'\'''
Dec 05 01:35:04 compute-0 nova_compute[349548]: Running command: 'nova-compute'
Dec 05 01:35:04 compute-0 nova_compute[349548]: + umask 0022
Dec 05 01:35:04 compute-0 nova_compute[349548]: + exec nova-compute
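The '+'-prefixed lines are the container start script running under shell tracing: it reads the command written to /run_command, echoes it, sets the umask, and execs it, so nova-compute replaces the shell as the container's main process. The same tail, rendered as Python for illustration:

    # Sketch: the end of the start sequence traced above -- read /run_command
    # and replace the current process with it (ARGS is empty in this trace).
    import os

    cmd = open('/run_command').read().strip()   # 'nova-compute'
    print(f"Running command: '{cmd}'")
    os.umask(0o022)
    os.execvp(cmd, [cmd])                       # does not return on success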
Dec 05 01:35:05 compute-0 ceph-mon[192914]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:05 compute-0 sudo[349709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-farwlhgohnvdblklrodwfjbdnekuswpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898505.0739007-1566-80898505707245/AnsiballZ_podman_container.py'
Dec 05 01:35:05 compute-0 sudo[349709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:05 compute-0 python3.9[349711]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 05 01:35:06 compute-0 systemd[1]: Started libpod-conmon-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope.
Dec 05 01:35:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:06 compute-0 podman[349736]: 2025-12-05 01:35:06.21955037 +0000 UTC m=+0.194631852 container init 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 01:35:06 compute-0 podman[349736]: 2025-12-05 01:35:06.244330881 +0000 UTC m=+0.219412393 container start 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec 05 01:35:06 compute-0 python3.9[349711]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Applying nova statedir ownership
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 05 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Nova statedir ownership complete
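The nova_compute_init pass above walks /var/lib/nova, re-owning anything not already 42436:42436 and skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP, which is why /var/lib/nova/compute_id keeps its original owner. A condensed sketch of that walk (the SELinux relabelling step logged above is omitted):

    # Sketch: the ownership pass described by the nova_compute_init lines above.
    import os

    TARGET = (42436, 42436)
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '/var/lib/nova/compute_id')

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path == SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != TARGET:
                print(f'Changing ownership of {path} to {TARGET[0]}:{TARGET[1]}')
                os.lchown(path, *TARGET)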
Dec 05 01:35:06 compute-0 systemd[1]: libpod-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope: Deactivated successfully.
Dec 05 01:35:06 compute-0 podman[349758]: 2025-12-05 01:35:06.355222026 +0000 UTC m=+0.060985163 container died 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2-userdata-shm.mount: Deactivated successfully.
Dec 05 01:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd-merged.mount: Deactivated successfully.
Dec 05 01:35:06 compute-0 podman[349766]: 2025-12-05 01:35:06.410635542 +0000 UTC m=+0.074663884 container cleanup 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:35:06 compute-0 systemd[1]: libpod-conmon-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope: Deactivated successfully.
Dec 05 01:35:06 compute-0 sudo[349709]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.899 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 05 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
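os_vif reports loading its three plugin classes here; discovery goes through Python entry points. A sketch that enumerates the same plugins with stevedore, assuming the 'os_vif' entry-point namespace that os-vif registers its plugins under:

    # Sketch: list VIF plugins via stevedore (assumed namespace: 'os_vif').
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='os_vif')
    for ext in mgr:
        print(f"Loaded VIF plugin class {ext.plugin!r} with name '{ext.name}'")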
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.027 349552 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.056 349552 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.057 349552 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
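The failed grep above is a capability probe, not an error: the iscsiadm binary is searched for the string node.session.scan, and exit status 1 simply means this build has no manual-scan support, so the command is not retried. The probe in miniature:

    # Sketch: the manual-scan probe behind the two lines above; rc 0 means the
    # option string exists in the binary, rc 1 (as here) means it does not.
    import subprocess

    rc = subprocess.call(
        ['grep', '-F', 'node.session.scan', '/sbin/iscsiadm'],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    print('manual iSCSI scan supported' if rc == 0 else 'not supported (rc=%d)' % rc)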
Dec 05 01:35:07 compute-0 sshd-session[317656]: Connection closed by 192.168.122.30 port 40830
Dec 05 01:35:07 compute-0 sshd-session[317653]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:35:07 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 05 01:35:07 compute-0 systemd[1]: session-56.scope: Consumed 3min 55.336s CPU time.
Dec 05 01:35:07 compute-0 systemd-logind[792]: Session 56 logged out. Waiting for processes to exit.
Dec 05 01:35:07 compute-0 systemd-logind[792]: Removed session 56.
Dec 05 01:35:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.621 349552 INFO nova.virt.driver [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.747 349552 INFO nova.compute.provider_config [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.782 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
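Everything above this point is the service's startup dump of its [DEFAULT] options: with debug = True and log_options = True (both visible in the dump), oslo.config walks every registered option and emits one DEBUG line per value, masking options registered as secret, which is why transport_url prints as ****. A minimal standalone sketch of that mechanism, using made-up option names rather than Nova's real ones:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('example_flag', default=True),            # hypothetical option
        cfg.StrOpt('example_url', secret=True,                # secret=True is what masks
                   default='rabbit://user:pass@host/'),       # values like transport_url as ****
    ])
    CONF(args=[], project='demo')

    # This call emits one DEBUG line per registered option,
    # in exactly the "name = value" shape seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)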
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
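From here on the source position in the dump shifts from cfg.py:2602 to cfg.py:2609, which marks the transition from [DEFAULT] options to group members: grouped options are logged with a group.option prefix, as with oslo_concurrency.lock_path just above. A short sketch, again with stand-in names, of how such a group is registered and read:

    from oslo_config import cfg

    CONF = cfg.CONF
    group = cfg.OptGroup('demo_group')                 # stand-in for oslo_concurrency
    CONF.register_group(group)
    CONF.register_opts(
        [cfg.StrOpt('lock_path', default='/var/lib/demo/tmp')],
        group=group,
    )
    CONF(args=[], project='demo')

    # Group members are addressed (and logged) as group.option.
    print(CONF.demo_group.lock_path)                   # -> /var/lib/demo/tmp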
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
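The values in this dump are merged from the files named in config_file and config_dir near the top of it; in those INI files a group such as api becomes a section header, and when the same option appears in more than one file, the last file parsed wins. A self-contained sketch of that precedence, using a temporary file in place of /etc/nova/nova.conf:

    import os
    import tempfile
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('max_limit', default=1000)], group='api')

    # Stand-in for /etc/nova/nova.conf: grouped options live under an
    # INI section named after the group.
    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[api]\nmax_limit = 500\n')
        path = f.name

    CONF(args=[], project='demo', default_config_files=[path])
    print(CONF.api.max_limit)   # -> 500: the file value overrides the default
    os.unlink(path)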
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
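The cache group above shows the oslo_cache.dict backend with caching enabled; Nova hands these options to oslo.cache, which builds a dogpile.cache region from them. A hedged sketch of that wiring outside Nova, where set_override stands in for the values a real nova.conf would provide:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)                        # registers the [cache] options
    CONF(args=[], project='demo')
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)   # honours backend, expiration_time, ...
    region.set('greeting', 'hello')              # plain dogpile.cache region API
    print(region.get('greeting'))                # -> hello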
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
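cinder.catalog_info above is the triple Nova uses to pick the block-storage endpoint out of the Keystone service catalog; its documented shape is service_type:service_name:interface. A one-liner showing how the logged value decomposes:

    # volumev3:cinderv3:internalURL -> type, name, endpoint interface
    service_type, service_name, interface = 'volumev3:cinderv3:internalURL'.split(':')
    print(service_type, service_name, interface)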
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 WARNING oslo_config.cfg [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 05 01:35:07 compute-0 nova_compute[349548]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 05 01:35:07 compute-0 nova_compute[349548]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 05 01:35:07 compute-0 nova_compute[349548]: and ``live_migration_inbound_addr`` respectively.
Dec 05 01:35:07 compute-0 nova_compute[349548]: ).  Its value may be silently ignored in the future.
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
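The deprecation warning above names the two options that together replace the monolithic live_migration_uri. A minimal nova.conf sketch of an equivalent configuration, assuming the qemu+tls://%s/system template logged above (nova substitutes the migration peer's address for %s in the deprecated template; the inbound address below is a hypothetical example, not taken from this log):

[libvirt]
# scheme 'tls' produces qemu+tls:// migration URIs, consistent with the
# live_migration_with_native_tls = True setting in this option dump
live_migration_scheme = tls
# address of this host on the migration network; migration peers use it
# as the target URI host -- hypothetical example value
live_migration_inbound_addr = 172.17.0.10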
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_secret_uuid        = cbd280d3-cbd8-528b-ace6-2b3a887cdcee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 ceph-mon[192914]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
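The *_weight_multiplier values above feed Nova's weigher stage: each enabled weigher scores every candidate host, the scores are normalized, and hosts are ranked by the multiplier-weighted sum, so a negative multiplier (io_ops above is -1.0) penalizes a metric instead of rewarding it. A simplified sketch of that combination, with hypothetical host names and raw metric values (real weighers and their normalization details differ):

    # Min-max normalize a list of raw metric values into [0, 1].
    def normalize(values):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    # Multipliers taken from the log above; hosts and metrics are made up.
    multipliers = {'ram': 1.0, 'cpu': 1.0, 'io_ops': -1.0}
    hosts = {
        'host-a': {'ram': 2048, 'cpu': 8, 'io_ops': 2},
        'host-b': {'ram': 4096, 'cpu': 4, 'io_ops': 6},
    }

    names = list(hosts)
    scores = {name: 0.0 for name in names}
    for metric, mult in multipliers.items():
        normed = normalize([hosts[n][metric] for n in names])
        for name, v in zip(names, normed):
            scores[name] += mult * v

    # Highest weighted sum wins; io_ops contributes negatively, matching
    # filter_scheduler.io_ops_weight_multiplier = -1.0 above.
    print(max(scores, key=scores.get))
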
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
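One worked reading of the heartbeat pair above (heartbeat_timeout_threshold = 60, heartbeat_rate = 2), under the usual description of these options: the rabbit driver treats the threshold as the window after which a silent AMQP connection is considered dead, and runs its heartbeat check heartbeat_rate times per window, which puts the check interval at 30 seconds for the values logged here. The arithmetic as a two-line sketch:

    # Assumed semantics per the option descriptions; values from the log above.
    heartbeat_timeout_threshold = 60   # seconds before a silent AMQP peer is dropped
    heartbeat_rate = 2                 # checks per threshold window
    check_interval = heartbeat_timeout_threshold / heartbeat_rate
    assert check_interval == 30.0
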
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
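The option dump above (terminated by the asterisk banner) comes from oslo.config's `ConfigOpts.log_opt_values()`, which the log lines themselves cite at cfg.py:2609. A minimal sketch of that mechanism follows, assuming oslo.config is installed; the opts registered here are illustrative stand-ins, not the real service's full option set. Options registered with `secret=True` are rendered as `****`, which is why `oslo_limit.password` and the transport URLs appear masked above.

```python
# Sketch: reproduce the "group.option = value" dump format seen above.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
# Illustrative opts only; nova registers hundreds of these across many groups.
CONF.register_opts(
    [
        cfg.StrOpt('username', default='nova'),
        cfg.StrOpt('password', secret=True),  # rendered as **** in the dump
    ],
    group='oslo_limit',
)

CONF([], project='demo')                 # parse an empty command line
CONF.set_override('password', 'hunter2', group='oslo_limit')
CONF.log_opt_values(LOG, logging.DEBUG)  # emits lines like those above
```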
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.936 349552 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.993 349552 INFO nova.virt.node [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id
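The "Determined node identity ... from /var/lib/nova/compute_id" line reflects nova's stable-node-identity scheme: a UUID persisted to a file so the node keeps the same identity across restarts. A hedged sketch of that idea follows; the create-on-first-run fallback here is illustrative, not nova's exact implementation.

```python
# Sketch: read a persisted node UUID, generating one on first start.
import os
import uuid

COMPUTE_ID_FILE = '/var/lib/nova/compute_id'  # path taken from the log line


def get_node_uuid(path: str = COMPUTE_ID_FILE) -> str:
    """Return the stable compute node UUID, creating it if absent."""
    if os.path.exists(path):
        with open(path) as f:
            return str(uuid.UUID(f.read().strip()))  # validates the format
    node_id = str(uuid.uuid4())
    with open(path, 'w') as f:
        f.write(node_id)
    return node_id
```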
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.994 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 05 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.015 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb5d61f9a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.021 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb5d61f9a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.023 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Connection event '1' reason 'None'
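At this point the driver has connected to `qemu:///system` and is about to dump the host capabilities document. A short sketch with the libvirt Python bindings (assumed installed) shows the equivalent query; read-only access is enough for inspection, and `getCapabilities()` returns the same `<capabilities>` XML that follows in the log.

```python
# Sketch: fetch the host capabilities XML the driver logs below.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
caps_xml = conn.getCapabilities()  # the <capabilities> document dumped below
print(caps_xml[:120])
conn.close()
```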
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.037 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host capabilities <capabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]: 
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <host>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <uuid>6c9ead2d-8495-4e2b-9845-f862956e441e</uuid>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <arch>x86_64</arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model>EPYC-Rome-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <vendor>AMD</vendor>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <microcode version='16777317'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <signature family='23' model='49' stepping='0'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='x2apic'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='tsc-deadline'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='osxsave'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='hypervisor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='tsc_adjust'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='spec-ctrl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='stibp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='arch-capabilities'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='cmp_legacy'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='topoext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='virt-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='lbrv'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='tsc-scale'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='vmcb-clean'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='pause-filter'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='pfthreshold'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='svme-addr-chk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='rdctl-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='skip-l1dfl-vmentry'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='mds-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature name='pschange-mc-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <pages unit='KiB' size='4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <pages unit='KiB' size='2048'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <pages unit='KiB' size='1048576'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <power_management>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <suspend_mem/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </power_management>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <iommu support='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <migration_features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <live/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <uri_transports>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <uri_transport>tcp</uri_transport>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <uri_transport>rdma</uri_transport>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </uri_transports>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </migration_features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <topology>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <cells num='1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <cell id='0'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <memory unit='KiB'>7864320</memory>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <pages unit='KiB' size='2048'>0</pages>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <distances>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <sibling id='0' value='10'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           </distances>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           <cpus num='8'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:           </cpus>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         </cell>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </cells>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </topology>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <cache>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </cache>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <secmodel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model>selinux</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <doi>0</doi>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </secmodel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <secmodel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model>dac</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <doi>0</doi>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </secmodel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </host>
Dec 05 01:35:08 compute-0 nova_compute[349548]: 
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <guest>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <os_type>hvm</os_type>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <arch name='i686'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <wordsize>32</wordsize>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <domain type='qemu'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <domain type='kvm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <pae/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <nonpae/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <acpi default='on' toggle='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <apic default='on' toggle='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <cpuselection/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <deviceboot/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <disksnapshot default='on' toggle='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <externalSnapshot/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </guest>
Dec 05 01:35:08 compute-0 nova_compute[349548]: 
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <guest>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <os_type>hvm</os_type>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <arch name='x86_64'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <wordsize>64</wordsize>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <domain type='qemu'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <domain type='kvm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <acpi default='on' toggle='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <apic default='on' toggle='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <cpuselection/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <deviceboot/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <disksnapshot default='on' toggle='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <externalSnapshot/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </guest>
Dec 05 01:35:08 compute-0 nova_compute[349548]: 
Dec 05 01:35:08 compute-0 nova_compute[349548]: </capabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]: 
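The `<capabilities>` document above carries the host facts the scheduler cares about: CPU arch and model, feature flags, NUMA cell memory and CPU layout, and supported guest machine types. A hedged sketch of extracting a few of those fields with the standard library's ElementTree, matching the element paths visible in the dump:

```python
# Sketch: pull the host CPU model and NUMA layout out of the capabilities XML.
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
root = ET.fromstring(conn.getCapabilities())
conn.close()

print(root.findtext('./host/cpu/arch'))   # x86_64 in the dump above
print(root.findtext('./host/cpu/model'))  # EPYC-Rome-v4 in the dump above

for cell in root.findall('./host/topology/cells/cell'):
    mem_kib = int(cell.findtext('memory'))   # e.g. 7864320
    ncpus = len(cell.findall('./cpus/cpu'))  # e.g. 8
    print(f"NUMA cell {cell.get('id')}: {mem_kib} KiB, {ncpus} CPUs")
```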
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.045 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
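The "Getting domain capabilities ... via machine types" line corresponds to libvirt's per-combination query, `getDomainCapabilities()`, which asks what the emulator supports for one arch/machine/virt-type tuple and returns the `<domainCapabilities>` XML dumped next. A sketch of that call follows; the argument values mirror the log and the invocation is illustrative.

```python
# Sketch: query domain capabilities for one arch/machine-type pair.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom_caps = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',  # emulator binary, as in the XML below
    'i686',                   # guest architecture
    'q35',                    # machine type (alias of pc-q35-rhel9.8.0)
    'kvm',                    # virt type
)
print(dom_caps[:120])         # XML like the <domainCapabilities> dump below
conn.close()
```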
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.051 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 05 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <domain>kvm</domain>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <arch>i686</arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <vcpu max='4096'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <iothreads supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <os supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='firmware'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <loader supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>rom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pflash</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='readonly'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>yes</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='secure'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </loader>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </os>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='maximumMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <vendor>AMD</vendor>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='succor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='custom' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-128'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-256'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-512'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <memoryBacking supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='sourceType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>anonymous</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>memfd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </memoryBacking>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <disk supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='diskDevice'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>disk</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cdrom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>floppy</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>lun</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>fdc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>sata</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <graphics supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vnc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egl-headless</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </graphics>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <video supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='modelType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vga</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cirrus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>none</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>bochs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ramfb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </video>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hostdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='mode'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>subsystem</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='startupPolicy'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>mandatory</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>requisite</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>optional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='subsysType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pci</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='capsType'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='pciBackend'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hostdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <rng supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>random</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <filesystem supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='driverType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>path</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>handle</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtiofs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </filesystem>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <tpm supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-tis</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-crb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emulator</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>external</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendVersion'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>2.0</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </tpm>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <redirdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </redirdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <channel supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </channel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <crypto supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </crypto>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <interface supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>passt</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <panic supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>isa</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>hyperv</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </panic>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <console supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>null</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dev</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pipe</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stdio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>udp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tcp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu-vdagent</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </console>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <gic supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <genid supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backup supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <async-teardown supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <ps2 supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sev supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sgx supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hyperv supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='features'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>relaxed</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vapic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>spinlocks</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vpindex</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>runtime</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>synic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stimer</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reset</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vendor_id</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>frequencies</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reenlightenment</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tlbflush</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ipi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>avic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emsr_bitmap</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>xmm_input</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hyperv>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <launchSecurity supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='sectype'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tdx</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </launchSecurity>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
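[editor's note] The record above is the libvirt domainCapabilities document that nova's _get_domain_capabilities retrieves for one emulator/arch/machine-type combination. A minimal sketch of fetching and reading the same document via the libvirt-python bindings follows; it assumes libvirt-python is installed and qemu:///system is reachable, and the helper names (get_domain_caps, summarize) are illustrative, not nova's code.

#!/usr/bin/env python3
# Sketch only: query the same domainCapabilities XML that nova logs above
# and pull out a few of the fields visible in the dump.
import libvirt
import xml.etree.ElementTree as ET

def get_domain_caps(emulator='/usr/libexec/qemu-kvm', arch='x86_64',
                    machine='pc', virttype='kvm'):
    conn = libvirt.open('qemu:///system')
    try:
        # The libvirt API behind nova's _get_domain_capabilities:
        # virConnectGetDomainCapabilities(emulatorbin, arch, machine, virttype)
        return conn.getDomainCapabilities(emulator, arch, machine, virttype, 0)
    finally:
        conn.close()

def summarize(caps_xml):
    root = ET.fromstring(caps_xml)
    # Maximum vCPU count advertised for this binary/machine type, e.g. <vcpu max='240'/>
    print('max vcpus:', root.find('vcpu').get('max'))
    # CPU models usable without blockers under <mode name='custom'>
    custom = root.find("cpu/mode[@name='custom']")
    usable = [m.text for m in custom.findall('model') if m.get('usable') == 'yes']
    print('usable models:', ', '.join(sorted(usable)))
    # Features that block a named model on this host; <blockers> elements are
    # siblings of <model> inside the custom mode, as in the dump above.
    for blk in custom.findall("blockers[@model='EPYC-Rome']"):
        print('EPYC-Rome blocked by:', [f.get('name') for f in blk.findall('feature')])

if __name__ == '__main__':
    summarize(get_domain_caps())

Run against the host whose capabilities appear in the next record, the usable-models list would include entries such as EPYC-Rome-v4 and EPYC-v1 but not EPYC-Rome itself, whose blockers element names the missing xsaves feature, consistent with the dump. [end editor's note]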
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.061 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 05 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <domain>kvm</domain>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <arch>i686</arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <vcpu max='240'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <iothreads supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <os supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='firmware'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <loader supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>rom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pflash</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='readonly'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>yes</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='secure'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </loader>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </os>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='maximumMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <vendor>AMD</vendor>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='succor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='custom' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-128'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-256'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-512'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <memoryBacking supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='sourceType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>anonymous</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>memfd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </memoryBacking>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <disk supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='diskDevice'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>disk</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cdrom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>floppy</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>lun</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ide</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>fdc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>sata</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <graphics supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vnc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egl-headless</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </graphics>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <video supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='modelType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vga</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cirrus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>none</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>bochs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ramfb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </video>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hostdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='mode'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>subsystem</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='startupPolicy'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>mandatory</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>requisite</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>optional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='subsysType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pci</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='capsType'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='pciBackend'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hostdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <rng supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>random</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <filesystem supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='driverType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>path</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>handle</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtiofs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </filesystem>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <tpm supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-tis</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-crb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emulator</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>external</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendVersion'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>2.0</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </tpm>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <redirdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </redirdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <channel supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </channel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <crypto supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </crypto>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <interface supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>passt</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <panic supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>isa</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>hyperv</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </panic>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <console supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>null</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dev</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pipe</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stdio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>udp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tcp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu-vdagent</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </console>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <gic supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <genid supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backup supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <async-teardown supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <ps2 supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sev supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sgx supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hyperv supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='features'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>relaxed</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vapic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>spinlocks</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vpindex</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>runtime</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>synic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stimer</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reset</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vendor_id</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>frequencies</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reenlightenment</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tlbflush</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ipi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>avic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emsr_bitmap</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>xmm_input</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hyperv>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <launchSecurity supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='sectype'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tdx</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </launchSecurity>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.129 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.130 349552 DEBUG nova.virt.libvirt.volume.mount [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.135 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 05 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <domain>kvm</domain>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <arch>x86_64</arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <vcpu max='4096'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <iothreads supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <os supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='firmware'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>efi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <loader supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>rom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pflash</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='readonly'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>yes</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='secure'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>yes</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </loader>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </os>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='maximumMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <vendor>AMD</vendor>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='succor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='custom' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-128'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-256'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-512'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <memoryBacking supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='sourceType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>anonymous</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>memfd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </memoryBacking>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <disk supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='diskDevice'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>disk</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cdrom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>floppy</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>lun</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>fdc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>sata</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <graphics supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vnc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egl-headless</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </graphics>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <video supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='modelType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vga</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cirrus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>none</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>bochs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ramfb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </video>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hostdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='mode'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>subsystem</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='startupPolicy'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>mandatory</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>requisite</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>optional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='subsysType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pci</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='capsType'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='pciBackend'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hostdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <rng supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>random</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <filesystem supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='driverType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>path</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>handle</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtiofs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </filesystem>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <tpm supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-tis</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-crb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emulator</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>external</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendVersion'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>2.0</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </tpm>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <redirdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </redirdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <channel supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </channel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <crypto supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </crypto>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <interface supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>passt</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <panic supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>isa</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>hyperv</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </panic>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <console supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>null</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dev</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pipe</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stdio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>udp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tcp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu-vdagent</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </console>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <gic supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <genid supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backup supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <async-teardown supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <ps2 supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sev supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sgx supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hyperv supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='features'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>relaxed</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vapic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>spinlocks</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vpindex</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>runtime</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>synic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stimer</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reset</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vendor_id</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>frequencies</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reenlightenment</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tlbflush</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ipi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>avic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emsr_bitmap</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>xmm_input</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hyperv>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <launchSecurity supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='sectype'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tdx</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </launchSecurity>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.253 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 05 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <path>/usr/libexec/qemu-kvm</path>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <domain>kvm</domain>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <arch>x86_64</arch>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <vcpu max='240'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <iothreads supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <os supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='firmware'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <loader supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>rom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pflash</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='readonly'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>yes</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='secure'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>no</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </loader>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </os>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-passthrough' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='hostPassthroughMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='maximum' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='maximumMigratable'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>on</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>off</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='host-model' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <vendor>AMD</vendor>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='x2apic'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-deadline'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='hypervisor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc_adjust'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='spec-ctrl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='stibp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='cmp_legacy'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='overflow-recov'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='succor'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='amd-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='virt-ssbd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lbrv'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='tsc-scale'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='vmcb-clean'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='flushbyasid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pause-filter'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='pfthreshold'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='svme-addr-chk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <feature policy='disable' name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <mode name='custom' supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Broadwell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cascadelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Cooperlake-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Denverton-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Dhyana-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Genoa-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='auto-ibrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Milan-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amd-psfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='no-nested-data-bp'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='null-sel-clr-base'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='stibp-always-on'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-Rome-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='EPYC-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='GraniteRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-128'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-256'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx10-512'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='prefetchiti'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Haswell-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-noTSX'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v6'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Icelake-Server-v7'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='IvyBridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='KnightsMill-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4fmaps'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-4vnniw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512er'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512pf'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G4-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Opteron_G5-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fma4'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tbm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xop'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SapphireRapids-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='amx-tile'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-bf16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-fp16'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512-vpopcntdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bitalg'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vbmi2'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrc'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fzrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='la57'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='taa-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='tsx-ldtrk'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xfd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='SierraForest-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ifma'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-ne-convert'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx-vnni-int8'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='bus-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cmpccxadd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fbsdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='fsrs'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ibrs-all'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mcdt-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pbrsb-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='psdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='sbdr-ssdp-no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='serialize'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vaes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='vpclmulqdq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Client-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='hle'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='rtm'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Skylake-Server-v5'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512bw'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512cd'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512dq'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512f'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='avx512vl'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='invpcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pcid'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='pku'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='mpx'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v2'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v3'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='core-capability'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='split-lock-detect'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='Snowridge-v4'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='cldemote'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='erms'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='gfni'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdir64b'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='movdiri'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='xsaves'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='athlon-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='core2duo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='coreduo-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='n270-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='ss'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <blockers model='phenom-v1'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnow'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <feature name='3dnowext'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </blockers>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </mode>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <memoryBacking supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <enum name='sourceType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>anonymous</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <value>memfd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </memoryBacking>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <disk supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='diskDevice'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>disk</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cdrom</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>floppy</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>lun</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ide</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>fdc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>sata</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <graphics supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vnc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egl-headless</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </graphics>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <video supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='modelType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vga</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>cirrus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>none</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>bochs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ramfb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </video>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hostdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='mode'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>subsystem</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='startupPolicy'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>mandatory</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>requisite</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>optional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='subsysType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pci</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>scsi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='capsType'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='pciBackend'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hostdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <rng supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtio-non-transitional</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>random</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>egd</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <filesystem supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='driverType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>path</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>handle</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>virtiofs</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </filesystem>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <tpm supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-tis</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tpm-crb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emulator</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>external</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendVersion'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>2.0</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </tpm>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <redirdev supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='bus'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>usb</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </redirdev>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <channel supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </channel>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <crypto supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendModel'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>builtin</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </crypto>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <interface supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='backendType'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>default</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>passt</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <panic supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='model'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>isa</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>hyperv</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </panic>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <console supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='type'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>null</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vc</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pty</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dev</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>file</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>pipe</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stdio</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>udp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tcp</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>unix</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>qemu-vdagent</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>dbus</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </console>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   <features>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <gic supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <vmcoreinfo supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <genid supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backingStoreInput supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <backup supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <async-teardown supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <ps2 supported='yes'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sev supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <sgx supported='no'/>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <hyperv supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='features'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>relaxed</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vapic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>spinlocks</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vpindex</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>runtime</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>synic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>stimer</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reset</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>vendor_id</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>frequencies</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>reenlightenment</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tlbflush</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>ipi</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>avic</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>emsr_bitmap</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>xmm_input</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <spinlocks>4095</spinlocks>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <stimer_direct>on</stimer_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_direct>on</tlbflush_direct>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <tlbflush_extended>on</tlbflush_extended>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </defaults>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </hyperv>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     <launchSecurity supported='yes'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       <enum name='sectype'>
Dec 05 01:35:08 compute-0 nova_compute[349548]:         <value>tdx</value>
Dec 05 01:35:08 compute-0 nova_compute[349548]:       </enum>
Dec 05 01:35:08 compute-0 nova_compute[349548]:     </launchSecurity>
Dec 05 01:35:08 compute-0 nova_compute[349548]:   </features>
Dec 05 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec 05 01:35:08 compute-0 nova_compute[349548]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
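The XML dump above is the libvirt domainCapabilities document that nova fetches at startup through its _get_domain_capabilities helper. A minimal sketch of pulling the same document with the libvirt Python bindings and listing the CPU models flagged usable='yes' (the connection URI, arch/virttype arguments, and the custom-mode element path are assumptions based on this dump):

    # Sketch: fetch domainCapabilities as nova does and filter usable CPU models.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()

    root = ET.fromstring(caps_xml)
    usable = sorted({m.text for m in root.findall(".//cpu/mode[@name='custom']/model")
                     if m.get('usable') == 'yes'})
    print(usable)  # on this host the list includes 'Westmere' and 'Westmere-IBRS'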
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.388 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.388 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Secure Boot support detected
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.393 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.394 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
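The repeated INFO line records a precedence rule: with live_migration_permit_post_copy set to True and post-copy available on the host, nova prefers post-copy and leaves auto-converge unused. A hedged paraphrase of that decision (function and parameter names are illustrative, not nova's internals):

    # Illustrative paraphrase of the precedence logged above; not nova's code.
    def migration_slowdown_strategy(permit_post_copy: bool,
                                    post_copy_supported: bool,
                                    permit_auto_converge: bool) -> str:
        # Post-copy wins whenever it is both permitted and supported.
        if permit_post_copy and post_copy_supported:
            return 'post-copy'
        if permit_auto_converge:
            return 'auto-converge'
        return 'none'

    print(migration_slowdown_strategy(True, True, True))  # -> 'post-copy'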
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.420 349552 DEBUG nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.676 349552 INFO nova.virt.node [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.705 349552 WARNING nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute nodes ['acf26aa2-2fef-4a53-8a44-6cfa2eb15d17'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
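The node-identity line shows nova reading a persistent UUID from /var/lib/nova/compute_id; since no matching compute node row exists in the database yet, the warning above is expected on a first start. The read-or-create pattern behind such an identity file looks roughly like this (an illustrative sketch, not nova's implementation):

    # Illustrative read-or-create of a persistent node identity file at the
    # path shown in the log; nova's real logic differs in its details.
    import os
    import uuid

    COMPUTE_ID = '/var/lib/nova/compute_id'

    def local_node_uuid(path: str = COMPUTE_ID) -> uuid.UUID:
        if os.path.exists(path):
            with open(path) as f:
                return uuid.UUID(f.read().strip())
        node = uuid.uuid4()
        with open(path, 'w') as f:
            f.write(str(node))
        return node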
Dec 05 01:35:08 compute-0 sudo[349846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:08 compute-0 sudo[349846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:08 compute-0 sudo[349846]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.740 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.770 349552 WARNING nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.772 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:35:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:08 compute-0 sudo[349871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:35:08 compute-0 sudo[349871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:08 compute-0 sudo[349871]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:08 compute-0 ceph-mon[192914]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:08 compute-0 sudo[349897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:08 compute-0 sudo[349897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:08 compute-0 sudo[349897]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:09 compute-0 sudo[349941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 01:35:09 compute-0 sudo[349941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781006965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.260 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
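The two processutils lines bracket a storage probe: nova shells out to `ceph df --format=json` with the client id and conf file shown, then parses the JSON to size the RBD-backed disk pool. A minimal sketch of the same probe (the key names under 'stats' are the usual `ceph df` JSON fields and should be treated as assumptions):

    # Sketch of the probe logged above; 'stats' key names are assumptions
    # based on the usual `ceph df --format=json` output.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print('free: %.1f GiB of %.1f GiB' % (
        stats['total_avail_bytes'] / 2**30,
        stats['total_bytes'] / 2**30,
    ))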
Dec 05 01:35:09 compute-0 sudo[349941]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:09 compute-0 sudo[349987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:09 compute-0 sudo[349987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:09 compute-0 sudo[349987]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:09 compute-0 sudo[350012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:35:09 compute-0 sudo[350012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:09 compute-0 sudo[350012]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.607 349552 WARNING nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.608 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4540MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.609 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.609 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.628 349552 WARNING nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] No compute node record for compute-0.ctlplane.example.com:acf26aa2-2fef-4a53-8a44-6cfa2eb15d17: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 could not be found.
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.650 349552 INFO nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17
Dec 05 01:35:09 compute-0 sudo[350037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:09 compute-0 sudo[350037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:09 compute-0 sudo[350037]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:09 compute-0 sudo[350062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.742 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.743 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:35:09 compute-0 sudo[350062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/781006965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:10 compute-0 sudo[350062]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 447e25ae-32f6-433e-a5f0-e1c20a12f23c does not exist
Dec 05 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f829ec47-0207-42bb-b5b9-ae6fc1004b7b does not exist
Dec 05 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 91037b98-9bd8-452a-9c69-71cbdad1ac2a does not exist
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:35:10 compute-0 nova_compute[349548]: 2025-12-05 01:35:10.625 349552 INFO nova.scheduler.client.report [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [req-c3e6641d-71e6-4c9e-9fd5-10cd0ee643b3] Created resource provider record via placement API for resource provider with UUID acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 and name compute-0.ctlplane.example.com.
Dec 05 01:35:10 compute-0 sudo[350116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:10 compute-0 sudo[350116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:10 compute-0 sudo[350116]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:10 compute-0 sudo[350141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:35:10 compute-0 sudo[350141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:10 compute-0 sudo[350141]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:35:10 compute-0 ceph-mon[192914]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:10 compute-0 sudo[350166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:10 compute-0 sudo[350166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:10 compute-0 sudo[350166]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.032 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:35:11 compute-0 sudo[350191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:35:11 compute-0 sudo[350191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:35:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951102783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.578 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.588 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 05 01:35:11 compute-0 nova_compute[349548]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.588 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] kernel doesn't support AMD SEV
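These three lines show the kernel-side SEV check: nova reads /sys/module/kvm_amd/parameters/sev, finds "N", and concludes the kernel lacks AMD SEV support. A small sketch of the same sysfs probe (the interpretation of the token is an assumption; on Intel hosts the file simply does not exist):

    # Sketch of the sysfs probe logged above. Treats 'Y'/'y'/'1' as enabled
    # and a missing file (e.g. on Intel hosts) as unsupported -- an
    # assumption, not nova's exact parsing.
    def kernel_supports_amd_sev(path: str = '/sys/module/kvm_amd/parameters/sev') -> bool:
        try:
            with open(path) as f:
                return f.read().strip() in ('Y', 'y', '1')
        except FileNotFoundError:
            return False

    print(kernel_supports_amd_sev())  # -> False on this host ("N")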
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.589 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.590 349552 DEBUG nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.600158947 +0000 UTC m=+0.067554466 container create 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.56263175 +0000 UTC m=+0.030027239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:11 compute-0 systemd[1]: Started libpod-conmon-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope.
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.664 349552 DEBUG nova.scheduler.client.report [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updated inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.665 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.665 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
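The inventory dicts sent to placement carry total, reserved, and allocation_ratio per resource class; usable capacity follows the standard placement formula, capacity = (total - reserved) * allocation_ratio. Worked through with this host's numbers:

    # Worked example using the inventory values logged above and the standard
    # placement capacity formula.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1

So placement will admit up to 32 vCPUs and 7168 MB of guest RAM against this node despite the physical 8 cores and 7680 MB.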
Dec 05 01:35:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.733115597 +0000 UTC m=+0.200511166 container init 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.746008377 +0000 UTC m=+0.213403886 container start 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.753553828 +0000 UTC m=+0.220949367 container attach 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:35:11 compute-0 cranky_payne[350292]: 167 167
Dec 05 01:35:11 compute-0 systemd[1]: libpod-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope: Deactivated successfully.
Dec 05 01:35:11 compute-0 podman[350297]: 2025-12-05 01:35:11.827551243 +0000 UTC m=+0.052473246 container died 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:35:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ae61881b73d619828450aaab5ec25ce0fd7c0fd2b09509d289e9a51dc1fadbf-merged.mount: Deactivated successfully.
Dec 05 01:35:11 compute-0 podman[350297]: 2025-12-05 01:35:11.895652013 +0000 UTC m=+0.120573966 container remove 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.894 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 01:35:11 compute-0 systemd[1]: libpod-conmon-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope: Deactivated successfully.
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.957 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.958 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.960 349552 DEBUG nova.service [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 05 01:35:12 compute-0 nova_compute[349548]: 2025-12-05 01:35:12.072 349552 DEBUG nova.service [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 05 01:35:12 compute-0 nova_compute[349548]: 2025-12-05 01:35:12.072 349552 DEBUG nova.servicegroup.drivers.db [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 05 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.202452434 +0000 UTC m=+0.090630430 container create d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.174108983 +0000 UTC m=+0.062287049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:12 compute-0 systemd[1]: Started libpod-conmon-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope.
Dec 05 01:35:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.390347477 +0000 UTC m=+0.278525563 container init d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.409864542 +0000 UTC m=+0.298042538 container start d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.414877442 +0000 UTC m=+0.303055558 container attach d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:35:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1951102783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:35:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:13 compute-0 sshd-session[350346]: Accepted publickey for zuul from 192.168.122.30 port 54032 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:35:13 compute-0 systemd-logind[792]: New session 58 of user zuul.
Dec 05 01:35:13 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec 05 01:35:13 compute-0 sshd-session[350346]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:35:13 compute-0 ceph-mon[192914]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:13 compute-0 condescending_mahavira[350333]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:35:13 compute-0 condescending_mahavira[350333]: --> relative data size: 1.0
Dec 05 01:35:13 compute-0 condescending_mahavira[350333]: --> All data devices are unavailable
Dec 05 01:35:13 compute-0 systemd[1]: libpod-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Deactivated successfully.
Dec 05 01:35:13 compute-0 systemd[1]: libpod-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Consumed 1.341s CPU time.
Dec 05 01:35:13 compute-0 podman[350316]: 2025-12-05 01:35:13.827381938 +0000 UTC m=+1.715560024 container died d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081-merged.mount: Deactivated successfully.
Dec 05 01:35:13 compute-0 podman[350316]: 2025-12-05 01:35:13.934293332 +0000 UTC m=+1.822471328 container remove d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:35:13 compute-0 systemd[1]: libpod-conmon-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Deactivated successfully.
Dec 05 01:35:13 compute-0 sudo[350191]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:14 compute-0 sudo[350429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:14 compute-0 sudo[350429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:14 compute-0 sudo[350429]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:14 compute-0 sudo[350475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:35:14 compute-0 sudo[350475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:14 compute-0 sudo[350475]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:14 compute-0 sudo[350526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:14 compute-0 sudo[350526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:14 compute-0 sudo[350526]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:14 compute-0 sudo[350555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:35:14 compute-0 sudo[350555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:14 compute-0 podman[350636]: 2025-12-05 01:35:14.763031678 +0000 UTC m=+0.124660250 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 05 01:35:14 compute-0 podman[350633]: 2025-12-05 01:35:14.789568558 +0000 UTC m=+0.122491569 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:35:14 compute-0 podman[350626]: 2025-12-05 01:35:14.803815846 +0000 UTC m=+0.187232676 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:35:14 compute-0 podman[350628]: 2025-12-05 01:35:14.810937785 +0000 UTC m=+0.157834046 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_managed=true)
Dec 05 01:35:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:14 compute-0 podman[350654]: 2025-12-05 01:35:14.863500362 +0000 UTC m=+0.175923661 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:35:14 compute-0 python3.9[350627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:35:14 compute-0 ceph-mon[192914]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:14 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.003599521 +0000 UTC m=+0.071605389 container create 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:35:15 compute-0 systemd[1]: Started libpod-conmon-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope.
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:14.977518273 +0000 UTC m=+0.045524171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.122310624 +0000 UTC m=+0.190316582 container init 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.139432612 +0000 UTC m=+0.207438470 container start 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:35:15 compute-0 intelligent_montalcini[350782]: 167 167
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.148588227 +0000 UTC m=+0.216594125 container attach 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:35:15 compute-0 systemd[1]: libpod-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope: Deactivated successfully.
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.153145754 +0000 UTC m=+0.221151622 container died 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06defcf4553b9eb207e833cae5ae291bea64d627ba968cc472fe4021ca41f60-merged.mount: Deactivated successfully.
Dec 05 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.23361875 +0000 UTC m=+0.301624608 container remove 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:35:15 compute-0 systemd[1]: libpod-conmon-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope: Deactivated successfully.
Dec 05 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.445358659 +0000 UTC m=+0.064726098 container create 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.413805788 +0000 UTC m=+0.033173217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:15 compute-0 systemd[1]: Started libpod-conmon-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope.
Dec 05 01:35:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.587210307 +0000 UTC m=+0.206577716 container init 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.608123501 +0000 UTC m=+0.227490940 container start 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.614465728 +0000 UTC m=+0.233833167 container attach 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:35:16
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:16 compute-0 heuristic_euler[350846]: {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     "0": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "devices": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "/dev/loop3"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             ],
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_name": "ceph_lv0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_size": "21470642176",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "name": "ceph_lv0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "tags": {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_name": "ceph",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.crush_device_class": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.encrypted": "0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_id": "0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.vdo": "0"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             },
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "vg_name": "ceph_vg0"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         }
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     ],
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     "1": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "devices": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "/dev/loop4"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             ],
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_name": "ceph_lv1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_size": "21470642176",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "name": "ceph_lv1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "tags": {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_name": "ceph",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.crush_device_class": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.encrypted": "0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_id": "1",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.vdo": "0"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             },
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "vg_name": "ceph_vg1"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         }
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     ],
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     "2": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "devices": [
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "/dev/loop5"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             ],
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_name": "ceph_lv2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_size": "21470642176",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "name": "ceph_lv2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "tags": {
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.cluster_name": "ceph",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.crush_device_class": "",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.encrypted": "0",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osd_id": "2",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:                 "ceph.vdo": "0"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             },
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "type": "block",
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:             "vg_name": "ceph_vg2"
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:         }
Dec 05 01:35:16 compute-0 heuristic_euler[350846]:     ]
Dec 05 01:35:16 compute-0 heuristic_euler[350846]: }
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:35:16 compute-0 systemd[1]: libpod-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope: Deactivated successfully.
Dec 05 01:35:16 compute-0 podman[350831]: 2025-12-05 01:35:16.519457881 +0000 UTC m=+1.138825320 container died 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7-merged.mount: Deactivated successfully.
Dec 05 01:35:16 compute-0 podman[350831]: 2025-12-05 01:35:16.602446227 +0000 UTC m=+1.221813646 container remove 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:35:16 compute-0 systemd[1]: libpod-conmon-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope: Deactivated successfully.
Dec 05 01:35:16 compute-0 sudo[350555]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:16 compute-0 sudo[351010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxqbiwpxtlgjekqyimdbmckpytkscmhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898515.6546156-36-14393074080826/AnsiballZ_systemd_service.py'
Dec 05 01:35:16 compute-0 podman[350954]: 2025-12-05 01:35:16.651197017 +0000 UTC m=+0.139415411 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git)
Dec 05 01:35:16 compute-0 sudo[351010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:16 compute-0 sudo[351016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:16 compute-0 sudo[351016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:16 compute-0 sudo[351016]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:16 compute-0 sudo[351041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:35:16 compute-0 sudo[351041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:16 compute-0 sudo[351041]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:16 compute-0 ceph-mon[192914]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:16 compute-0 sudo[351066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:16 compute-0 sudo[351066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:16 compute-0 sudo[351066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:16 compute-0 python3.9[351015]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:35:16 compute-0 systemd[1]: Reloading.
Dec 05 01:35:17 compute-0 systemd-rc-local-generator[351142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:35:17 compute-0 systemd-sysv-generator[351147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:35:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:17 compute-0 sudo[351092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:35:17 compute-0 sudo[351010]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:17 compute-0 sudo[351092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.041160185 +0000 UTC m=+0.078079460 container create 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.010832988 +0000 UTC m=+0.047752333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:18 compute-0 systemd[1]: Started libpod-conmon-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope.
Dec 05 01:35:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.186517211 +0000 UTC m=+0.223436546 container init 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.202419985 +0000 UTC m=+0.239339250 container start 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.210037187 +0000 UTC m=+0.246956532 container attach 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:35:18 compute-0 nostalgic_kapitsa[351304]: 167 167
Dec 05 01:35:18 compute-0 systemd[1]: libpod-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope: Deactivated successfully.
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.214498452 +0000 UTC m=+0.251417737 container died 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aab5373b672a0897d673a1a7214d3c3c72b7b305ddac25f5c6eaf2b34104524-merged.mount: Deactivated successfully.
Dec 05 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.290100141 +0000 UTC m=+0.327019396 container remove 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:35:18 compute-0 systemd[1]: libpod-conmon-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope: Deactivated successfully.
Dec 05 01:35:18 compute-0 python3.9[351373]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.519546084 +0000 UTC m=+0.078161642 container create 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.487806559 +0000 UTC m=+0.046422167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:35:18 compute-0 systemd[1]: Started libpod-conmon-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope.
Dec 05 01:35:18 compute-0 network[351414]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:35:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:18 compute-0 network[351415]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
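The four xfs warnings above fire because these overlay mounts sit on an XFS filesystem whose inodes carry 32-bit second counters (bigtime not enabled), so timestamps saturate at 0x7fffffff seconds past the Unix epoch. A quick Python check, included here only as an illustration, converts the kernel's printed limit to a date:

    # Convert the kernel's 0x7fffffff timestamp cap to a calendar date.
    from datetime import datetime, timezone

    limit = 0x7fffffff  # largest 32-bit signed time_t, as printed above
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the "year 2038" boundary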
Dec 05 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.670662481 +0000 UTC m=+0.229278059 container init 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:35:18 compute-0 network[351417]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.698029715 +0000 UTC m=+0.256645253 container start 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.704168576 +0000 UTC m=+0.262784234 container attach 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:35:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:18 compute-0 ceph-mon[192914]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:19 compute-0 quirky_kalam[351405]: {
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_id": 0,
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "type": "bluestore"
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     },
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_id": 1,
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "type": "bluestore"
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     },
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_id": 2,
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:         "type": "bluestore"
Dec 05 01:35:19 compute-0 quirky_kalam[351405]:     }
Dec 05 01:35:19 compute-0 quirky_kalam[351405]: }
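The JSON printed by the quirky_kalam container is an OSD inventory in the shape emitted by ceph-volume's JSON output (the exact command line is not captured in the log): a map from OSD UUID to its BlueStore logical volume. A minimal parsing sketch, assuming the container's stdout has been captured into a string; the variable name and the trimmed one-entry sample are illustrative, not taken verbatim from the tooling:

    import json

    # Illustrative stand-in for the container's stdout (one entry shown,
    # copied from the log output above).
    inventory_json = '''
    {
        "8c4de221-4fda-4bb1-b794-fc4329742186": {
            "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
            "type": "bluestore"
        }
    }
    '''

    # Map each OSD id to the LV backing it.
    for osd_uuid, info in json.loads(inventory_json).items():
        print(f"osd.{info['osd_id']}: {info['device']} ({info['type']})")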
Dec 05 01:35:19 compute-0 systemd[1]: libpod-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Deactivated successfully.
Dec 05 01:35:19 compute-0 podman[351379]: 2025-12-05 01:35:19.913030759 +0000 UTC m=+1.471646357 container died 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:35:19 compute-0 systemd[1]: libpod-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Consumed 1.218s CPU time.
Dec 05 01:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421-merged.mount: Deactivated successfully.
Dec 05 01:35:20 compute-0 podman[351379]: 2025-12-05 01:35:20.011475226 +0000 UTC m=+1.570090764 container remove 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:35:20 compute-0 systemd[1]: libpod-conmon-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Deactivated successfully.
Dec 05 01:35:20 compute-0 sudo[351092]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:35:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:35:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5782a631-b2fc-41df-bab4-dcedef2b4bbe does not exist
Dec 05 01:35:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3d39be50-59d4-48ab-b446-3c4ebcb85de8 does not exist
Dec 05 01:35:20 compute-0 sudo[351474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:35:20 compute-0 sudo[351474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:20 compute-0 sudo[351474]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:20 compute-0 sudo[351504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:35:20 compute-0 sudo[351504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:35:20 compute-0 sudo[351504]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:35:21 compute-0 ceph-mon[192914]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:21 compute-0 podman[351555]: 2025-12-05 01:35:21.924555722 +0000 UTC m=+0.130875183 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:35:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:22 compute-0 ceph-mon[192914]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:23 compute-0 podman[351626]: 2025-12-05 01:35:23.678541997 +0000 UTC m=+0.111484712 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
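The recurring container health_status events above are produced by podman's per-container healthcheck timers, which run the configured 'test' command and record the result. The same check can be triggered by hand; a small sketch, assuming podman is on PATH and reusing the container name from the log:

    import subprocess

    # Run the container's configured healthcheck once, as the systemd
    # timer behind the health_status events above does.
    rc = subprocess.run(["podman", "healthcheck", "run", "node_exporter"]).returncode
    print("healthy" if rc == 0 else "unhealthy")  # exit 0 means the check passed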
Dec 05 01:35:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:24 compute-0 ceph-mon[192914]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:25 compute-0 sudo[351821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdycsgygofmwgzsxnkjupdmjcrvpllo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898524.8256927-55-114532151370363/AnsiballZ_systemd_service.py'
Dec 05 01:35:25 compute-0 sudo[351821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:25 compute-0 python3.9[351823]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:35:25 compute-0 sudo[351821]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
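Each pg_autoscaler line applies the same arithmetic: a pool's raw pg target is its share of cluster capacity times its pg_autoscale_bias times an overall PG budget, and the result is then quantized to a power of two (the "quantized to" value, floored by the pool's current/minimum pg_num). From the numbers logged above, the budget works out to 300, which is consistent with the default of 100 target PGs per OSD across this host's three OSDs; that inference is an assumption, not a logged value. A worked check in Python:

    # Reproduce the 'cephfs.cephfs.meta' line from the pg_autoscaler output.
    ratio = 5.087256625643029e-07  # pool's share of space, from the log
    bias = 4.0                     # pg_autoscale_bias, from the log
    pg_budget = 300                # assumed: 100 target PGs/OSD * 3 OSDs

    pg_target = ratio * bias * pg_budget
    print(pg_target)  # 0.0006104707950771635, matching the logged pg target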
Dec 05 01:35:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:26 compute-0 ceph-mon[192914]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:26 compute-0 sudo[351974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkypylydnrnapiltkfagnmttlrrtgnvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898526.1708744-65-266380102742778/AnsiballZ_file.py'
Dec 05 01:35:26 compute-0 sudo[351974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:27 compute-0 python3.9[351976]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:27 compute-0 sudo[351974]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:28 compute-0 sudo[352126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdgnqjkytuqtyiyenpcpnbvfgajslmce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898527.481473-73-182876607772815/AnsiballZ_file.py'
Dec 05 01:35:28 compute-0 sudo[352126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:28 compute-0 python3.9[352128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:28 compute-0 sudo[352126]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:28 compute-0 ceph-mon[192914]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:29 compute-0 podman[158197]: time="2025-12-05T01:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8126 "" "Go-http-client/1.1"
Dec 05 01:35:29 compute-0 sudo[352278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcizctetqehkltrzsnbtwbprgsbwcgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898529.1559339-82-251397702364934/AnsiballZ_command.py'
Dec 05 01:35:29 compute-0 sudo[352278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:30 compute-0 python3.9[352280]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
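The shell snippet recorded in this command task is a guard: disable certmonger only if it is currently active, and mask it only when no local unit file overrides it. An equivalent sketch in Python, mirroring the logic only (this is not the Ansible module's implementation), assuming systemctl is on PATH:

    import os
    import subprocess

    def is_active(unit: str) -> bool:
        # systemctl is-active exits 0 only for active units
        return subprocess.run(["systemctl", "is-active", unit],
                              capture_output=True).returncode == 0

    unit = "certmonger.service"
    if is_active(unit):
        subprocess.run(["systemctl", "disable", "--now", unit], check=True)
        # Mask only when no local unit file exists, as in the logged guard.
        if not os.path.exists(f"/etc/systemd/system/{unit}"):
            subprocess.run(["systemctl", "mask", unit], check=True)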
Dec 05 01:35:30 compute-0 sudo[352278]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:30 compute-0 ceph-mon[192914]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:35:31 compute-0 python3.9[352432]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:35:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:32 compute-0 podman[352556]: 2025-12-05 01:35:32.636297822 +0000 UTC m=+0.124357400 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 01:35:32 compute-0 sudo[352598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqhnlhgqxzvxxywekxqdszzzevirqexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898532.0513253-100-100085825117968/AnsiballZ_systemd_service.py'
Dec 05 01:35:32 compute-0 sudo[352598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:32 compute-0 ceph-mon[192914]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:33 compute-0 python3.9[352600]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:35:33 compute-0 systemd[1]: Reloading.
Dec 05 01:35:33 compute-0 systemd-rc-local-generator[352622]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:35:33 compute-0 systemd-sysv-generator[352629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:35:33 compute-0 sudo[352598]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:34 compute-0 sudo[352785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbyacoyubzzwwxkiokbbvrvejkjcumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898533.7957995-108-94981840475989/AnsiballZ_command.py'
Dec 05 01:35:34 compute-0 sudo[352785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:34 compute-0 python3.9[352787]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:35:34 compute-0 sudo[352785]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:34 compute-0 ceph-mon[192914]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:35 compute-0 sudo[352938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axcxjtuciwwboecuftespjxjdpzcjtzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898534.971967-117-47007546400758/AnsiballZ_file.py'
Dec 05 01:35:35 compute-0 sudo[352938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:35 compute-0 python3.9[352940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:35:35 compute-0 sudo[352938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:36 compute-0 python3.9[353090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:35:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:36 compute-0 ceph-mon[192914]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:37 compute-0 nova_compute[349548]: 2025-12-05 01:35:37.074 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:35:37 compute-0 nova_compute[349548]: 2025-12-05 01:35:37.107 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:35:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:37 compute-0 python3.9[353242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:38 compute-0 python3.9[353318]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:35:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:38 compute-0 ceph-mon[192914]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:39 compute-0 sudo[353468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxbihrnyvbwfwgdzjvgxoovrybboimc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898538.636208-145-10856003616452/AnsiballZ_group.py'
Dec 05 01:35:39 compute-0 sudo[353468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:39 compute-0 python3.9[353470]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 05 01:35:39 compute-0 sudo[353468]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:40 compute-0 sudo[353620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idcjwzgyqvbdmrhqjvzpahqhrzkqppmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898540.3274012-156-113728395392260/AnsiballZ_getent.py'
Dec 05 01:35:40 compute-0 sudo[353620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:35:40 compute-0 ceph-mon[192914]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:41 compute-0 python3.9[353622]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 05 01:35:41 compute-0 sudo[353620]: pam_unix(sudo:session): session closed for user root
Dec 05 01:35:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:42 compute-0 ceph-mon[192914]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:43 compute-0 python3.9[353773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:43 compute-0 python3.9[353849]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:44 compute-0 python3.9[353999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:44 compute-0 ceph-mon[192914]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:44 compute-0 podman[354002]: 2025-12-05 01:35:44.94278846 +0000 UTC m=+0.114553373 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:35:44 compute-0 podman[354000]: 2025-12-05 01:35:44.967604198 +0000 UTC m=+0.136953993 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:35:45 compute-0 podman[354043]: 2025-12-05 01:35:45.046489878 +0000 UTC m=+0.096916968 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:35:45 compute-0 podman[354045]: 2025-12-05 01:35:45.08494899 +0000 UTC m=+0.118733042 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:35:45 compute-0 podman[354047]: 2025-12-05 01:35:45.094131448 +0000 UTC m=+0.138425565 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:35:45 compute-0 python3.9[354176]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:46 compute-0 python3.9[354326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:35:46 compute-0 python3.9[354402]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:46 compute-0 ceph-mon[192914]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:46 compute-0 podman[354403]: 2025-12-05 01:35:46.968280101 +0000 UTC m=+0.151279807 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, container_name=kepler, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec 05 01:35:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:47 compute-0 python3.9[354569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:35:48 compute-0 python3.9[354721]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:35:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:48 compute-0 ceph-mon[192914]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:49 compute-0 python3.9[354874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:50 compute-0 python3.9[354950]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
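Note the mode=420 in the file task above: Ansible logs the mode it received as a decimal integer, and 420 decimal is 0o644 octal, i.e. the usual rw-r--r--. A one-line check:

    print(oct(420))      # '0o644' -> rw-r--r--
    assert 420 == 0o644  # same permissions, two notations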
Dec 05 01:35:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:50 compute-0 ceph-mon[192914]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:51 compute-0 python3.9[355100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:52 compute-0 podman[355101]: 2025-12-05 01:35:52.152128109 +0000 UTC m=+0.137597612 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:35:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:52 compute-0 python3.9[355196]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:52 compute-0 ceph-mon[192914]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:53 compute-0 python3.9[355346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:54 compute-0 podman[355347]: 2025-12-05 01:35:54.077734669 +0000 UTC m=+0.134730941 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:35:54 compute-0 python3.9[355444]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:54 compute-0 ceph-mon[192914]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:55 compute-0 python3.9[355594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:56 compute-0 python3.9[355670]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
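
The acquire/release pair above is oslo.concurrency's lockutils guarding neutron's child-process check with a named lock. A minimal sketch of the same pattern, with a stand-in body rather than neutron's actual code:

    from oslo_concurrency import lockutils

    # Entering and leaving this function emits the same kind of
    # "acquired" / "released" debug lines seen in the journal above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # stand-in for ProcessMonitor._check_child_processes

    check_child_processes()
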
Dec 05 01:35:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:56 compute-0 ceph-mon[192914]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:57 compute-0 python3.9[355820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:35:57 compute-0 python3.9[355896]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:58 compute-0 ceph-mon[192914]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:35:58 compute-0 python3.9[356046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:35:59 compute-0 python3.9[356122]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:35:59 compute-0 podman[158197]: time="2025-12-05T01:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
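
The two GETs above are the podman system service answering libpod REST calls over its unix socket; the socket path unix:///run/podman/podman.sock appears further down in this log as the podman_exporter's CONTAINER_HOST. A sketch of issuing the same containers/json query from Python, assuming that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])
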
Dec 05 01:36:00 compute-0 python3.9[356272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:00 compute-0 ceph-mon[192914]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:01 compute-0 python3.9[356348]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:36:02 compute-0 python3.9[356498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:02 compute-0 python3.9[356574]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:02 compute-0 ceph-mon[192914]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:03 compute-0 podman[356575]: 2025-12-05 01:36:03.004841571 +0000 UTC m=+0.116722785 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:36:04 compute-0 python3.9[356743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:04 compute-0 ceph-mon[192914]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:05 compute-0 python3.9[356819]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:06 compute-0 python3.9[356969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec 05 01:36:06 compute-0 ceph-mon[192914]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.071 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.071 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.124 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.127 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.128 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.129 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.129 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.188 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.189 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.190 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.190 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.191 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:36:07 compute-0 python3.9[357045]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:36:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802048944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.752 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
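
The storage probe above is nova's resource tracker shelling out to ceph df as client.openstack (about half a second per call in this run) and parsing the JSON. A minimal sketch of the same call, assuming ceph df's JSON layout with a top-level stats block:

    import json
    import subprocess

    # The exact command the journal shows nova running.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # assumed field names below
    print(stats["total_bytes"], stats["total_avail_bytes"])
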
Dec 05 01:36:07 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1802048944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.281 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.282 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4542MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.283 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.283 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:36:08 compute-0 python3.9[357217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.410 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:36:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 05 01:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717515268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.936 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.945 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.967 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
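
The inventory line above fixes the host's schedulable capacity once reservations and allocation ratios apply; placement treats (total - reserved) * allocation_ratio as the ceiling per resource class. Working that through with the logged values:

    # Effective capacity implied by the inventory record above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1
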
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.971 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.972 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:36:08 compute-0 ceph-mon[192914]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 05 01:36:08 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2717515268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:36:09 compute-0 python3.9[357313]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:10 compute-0 python3.9[357465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:10 compute-0 python3.9[357541]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:10 compute-0 ceph-mon[192914]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:11 compute-0 python3.9[357691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:12 compute-0 python3.9[357767]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:12 compute-0 ceph-mon[192914]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:13 compute-0 python3.9[357917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:14 compute-0 python3.9[357993]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:14 compute-0 ceph-mon[192914]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:15 compute-0 sudo[358143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpwpecfogiixcilrfkijeygjrcmxxmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898574.4766862-393-189133778549825/AnsiballZ_file.py'
Dec 05 01:36:15 compute-0 sudo[358143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:15 compute-0 podman[358145]: 2025-12-05 01:36:15.198081714 +0000 UTC m=+0.111784186 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:15 compute-0 podman[358147]: 2025-12-05 01:36:15.230476835 +0000 UTC m=+0.125219463 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 05 01:36:15 compute-0 podman[358146]: 2025-12-05 01:36:15.232826381 +0000 UTC m=+0.133361122 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:36:15 compute-0 python3.9[358151]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:15 compute-0 podman[358206]: 2025-12-05 01:36:15.311406082 +0000 UTC m=+0.079581210 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 01:36:15 compute-0 sudo[358143]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:15 compute-0 podman[358207]: 2025-12-05 01:36:15.357213141 +0000 UTC m=+0.122938200 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 01:36:16 compute-0 sudo[358401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nonpfekwvloaiwebpplhcmzekmeeksmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898575.6257155-401-229436336575423/AnsiballZ_file.py'
Dec 05 01:36:16 compute-0 sudo[358401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:36:16
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control']
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:16 compute-0 python3.9[358403]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:16 compute-0 sudo[358401]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:36:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:17 compute-0 ceph-mon[192914]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:36:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:17 compute-0 podman[358480]: 2025-12-05 01:36:17.746263537 +0000 UTC m=+0.146654216 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Dec 05 01:36:17 compute-0 sudo[358572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdjepvixnaydxowyztkruxhbroitqwvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898577.3631687-409-160562176454278/AnsiballZ_file.py'
Dec 05 01:36:18 compute-0 sudo[358572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:18 compute-0 python3.9[358574]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:36:18 compute-0 sudo[358572]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec 05 01:36:18 compute-0 ceph-mon[192914]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec 05 01:36:20 compute-0 sudo[358725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibzaptwurrpzewcomroaycvtsvtcffbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898578.711046-417-232766073354093/AnsiballZ_systemd_service.py'
Dec 05 01:36:20 compute-0 sudo[358725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:20 compute-0 python3.9[358727]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:36:20 compute-0 sudo[358725]: pam_unix(sudo:session): session closed for user root
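
The systemd_service task just logged (enabled=True, state=started for podman.socket) is the module form of systemctl enable --now. A sketch of the same operation from Python, to be run as root:

    import subprocess

    # Enable and start podman.socket in one step, then confirm it is
    # active; mirrors the systemd_service invocation logged above.
    subprocess.run(["systemctl", "enable", "--now", "podman.socket"],
                   check=True)
    print(subprocess.check_output(
        ["systemctl", "is-active", "podman.socket"], text=True).strip())
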
Dec 05 01:36:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec 05 01:36:20 compute-0 ceph-mon[192914]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec 05 01:36:21 compute-0 sudo[358754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:21 compute-0 sudo[358754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:21 compute-0 sudo[358754]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:21 compute-0 sudo[358803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:36:21 compute-0 sudo[358803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:21 compute-0 sudo[358803]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:21 compute-0 sudo[358856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:21 compute-0 sudo[358856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:21 compute-0 sudo[358856]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:21 compute-0 sudo[358904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:36:21 compute-0 sudo[358904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:21 compute-0 sudo[358979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypeykxiannzlkewmkjajxfbdluvtbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898581.0263586-426-96886614110131/AnsiballZ_stat.py'
Dec 05 01:36:21 compute-0 sudo[358979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:21 compute-0 python3.9[358988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:21 compute-0 sudo[358979]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:21 compute-0 sudo[358904]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f08f4fed-4f98-4588-86e4-d5389892212b does not exist
Dec 05 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bba1a3e2-c0cf-4923-8a18-577f5518622d does not exist
Dec 05 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d46dd09-cbb4-4240-8f4c-fa2e88bbdde4 does not exist
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
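
Each handle_command line above is the mgr (entity mgr.compute-0.afshmv, evidently driven by its cephadm module) sending the mon a structured mon_command; the repeated from=... lines without the log_channel prefix appear to be the same audit entries being committed to the cluster log. For orientation, these map onto ordinary ceph CLI calls, roughly as follows (a sketch, assuming an admin keyring is configured; the JSON forms are what the log shows):

    import subprocess

    # CLI equivalents of the audited mon_commands:
    for args in (
        ["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        subprocess.run(args, check=True)
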
Dec 05 01:36:22 compute-0 sudo[359060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:22 compute-0 sudo[359060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:22 compute-0 sudo[359060]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 sudo[359113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzggimimscvxydfascpgrrhptvtxvjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898581.0263586-426-96886614110131/AnsiballZ_file.py'
Dec 05 01:36:22 compute-0 sudo[359113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:22 compute-0 sudo[359112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:36:22 compute-0 sudo[359112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:22 compute-0 sudo[359112]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 sudo[359141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:22 compute-0 sudo[359141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:22 compute-0 sudo[359141]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 podman[359139]: 2025-12-05 01:36:22.3572254 +0000 UTC m=+0.096853995 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41)
Dec 05 01:36:22 compute-0 python3.9[359126]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:36:22 compute-0 sudo[359113]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:22 compute-0 sudo[359186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:36:22 compute-0 sudo[359186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:22 compute-0 sudo[359309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjqpvqsmczidfpfcqihgghhtvwymrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898581.0263586-426-96886614110131/AnsiballZ_stat.py'
Dec 05 01:36:22 compute-0 sudo[359309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:22 compute-0 python3.9[359311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:22 compute-0 sudo[359309]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:22 compute-0 podman[359326]: 2025-12-05 01:36:22.987612514 +0000 UTC m=+0.083465539 container create 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:22.957641211 +0000 UTC m=+0.053494286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:23 compute-0 systemd[1]: Started libpod-conmon-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope.
Dec 05 01:36:23 compute-0 ceph-mon[192914]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.137367027 +0000 UTC m=+0.233220152 container init 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.153141441 +0000 UTC m=+0.248994476 container start 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.161072354 +0000 UTC m=+0.256925469 container attach 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:36:23 compute-0 zen_greider[359356]: 167 167
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.168647567 +0000 UTC m=+0.264500602 container died 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:36:23 compute-0 systemd[1]: libpod-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope: Deactivated successfully.
Dec 05 01:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c224bfde926a51a049d740796ef69aeae3823c258cc3b598175fcc27e4503d-merged.mount: Deactivated successfully.
Dec 05 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.248103782 +0000 UTC m=+0.343956847 container remove 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:36:23 compute-0 systemd[1]: libpod-conmon-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope: Deactivated successfully.
Dec 05 01:36:23 compute-0 sudo[359436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chargpykuxoeptffumbjnmsnlmpxudgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898581.0263586-426-96886614110131/AnsiballZ_file.py'
Dec 05 01:36:23 compute-0 sudo[359436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.514102275 +0000 UTC m=+0.084320983 container create 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.478495973 +0000 UTC m=+0.048714731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:23 compute-0 systemd[1]: Started libpod-conmon-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope.
Dec 05 01:36:23 compute-0 python3.9[359442]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:36:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:23 compute-0 sudo[359436]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.703071041 +0000 UTC m=+0.273289799 container init 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.727873008 +0000 UTC m=+0.298091686 container start 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.733377842 +0000 UTC m=+0.303596610 container attach 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:36:24 compute-0 podman[359571]: 2025-12-05 01:36:24.731607924 +0000 UTC m=+0.143254591 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
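
The health_status events (here for node_exporter, earlier for openstack_network_exporter) come from podman's healthcheck timer: config_data above shows edpm_ansible mounting /var/lib/openstack/healthchecks/<name> into the container as /openstack and setting the test command accordingly. The same check can be run on demand; a sketch:

    import subprocess

    # "podman healthcheck run" executes the container's configured
    # test once; exit code 0 means healthy, nonzero means unhealthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "node_exporter"]).returncode
    print("healthy" if rc == 0 else "unhealthy")
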
Dec 05 01:36:24 compute-0 sudo[359653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwvfnhhvnuzdpgutcujojxbhgyemeezm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898584.0926647-448-192944887500996/AnsiballZ_container_config_data.py'
Dec 05 01:36:24 compute-0 sudo[359653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:24 compute-0 ceph-mon[192914]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:24 compute-0 musing_hofstadter[359459]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:36:24 compute-0 musing_hofstadter[359459]: --> relative data size: 1.0
Dec 05 01:36:24 compute-0 musing_hofstadter[359459]: --> All data devices are unavailable
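
"All data devices are unavailable" from ceph-volume lvm batch appears expected here rather than fatal: the three LVs passed on the command line already carry ceph.osd_id tags from an earlier prepare, so batch has nothing to create (note the --config-json - in the 01:36:22 sudo line, meaning the ceph.conf and keyring were fed to cephadm on stdin). cephadm accordingly falls back to ceph-volume lvm list (the 01:36:25 invocation below) to reconcile the OSDs that already exist; a parsing sketch for that list follows its closing brace.
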
Dec 05 01:36:25 compute-0 systemd[1]: libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Deactivated successfully.
Dec 05 01:36:25 compute-0 systemd[1]: libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Consumed 1.222s CPU time.
Dec 05 01:36:25 compute-0 conmon[359459]: conmon 3fb39641a19f0a0390f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope/container/memory.events
Dec 05 01:36:25 compute-0 podman[359444]: 2025-12-05 01:36:25.011031385 +0000 UTC m=+1.581250053 container died 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 01:36:25 compute-0 python3.9[359657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 05 01:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857-merged.mount: Deactivated successfully.
Dec 05 01:36:25 compute-0 sudo[359653]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:25 compute-0 podman[359444]: 2025-12-05 01:36:25.121075261 +0000 UTC m=+1.691293929 container remove 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:36:25 compute-0 systemd[1]: libpod-conmon-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Deactivated successfully.
Dec 05 01:36:25 compute-0 sudo[359186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:25 compute-0 sudo[359693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:25 compute-0 sudo[359693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:25 compute-0 sudo[359693]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:25 compute-0 sudo[359725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:36:25 compute-0 sudo[359725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:25 compute-0 sudo[359725]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:25 compute-0 sudo[359779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:25 compute-0 sudo[359779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:25 compute-0 sudo[359779]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:25 compute-0 sudo[359827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:36:25 compute-0 sudo[359827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:26 compute-0 sudo[359955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqjoehegoiaucexfthkvpckidfuoefpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898585.405029-457-24885302475788/AnsiballZ_container_config_hash.py'
Dec 05 01:36:26 compute-0 sudo[359955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.264874848 +0000 UTC m=+0.075052683 container create 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.232668202 +0000 UTC m=+0.042846117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:26 compute-0 systemd[1]: Started libpod-conmon-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope.
Dec 05 01:36:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
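
The pg_autoscaler arithmetic above is consistent with pg_target ≈ usage_ratio × bias × (target PGs per OSD × number of OSDs), here 100 × 3 = 300, before quantizing to a power of two; that is a reading of the logged numbers, not a full statement of the autoscaler, which also applies clamps. A quick check against two of the lines:

    # Reproduce two logged pg targets, assuming an effective total of
    # 100 target PGs per OSD across 3 OSDs (inferred, but matching).
    TOTAL = 100 * 3

    for pool, usage, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, usage * bias * TOTAL)
    # .mgr ~ 0.0021557..., cephfs.cephfs.meta ~ 0.00061047...,
    # matching the "pg target" values logged above.
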
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.404364062 +0000 UTC m=+0.214541967 container init 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:36:26 compute-0 python3.9[359962]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.415063583 +0000 UTC m=+0.225241458 container start 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:36:26 compute-0 nostalgic_wing[359985]: 167 167
Dec 05 01:36:26 compute-0 systemd[1]: libpod-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope: Deactivated successfully.
Dec 05 01:36:26 compute-0 conmon[359985]: conmon 421691e592d493159a1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope/container/memory.events
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.424330363 +0000 UTC m=+0.234508228 container attach 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.424733115 +0000 UTC m=+0.234910970 container died 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:36:26 compute-0 sudo[359955]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b1c7bfd312d9cbee8ad177aa53628e8cd342f1c76e9716b8b3dc0c4dc5adeb-merged.mount: Deactivated successfully.
Dec 05 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.489088385 +0000 UTC m=+0.299266210 container remove 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:36:26 compute-0 systemd[1]: libpod-conmon-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope: Deactivated successfully.
Dec 05 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.762824826 +0000 UTC m=+0.083647794 container create 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.729121638 +0000 UTC m=+0.049944666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:26 compute-0 systemd[1]: Started libpod-conmon-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope.
Dec 05 01:36:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:26 compute-0 ceph-mon[192914]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.971590049 +0000 UTC m=+0.292413077 container init 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:26.999951417 +0000 UTC m=+0.320774385 container start 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.006283065 +0000 UTC m=+0.327106033 container attach 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:36:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:27 compute-0 sudo[360181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexsrrodqjctnypezvpuhptityzcdgmh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898586.90128-467-140504334047869/AnsiballZ_edpm_container_manage.py'
Dec 05 01:36:27 compute-0 sudo[360181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]: {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     "0": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "devices": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "/dev/loop3"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             ],
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_name": "ceph_lv0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_size": "21470642176",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "name": "ceph_lv0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "tags": {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_name": "ceph",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.crush_device_class": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.encrypted": "0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_id": "0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.vdo": "0"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             },
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "vg_name": "ceph_vg0"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         }
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     ],
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     "1": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "devices": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "/dev/loop4"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             ],
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_name": "ceph_lv1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_size": "21470642176",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "name": "ceph_lv1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "tags": {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_name": "ceph",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.crush_device_class": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.encrypted": "0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_id": "1",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.vdo": "0"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             },
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "vg_name": "ceph_vg1"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         }
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     ],
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     "2": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "devices": [
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "/dev/loop5"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             ],
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_name": "ceph_lv2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_size": "21470642176",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "name": "ceph_lv2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "tags": {
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.cluster_name": "ceph",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.crush_device_class": "",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.encrypted": "0",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osd_id": "2",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:                 "ceph.vdo": "0"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             },
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "type": "block",
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:             "vg_name": "ceph_vg2"
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:         }
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]:     ]
Dec 05 01:36:27 compute-0 sleepy_haibt[360070]: }
Dec 05 01:36:27 compute-0 systemd[1]: libpod-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope: Deactivated successfully.
Dec 05 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.791636437 +0000 UTC m=+1.112459425 container died 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03-merged.mount: Deactivated successfully.
Dec 05 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.902730892 +0000 UTC m=+1.223553840 container remove 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:36:27 compute-0 systemd[1]: libpod-conmon-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope: Deactivated successfully.
Dec 05 01:36:27 compute-0 sudo[359827]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:28 compute-0 sudo[360197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:28 compute-0 sudo[360197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:28 compute-0 sudo[360197]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:28 compute-0 python3[360184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:36:28 compute-0 sudo[360222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:36:28 compute-0 sudo[360222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:28 compute-0 sudo[360222]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:28 compute-0 sudo[360259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:28 compute-0 sudo[360259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:28 compute-0 sudo[360259]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:28 compute-0 python3[360184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d",
                                                     "Digest": "sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac",
                                                     "RepoTags": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T05:11:05.921630712Z",
                                                     "Config": {
                                                          "User": "root",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.4",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 601995467,
                                                     "VirtualSize": 601995467,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/586629c35ab12bf3c21aa8405321e52ee8dc3eb91fe319ec2e2bcffcf2f07750/diff:/var/lib/containers/storage/overlay/b726b38a9994fb8597c31b02de6a7067e1e6010e18192135f063d07cbad1efce/diff:/var/lib/containers/storage/overlay/816b6cf07292074c7d459b3269e12ec5823a680369545863b4ff246f9cf897b1/diff:/var/lib/containers/storage/overlay/9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c",
                                                               "sha256:4b40c712f1bd18fdb2c50c6adb38e6952f9d174873260f311696915f181f9947",
                                                               "sha256:eaeeda82071109aa7bb6c3500cc7a126797ce0a53bc0f8828831aba88104203b",
                                                               "sha256:c58c65fadb00ed08655f756d68fed13f115faec2bc2384f51ce46e18334fe2ae",
                                                               "sha256:2f6d51b7d12dca1a77173f044cfb4b6a796a560f1015e515fa8ee8a14f36c103"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.4",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 10 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "root",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T03:00:15.634483436Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:c435edaaf9833341bf9650d5dcfda033191519e1d9c91ecfa082699fd3e149e4 in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T03:00:15.634561379Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T03:00:18.392267297Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.682983025Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream10",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683002525Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683016626Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683029656Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683039096Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:54.683051027Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:55.032223959Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:03:55.512889527Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:06.648921904Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.17980807Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.543770896Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/pki/tls/cert.pem\" ]; then ln -s /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/pki/tls/cert.pem; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:10.845951852Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.140582401Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.595535873Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:11.873565728Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:12.161351256Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:12.439519965Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.03816645Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.326571045Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.607978165Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:13.880788572Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.152583884Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.423855069Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.694844119Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:14.965588963Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:15.237099026Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:17.576994187Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:17.848042619Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:18.119965201Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:19.072728213Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.222969012Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223021953Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223036743Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:20.223046834Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:21.244606139Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:04:59.941136358Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-base:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:05:21.488793949Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:05:28.702733012Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:25.003646183Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-os:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:26.030572464Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:42.357527997Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:07:44.698286094Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:10:54.136243993Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-base:3a7876c5b6a4ff2e2bc50e11e9db5f42",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:11:05.918804558Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T05:11:07.91684183Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"3a7876c5b6a4ff2e2bc50e11e9db5f42\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
                                                     ]
                                                }
                                           ]
                                           : quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 05 01:36:28 compute-0 sudo[360306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:36:28 compute-0 sudo[360306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:28 compute-0 sudo[360181]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:28 compute-0 ceph-mon[192914]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.010441894 +0000 UTC m=+0.095212730 container create 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:28.966497288 +0000 UTC m=+0.051268174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:29 compute-0 systemd[1]: Started libpod-conmon-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope.
Dec 05 01:36:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.156998987 +0000 UTC m=+0.241769833 container init 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.178459051 +0000 UTC m=+0.263229897 container start 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:36:29 compute-0 clever_moser[360487]: 167 167
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.186073305 +0000 UTC m=+0.270844201 container attach 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:36:29 compute-0 systemd[1]: libpod-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope: Deactivated successfully.
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.187627619 +0000 UTC m=+0.272398475 container died 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5a55e35b0437a5c5ccaedb5a86f6ab20b02d4b06b80a6f6aeccf376399260de-merged.mount: Deactivated successfully.
Dec 05 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.248656205 +0000 UTC m=+0.333427011 container remove 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:36:29 compute-0 systemd[1]: libpod-conmon-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope: Deactivated successfully.
Dec 05 01:36:29 compute-0 sudo[360581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjpdcurmitxstsxzceahpsoddfhultk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898588.8764384-475-23680369516891/AnsiballZ_stat.py'
Dec 05 01:36:29 compute-0 sudo[360581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.465570968 +0000 UTC m=+0.071429501 container create b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.446436619 +0000 UTC m=+0.052295162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:36:29 compute-0 systemd[1]: Started libpod-conmon-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope.
Dec 05 01:36:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:29 compute-0 python3.9[360589]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.627667728 +0000 UTC m=+0.233526341 container init b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.65087336 +0000 UTC m=+0.256731933 container start b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.657527128 +0000 UTC m=+0.263385691 container attach b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:36:29 compute-0 sudo[360581]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:29 compute-0 podman[158197]: time="2025-12-05T01:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44147 "" "Go-http-client/1.1"
Dec 05 01:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8532 "" "Go-http-client/1.1"
Dec 05 01:36:30 compute-0 happy_austin[360598]: {
Dec 05 01:36:30 compute-0 happy_austin[360598]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_id": 0,
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "type": "bluestore"
Dec 05 01:36:30 compute-0 happy_austin[360598]:     },
Dec 05 01:36:30 compute-0 happy_austin[360598]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_id": 1,
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "type": "bluestore"
Dec 05 01:36:30 compute-0 happy_austin[360598]:     },
Dec 05 01:36:30 compute-0 happy_austin[360598]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_id": 2,
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:36:30 compute-0 happy_austin[360598]:         "type": "bluestore"
Dec 05 01:36:30 compute-0 happy_austin[360598]:     }
Dec 05 01:36:30 compute-0 happy_austin[360598]: }
Dec 05 01:36:30 compute-0 systemd[1]: libpod-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Deactivated successfully.
Dec 05 01:36:30 compute-0 podman[360579]: 2025-12-05 01:36:30.821564854 +0000 UTC m=+1.427423427 container died b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:36:30 compute-0 systemd[1]: libpod-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Consumed 1.152s CPU time.
Dec 05 01:36:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7-merged.mount: Deactivated successfully.
Dec 05 01:36:30 compute-0 podman[360579]: 2025-12-05 01:36:30.922615596 +0000 UTC m=+1.528474139 container remove b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:36:30 compute-0 systemd[1]: libpod-conmon-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Deactivated successfully.
Dec 05 01:36:30 compute-0 sudo[360306]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:30 compute-0 ceph-mon[192914]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:36:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:36:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 49845ac3-b80a-4ab5-9614-e1be53bc0c77 does not exist
Dec 05 01:36:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev af177979-a210-4622-8444-eb627b119605 does not exist
Dec 05 01:36:31 compute-0 sudo[360766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:36:31 compute-0 sudo[360766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:31 compute-0 sudo[360766]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:31 compute-0 sudo[360821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lihsrlienyjtpmqouftuxielffbggydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898590.5934155-484-119641311392533/AnsiballZ_file.py'
Dec 05 01:36:31 compute-0 sudo[360821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:31 compute-0 sudo[360820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:36:31 compute-0 sudo[360820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:36:31 compute-0 sudo[360820]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:31 compute-0 python3.9[360834]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:31 compute-0 sudo[360821]: pam_unix(sudo:session): session closed for user root
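The zuul sudo lines show Ansible's become wrapper at work: the echoed BECOME-SUCCESS marker confirms privilege escalation, and AnsiballZ_file.py is the packed ansible.builtin.file module whose expanded arguments appear in the python3.9 record above. An ad-hoc sketch of the equivalent call (the logged run is playbook-driven):

    ansible localhost -b -m ansible.builtin.file \
        -a "path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent"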
Dec 05 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
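All four exporter errors trace back to missing control sockets: openstack_network_exporter queries ovs-vswitchd, ovsdb-server, and ovn-northd over their unixctl sockets, and none are reachable here (ovn-northd normally runs on controller nodes rather than computes, so part of this is expected). A quick check, assuming the default runtime directory:

    ls /run/openvswitch/    # expect db.sock and ovs-vswitchd.<pid>.ctl on a healthy node
    ovs-appctl version      # talks to ovs-vswitchd's control socket by default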
Dec 05 01:36:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:36:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:32 compute-0 ceph-mon[192914]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:33 compute-0 sudo[360996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmubjxbowgjmqrqohoqgvfhikvoztgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898591.4651084-484-229724654608342/AnsiballZ_copy.py'
Dec 05 01:36:33 compute-0 sudo[360996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:33 compute-0 python3.9[360998]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898591.4651084-484-229724654608342/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:33 compute-0 sudo[360996]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:33 compute-0 podman[360999]: 2025-12-05 01:36:33.714322331 +0000 UTC m=+0.113477573 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 01:36:34 compute-0 sudo[361089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuybcmkalsnvaapytrzbpidwvszafwlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898591.4651084-484-229724654608342/AnsiballZ_systemd.py'
Dec 05 01:36:34 compute-0 sudo[361089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:34 compute-0 python3.9[361091]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:36:34 compute-0 sudo[361089]: pam_unix(sudo:session): session closed for user root
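Taken together, the last two tasks install the unit file (ansible-copy with mode=0644, owner/group root) and then enable and start it (ansible-systemd with enabled=True, state=started). A shell sketch of the same rollout, using a placeholder source file:

    install -o root -g root -m 0644 ./edpm_ceilometer_agent_compute.service \
        /etc/systemd/system/edpm_ceilometer_agent_compute.service
    systemctl enable --now edpm_ceilometer_agent_compute.service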
Dec 05 01:36:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:34 compute-0 ceph-mon[192914]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:35 compute-0 sudo[361243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwglbpzmhcidwormvvkleedvptjuaywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898594.693702-504-218030439735514/AnsiballZ_systemd.py'
Dec 05 01:36:35 compute-0 sudo[361243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:35 compute-0 python3.9[361245]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:36:35 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.742 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.844 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 05 01:36:35 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec 05 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.860 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec 05 01:36:35 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec 05 01:36:36 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:36:36 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Consumed 4.004s CPU time.
Dec 05 01:36:36 compute-0 podman[361249]: 2025-12-05 01:36:36.076046659 +0000 UTC m=+0.427590750 container died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.timer: Deactivated successfully.
Dec 05 01:36:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec 05 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed to open /run/systemd/transient/01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: No such file or directory
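Podman healthchecks run as transient systemd timer/service pairs named after the container ID (the 01c29469...-57d4f94636a0dba8 units above); each timer firing executes "podman healthcheck run". The "Failed to open /run/systemd/transient/..." message looks like a harmless teardown race: the transient unit file had already been removed along with the container. Manual equivalents of what the timer does:

    podman healthcheck run ceilometer_agent_compute
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute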
Dec 05 01:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-userdata-shm.mount: Deactivated successfully.
Dec 05 01:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1-merged.mount: Deactivated successfully.
Dec 05 01:36:36 compute-0 podman[361249]: 2025-12-05 01:36:36.164359564 +0000 UTC m=+0.515903615 container cleanup 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 05 01:36:36 compute-0 podman[361249]: ceilometer_agent_compute
Dec 05 01:36:36 compute-0 podman[361277]: ceilometer_agent_compute
Dec 05 01:36:36 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 05 01:36:36 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 05 01:36:36 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 05 01:36:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec 05 01:36:36 compute-0 podman[361288]: 2025-12-05 01:36:36.571812746 +0000 UTC m=+0.248724218 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + sudo -E kolla_set_configs
Dec 05 01:36:36 compute-0 podman[361288]: 2025-12-05 01:36:36.62278959 +0000 UTC m=+0.299701002 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:36:36 compute-0 podman[361288]: ceilometer_agent_compute
Dec 05 01:36:36 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 05 01:36:36 compute-0 sudo[361308]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:36:36 compute-0 sudo[361308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
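The "unable to send audit message" complaints are sudo failing to write to the kernel audit socket, which requires CAP_AUDIT_WRITE; the container evidently runs without that capability, so the messages are cosmetic. One way to confirm what the container actually holds:

    podman inspect --format '{{.EffectiveCaps}}' ceilometer_agent_compute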
Dec 05 01:36:36 compute-0 sudo[361243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:36 compute-0 podman[361309]: 2025-12-05 01:36:36.747620112 +0000 UTC m=+0.108083042 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Validating config file
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying service configuration files
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Writing out command to execute
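The Deleting/Copying/Setting-permission triplets are kolla_set_configs executing the COPY_ALWAYS strategy: for each entry in /var/lib/kolla/config_files/config.json it removes the destination, copies the source in, reapplies ownership and mode, and finally writes the container command to /run_command. A hypothetical sketch of the config.json shape that would drive this output (source/dest paths from the log; owner and perm values assumed):

    cat /var/lib/kolla/config_files/config.json
    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"}
        ]
    }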
Dec 05 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Failed with result 'exit-code'.
Dec 05 01:36:36 compute-0 sudo[361308]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: ++ cat /run_command
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + ARGS=
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + sudo kolla_copy_cacerts
Dec 05 01:36:36 compute-0 sudo[361346]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: sudo: unable to send audit message: Operation not permitted
Dec 05 01:36:36 compute-0 sudo[361346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:36:36 compute-0 sudo[361346]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + [[ ! -n '' ]]
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + . kolla_extend_start
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + umask 0022
Dec 05 01:36:36 compute-0 ceilometer_agent_compute[361302]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
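The "+" lines are bash xtrace from kolla_start inside the container; the visible tail of that script reduces to roughly:

    sudo -E kolla_set_configs    # populate /etc/ceilometer per config.json
    CMD=$(cat /run_command)      # command string kolla_set_configs wrote out
    sudo kolla_copy_cacerts      # refresh the container's CA trust store
    . kolla_extend_start         # image-specific startup hook
    umask 0022
    exec $CMD                    # replace the shell with ceilometer-polling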
Dec 05 01:36:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:36 compute-0 ceph-mon[192914]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:37 compute-0 sudo[361481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojqqpmmlyxxelijnhjerpptdacorgji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898596.9927173-512-113189387656390/AnsiballZ_stat.py'
Dec 05 01:36:37 compute-0 sudo[361481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:37 compute-0 python3.9[361483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:37 compute-0 sudo[361481]: pam_unix(sudo:session): session closed for user root
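The long DEBUG block that follows is cotyledon dumping the full effective oslo.config set as ceilometer-polling starts. Values are assembled from /etc/ceilometer/ceilometer.conf plus every file in /etc/ceilometer/ceilometer.conf.d (the 01-custom and 02-host-specific files copied in above), with config-dir files parsed in name order so later ones override earlier ones. The polling agent reads its own options from the [polling] group, so where a [polling] value differs from its DEFAULT counterpart (enable_prometheus_exporter, the prometheus TLS settings) the [polling] one is the effective setting. A hypothetical extra override would go in the same directory:

    cat > /etc/ceilometer/ceilometer.conf.d/99-local-override.conf <<'EOF'
    [polling]
    # hypothetical local override; sorts after 01- and 02- so it wins
    batch_size = 100
    EOF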
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.980 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.980 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
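The values printed as **** above (publisher.telemetry_secret, rgw_admin_credentials.access_key, rgw_admin_credentials.secret_key, and coordination.backend_url) are not missing; they are options registered with secret=True, which oslo.config masks whenever log_opt_values() dumps the effective configuration. A minimal sketch of the mechanism, using an illustrative registration rather than the agent's actual option-definition code:

    from oslo_config import cfg
    import logging

    CONF = cfg.ConfigOpts()
    # secret=True makes log_opt_values() print '****' instead of the value
    CONF.register_opts(
        [cfg.StrOpt('telemetry_secret', secret=True, default='s3cr3t')],
        group='publisher')
    CONF(args=[])

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)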
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.006 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
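The block above repeats, for the heartbeat child (process 12, started at 01:36:38.006), the same effective configuration the parent dumped. Worth noting is how the [polling] group overrides the top-level defaults logged earlier in the same dump: enable_prometheus_exporter flips from False to True, prometheus_listen_addresses moves from 127.0.0.1:9101 to [::]:9101 with TLS enabled, enable_notifications flips from True to False, and heartbeat_socket_dir is set to /var/lib/ceilometer. A ceilometer.conf fragment consistent with those overridden values (a sketch reconstructed from the dump, not the deployment's actual file):

    [polling]
    enable_prometheus_exporter = True
    enable_notifications = False
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = True
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key
    heartbeat_socket_dir = /var/lib/ceilometer

Once polling starts, the exporter can be spot-checked locally with curl -gks 'https://[::1]:9101/metrics' (-k is assumed necessary because the client may not trust the service certificate under /etc/ceilometer/tls).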
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.025 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.029 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.031 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.048 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
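Process 14 is the polling worker itself; with libvirt_uri empty and libvirt_type = kvm, the inspector falls back to the default qemu:///system endpoint, consistent with hypervisor_inspector = libvirt in the dumps above. The same connection can be verified by hand with virsh -c qemu:///system list --all, which should show the instances the agent is expected to poll.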
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.067 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.069 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.069 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
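No dynamic pollster definitions were found, so this agent runs with the built-in compute pollsters only. YAML definitions dropped into /etc/ceilometer/pollsters.d are loaded at agent start; an illustrative definition in the upstream dynamic-pollster format (the meter name, attribute, and URL path here are made-up examples, not part of this deployment):

    ---
    - name: "dynamic.image.size"
      sample_type: "gauge"
      unit: "B"
      value_attribute: "size"
      endpoint_type: "image"
      url_path: "v2/images"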
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
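The block ending at the line of asterisks above is oslo.config's standard option dump. A minimal sketch of how such a dump is produced, using only public oslo.config APIs (the handful of options shown are taken from the log; ceilometer registers many more, and secret options such as publisher.telemetry_secret are masked as '****'):

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('cotyledon.oslo_config_glue')

    CONF = cfg.ConfigOpts()
    # Two ungrouped options seen in the dump; defaults chosen to match the log.
    CONF.register_opts([
        cfg.StrOpt('hypervisor_inspector', default='libvirt'),
        cfg.StrOpt('libvirt_type', default='kvm'),
    ])
    # Grouped options are logged as '<group>.<name>', e.g. polling.batch_size.
    CONF.register_opts([cfg.IntOpt('batch_size', default=50)], group='polling')
    # Options registered with secret=True are printed as '****'.
    CONF.register_opts([cfg.StrOpt('telemetry_secret', secret=True)],
                       group='publisher')

    CONF([], project='ceilometer')
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option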
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
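"Run service AgentManager(0) [14]" is cotyledon's worker bookkeeping: worker id 0 running as pid 14 inside the container, now parked in wait_forever. A sketch of the service wiring, assuming the standard cotyledon API (the AgentManager body here is a stand-in, not ceilometer's class):

    import cotyledon
    from cotyledon import oslo_config_glue
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF([], project='ceilometer')

    class AgentManager(cotyledon.Service):
        # cotyledon passes a worker_id (the 0 in the log line above); the
        # service then runs until the manager asks it to terminate.
        def run(self):
            pass  # the discovery + polling loop would live here

    sm = cotyledon.ServiceManager()
    # oslo_config_glue is what emitted the option dump above: it hooks the
    # ServiceManager so every option value is logged when services start.
    oslo_config_glue.setup(sm, CONF)
    sm.add(AgentManager, workers=1)
    sm.run()  # blocks; supervises the worker processes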
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.274 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
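The dict logged above is the parsed form of the agent's polling definition file (polling.cfg_file = polling.yaml in the option dump). A reconstruction of YAML that parses to exactly that dict, shown via PyYAML so the equivalence is checkable; the file contents are inferred from the logged dict, not read from the node:

    import yaml  # PyYAML

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    cfg = yaml.safe_load(POLLING_YAML)
    assert cfg == {'sources': [{'name': 'pollsters', 'interval': 120,
                                'meters': ['power.state', 'cpu', 'memory.usage',
                                           'disk.*', 'network.*']}]}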
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.310 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.311 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
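With libvirt_uri empty and libvirt_type = kvm in the option dump, the inspector falls back to qemu:///system, and local_instances discovery enumerates the domains on this hypervisor. A minimal sketch of that step with the python libvirt bindings (illustrative, not ceilometer's actual discovery code); an empty domain list is what produces the "Skip pollster ..., no resources found this cycle" lines that follow:

    import libvirt  # python3-libvirt bindings

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # listAllDomains() returns running and defined-but-stopped domains;
        # on a compute node with no instances it returns [].
        domains = conn.listAllDomains()
        print([dom.name() for dom in domains])
    finally:
        conn.close()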
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
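
The lines above are one complete ceilometer polling cycle on an idle hypervisor: AgentManager runs the local_instances discovery for each pollster, finds no instances, skips every meter, and then marks each pollster finished. A minimal Python sketch of that control flow, assuming illustrative names (Pollster, discover, run_polling_task are not ceilometer's actual API):

class Pollster:
    def __init__(self, name, discovery="local_instances"):
        self.name = name
        self.discovery = discovery

def discover(method):
    # Stand-in for AgentManager.discover(); an idle compute node with no
    # running instances yields an empty list, which produces the
    # "Skip pollster ..." lines above.
    return []

def run_polling_task(pollsters):
    for p in pollsters:
        resources = discover(p.discovery)
        if not resources:
            print(f"Skip pollster {p.name}, no resources found this cycle")
            continue
        # On a busy hypervisor, sample collection for the discovered
        # resources would run here.
    for p in pollsters:
        print(f"Finished processing pollster [{p.name}].")

run_polling_task([Pollster("cpu"), Pollster("memory.usage")])
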
Dec 05 01:36:38 compute-0 sudo[361569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybmscszhtxywbzofhnvlwlxfppghder ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898596.9927173-512-113189387656390/AnsiballZ_file.py'
Dec 05 01:36:38 compute-0 sudo[361569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:38 compute-0 python3.9[361574]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:36:38 compute-0 sudo[361569]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:38 compute-0 ceph-mon[192914]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:39 compute-0 sudo[361724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghootmkgmafyewqunlubakeacjojdxxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898599.0260925-526-183013438176006/AnsiballZ_container_config_data.py'
Dec 05 01:36:39 compute-0 sudo[361724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:39 compute-0 python3.9[361726]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 05 01:36:39 compute-0 sudo[361724]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:40 compute-0 sudo[361876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzclznvgullraqtemcajjuewzuzbugea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898600.160827-535-6097785177493/AnsiballZ_container_config_hash.py'
Dec 05 01:36:40 compute-0 sudo[361876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:40 compute-0 ceph-mon[192914]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:40 compute-0 python3.9[361878]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:36:40 compute-0 sudo[361876]: pam_unix(sudo:session): session closed for user root
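
The ansible-container_config_data and ansible-container_config_hash invocations above read the rendered config under /var/lib/openstack/config/telemetry and derive a digest that later decides whether the container must be recreated. A hypothetical sketch of that idea (config_hash is illustrative, not the edpm module itself):

import hashlib
from pathlib import Path

def config_hash(config_dir):
    # Hash file names plus contents in a stable order, so any change to
    # the rendered config yields a new digest and forces a recreate.
    digest = hashlib.sha256()
    for path in sorted(Path(config_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

# e.g. config_hash("/var/lib/openstack/config/telemetry")
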
Dec 05 01:36:41 compute-0 sudo[362028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wubbcvvqjppdukbxfyazzadxoqupvrlo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898601.4128177-545-123901508859275/AnsiballZ_edpm_container_manage.py'
Dec 05 01:36:41 compute-0 sudo[362028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:36:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",
                                                     "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",
                                                     "RepoTags": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",
                                                          "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2022-11-29T19:06:14.987394068Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9100/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/node_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          }
                                                     },
                                                     "Version": "19.03.8",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 23851788,
                                                     "VirtualSize": 23851788,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",
                                                               "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",
                                                               "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2022-10-26T06:30:33.700079457Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "
                                                          },
                                                          {
                                                               "created": "2022-10-26T06:30:33.794221299Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:54.845364304Z",
                                                               "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-15T10:54:55.54866664Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",
                                                               "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.622645057Z",
                                                               "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.810765105Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:13.990897895Z",
                                                               "created_by": "/bin/sh -c #(nop)  ARG OS=linux",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.358293759Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.630644274Z",
                                                               "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.79596292Z",
                                                               "created_by": "/bin/sh -c #(nop)  USER nobody",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2022-11-29T19:06:14.987394068Z",
                                                               "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/prometheus/node-exporter:v1.5.0"
                                                     ]
                                                }
                                           ]
                                           : quay.io/prometheus/node-exporter:v1.5.0
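
The JSON block above is `podman image inspect` output for quay.io/prometheus/node-exporter:v1.5.0, dumped by edpm_container_manage before it recreates the container. A small sketch of extracting the same fields programmatically (the field names are taken directly from the output above):

import json
import subprocess

def inspect_image(name):
    out = subprocess.run(
        ["podman", "image", "inspect", name],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)[0]  # inspect returns a JSON array
    return {
        "id": info["Id"],
        "digest": info["Digest"],
        "tags": info.get("RepoTags", []),
        "entrypoint": info["Config"].get("Entrypoint"),
        "user": info["Config"].get("User"),
    }

# inspect_image("quay.io/prometheus/node-exporter:v1.5.0")
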
Dec 05 01:36:42 compute-0 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec 05 01:36:42 compute-0 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Consumed 6.104s CPU time.
Dec 05 01:36:42 compute-0 podman[362076]: 2025-12-05 01:36:42.67264058 +0000 UTC m=+0.101457445 container died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: Deactivated successfully.
Dec 05 01:36:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec 05 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: No such file or directory
Dec 05 01:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-userdata-shm.mount: Deactivated successfully.
Dec 05 01:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717-merged.mount: Deactivated successfully.
Dec 05 01:36:42 compute-0 podman[362076]: 2025-12-05 01:36:42.75864768 +0000 UTC m=+0.187464515 container cleanup 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Dec 05 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: No such file or directory
Dec 05 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: No such file or directory
Dec 05 01:36:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:42 compute-0 podman[362103]: 2025-12-05 01:36:42.898134454 +0000 UTC m=+0.104080579 container remove 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:42 compute-0 podman[362104]: Error: no container with ID 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a found in database: no such container
Dec 05 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Control process exited, code=exited, status=125/n/a
Dec 05 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 05 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
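
The stop/remove sequence above races with the systemd-managed unit: by the time the unit's own stop command runs, ansible has already removed the container, hence the "no such container" error and the 125 exit status on edpm_node_exporter.service. A teardown sketch that tolerates that race (stop_and_remove is illustrative, not the edpm code):

import subprocess

def stop_and_remove(name):
    # Treat "no such container" as success: another actor (here, the
    # systemd unit or ansible) may already have cleaned up.
    for cmd in (["podman", "stop", name], ["podman", "rm", "--force", name]):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0 and "no such container" not in result.stderr.lower():
            raise RuntimeError(f"{' '.join(cmd)} failed: {result.stderr.strip()}")

# stop_and_remove("node_exporter")
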
Dec 05 01:36:42 compute-0 ceph-mon[192914]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:43 compute-0 podman[362123]: 2025-12-05 01:36:43.016113593 +0000 UTC m=+0.083266564 container create 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:36:43 compute-0 podman[362123]: 2025-12-05 01:36:42.977709522 +0000 UTC m=+0.044862523 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 05 01:36:43 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
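
The `podman create` command above is generated from the config_data dict attached to the container. A simplified, hypothetical rendering of that dict-to-flags mapping (render_create is not the edpm implementation, and several logged flags such as --conmon-pidfile, --healthcheck-command, --label and --log-driver are omitted for brevity):

def render_create(name, cfg):
    args = ["podman", "create", "--name", name]
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={val}"]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]
    if cfg.get("privileged"):
        args += ["--privileged=True"]
    for port in cfg.get("ports", []):
        args += ["--publish", port]
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    # Image name comes last, followed by the container command.
    return args + [cfg["image"]] + cfg.get("command", [])

cfg = {"image": "quay.io/prometheus/node-exporter:v1.5.0", "net": "host",
       "user": "root", "privileged": True, "ports": ["9100:9100"],
       "command": ["--web.disable-exporter-metrics"]}
print(" ".join(render_create("node_exporter", cfg)))
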
Dec 05 01:36:43 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 05 01:36:43 compute-0 systemd[1]: Stopped node_exporter container.
Dec 05 01:36:43 compute-0 systemd[1]: Starting node_exporter container...
Dec 05 01:36:43 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec 05 01:36:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.
Dec 05 01:36:43 compute-0 podman[362133]: 2025-12-05 01:36:43.277801835 +0000 UTC m=+0.226746050 container init 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.315Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.317Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
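
At this point node_exporter is serving metrics over TLS on port 9100, per the --web.config.file passed above. A minimal probe of that endpoint, assuming a hypothetical CA file path and that the certificate under /etc/node_exporter/tls actually covers the compute-0 name:

import ssl
import urllib.request

# The CA path is a placeholder for whatever signed this host's telemetry cert.
ctx = ssl.create_default_context(cafile="/path/to/ca.crt")
with urllib.request.urlopen("https://compute-0:9100/metrics", context=ctx) as resp:
    for line in resp.read().decode().splitlines()[:5]:
        print(line)
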
Dec 05 01:36:43 compute-0 podman[362133]: 2025-12-05 01:36:43.323954203 +0000 UTC m=+0.272898378 container start 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:43 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Dec 05 01:36:43 compute-0 podman[362139]: node_exporter
Dec 05 01:36:43 compute-0 systemd[1]: Started node_exporter container.
Dec 05 01:36:43 compute-0 podman[362170]: 2025-12-05 01:36:43.460556036 +0000 UTC m=+0.114375489 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
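
The container health_status event above is produced by the transient systemd timer invoking the container's healthcheck. Roughly what that amounts to when done by hand (note the inspect path is version-dependent: older podman exposes the status under .State.Healthcheck rather than .State.Health):

import subprocess

def health_status(container):
    # Exit code 0 means healthy, 1 unhealthy; we read the recorded state
    # back rather than relying on the return code.
    subprocess.run(["podman", "healthcheck", "run", container], check=False)
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
        capture_output=True, text=True,
    )
    return out.stdout.strip()  # e.g. "healthy"

# health_status("node_exporter")
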
Dec 05 01:36:43 compute-0 sudo[362028]: pam_unix(sudo:session): session closed for user root
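The health_status events above are podman's native healthchecks: for each container a transient systemd timer periodically invokes /usr/bin/podman healthcheck run <container-id>, which executes the test from config_data ('/openstack/healthcheck node_exporter') inside the container and reports healthy on exit 0. A minimal sketch of the same check, assuming podman is on PATH and reusing the container name from the log (the helper itself is illustrative, not part of edpm_ansible):

    import subprocess

    def health_status(container: str) -> str:
        # `podman healthcheck run` executes the container's configured test
        # and exits 0 when the check passes, non-zero otherwise.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return "healthy" if result.returncode == 0 else "unhealthy"

    print(health_status("node_exporter"))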
Dec 05 01:36:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:44 compute-0 ceph-mon[192914]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:45 compute-0 sudo[362364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkatddeqlywpahiqbhipwhskarreyvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898604.5146406-553-268201364568031/AnsiballZ_stat.py'
Dec 05 01:36:45 compute-0 sudo[362364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:45 compute-0 python3.9[362366]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:36:45 compute-0 sudo[362364]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:45 compute-0 podman[362394]: 2025-12-05 01:36:45.698478181 +0000 UTC m=+0.108181194 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:36:45 compute-0 podman[362393]: 2025-12-05 01:36:45.699992374 +0000 UTC m=+0.114766680 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 01:36:45 compute-0 podman[362395]: 2025-12-05 01:36:45.748986202 +0000 UTC m=+0.147452919 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 01:36:45 compute-0 podman[362396]: 2025-12-05 01:36:45.780393286 +0000 UTC m=+0.172769012 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:36:46 compute-0 sudo[362602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virmijkpbfgwikbitljfuekvxpqetgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898605.6712892-562-226923087551647/AnsiballZ_file.py'
Dec 05 01:36:46 compute-0 sudo[362602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:36:46 compute-0 python3.9[362604]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:46 compute-0 sudo[362602]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:46 compute-0 ceph-mon[192914]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:47 compute-0 sudo[362753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmxzvcjrpfxzuoegazmzrbqogomhzhqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898606.7420685-562-56574346602849/AnsiballZ_copy.py'
Dec 05 01:36:47 compute-0 sudo[362753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:47 compute-0 python3.9[362755]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898606.7420685-562-56574346602849/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:36:47 compute-0 sudo[362753]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:48 compute-0 rsyslogd[188644]: imjournal from <compute-0:sudo>: begin to drop messages due to rate-limiting
Dec 05 01:36:48 compute-0 sudo[362844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmwzhmirhqiaqtfkzwvwvasaouakreug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898606.7420685-562-56574346602849/AnsiballZ_systemd.py'
Dec 05 01:36:48 compute-0 sudo[362844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:48 compute-0 podman[362803]: 2025-12-05 01:36:48.422957145 +0000 UTC m=+0.137180190 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9)
Dec 05 01:36:48 compute-0 python3.9[362849]: ansible-systemd Invoked with state=started name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:36:48 compute-0 sudo[362844]: pam_unix(sudo:session): session closed for user root
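Taken together, the tasks above are the standard unit-install flow: ansible-copy drops /etc/systemd/system/edpm_node_exporter.service, then ansible-systemd enables and starts it (enabled=True, state=started, daemon_reload=False). A sketch of the equivalent systemctl calls, assuming systemctl is on PATH (the Ansible module wraps systemd itself; this is an illustration, not its actual implementation):

    import subprocess

    UNIT = "edpm_node_exporter.service"

    # enabled=True in the task
    subprocess.run(["systemctl", "enable", UNIT], check=True)
    # state=started; the task ran with daemon_reload=False, so no
    # `systemctl daemon-reload` is issued here either
    subprocess.run(["systemctl", "start", UNIT], check=True)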
Dec 05 01:36:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:48 compute-0 ceph-mon[192914]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:49 compute-0 sudo[363002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtnfjtolfsmdubgwsbxrczxkedwvppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898609.1572871-582-122499599687940/AnsiballZ_systemd.py'
Dec 05 01:36:49 compute-0 sudo[363002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:50 compute-0 python3.9[363004]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:36:50 compute-0 systemd[1]: Stopping node_exporter container...
Dec 05 01:36:50 compute-0 systemd[1]: libpod-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:36:50 compute-0 podman[363008]: 2025-12-05 01:36:50.307490169 +0000 UTC m=+0.096320590 container died 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:36:50 compute-0 systemd[1]: 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9-efa0b04ff92b870.timer: Deactivated successfully.
Dec 05 01:36:50 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.
Dec 05 01:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9-userdata-shm.mount: Deactivated successfully.
Dec 05 01:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505-merged.mount: Deactivated successfully.
Dec 05 01:36:50 compute-0 podman[363008]: 2025-12-05 01:36:50.398359376 +0000 UTC m=+0.187189797 container cleanup 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:36:50 compute-0 podman[363008]: node_exporter
Dec 05 01:36:50 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:36:50 compute-0 systemd[1]: libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:36:50 compute-0 podman[363034]: node_exporter
Dec 05 01:36:50 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 05 01:36:50 compute-0 systemd[1]: Stopped node_exporter container.
Dec 05 01:36:50 compute-0 systemd[1]: Starting node_exporter container...
Dec 05 01:36:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
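The xfs warnings above quote 0x7fffffff, the largest 32-bit signed time_t; converting it shows the 2038 cutoff the kernel is flagging for these remounted overlay paths (a quick check, not from the log):

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic "year 2038" limit
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00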
Dec 05 01:36:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.
Dec 05 01:36:50 compute-0 podman[363045]: 2025-12-05 01:36:50.87340978 +0000 UTC m=+0.275843991 container init 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.904Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.904Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.904Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.906Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.906Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.906Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.906Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.907Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.907Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.907Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.907Z caller=node_exporter.go:117 level=info collector=arp
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=bcache
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=bonding
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=cpu
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=edac
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=filefd
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=netclass
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=netdev
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=netstat
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=nfs
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=nvme
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=softnet
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=systemd
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=xfs
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.908Z caller=node_exporter.go:117 level=info collector=zfs
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.909Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 05 01:36:50 compute-0 node_exporter[363060]: ts=2025-12-05T01:36:50.909Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 05 01:36:50 compute-0 podman[363045]: 2025-12-05 01:36:50.926597776 +0000 UTC m=+0.329031987 container start 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:36:50 compute-0 podman[363045]: node_exporter
Dec 05 01:36:50 compute-0 systemd[1]: Started node_exporter container.
Dec 05 01:36:50 compute-0 ceph-mon[192914]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:51 compute-0 sudo[363002]: pam_unix(sudo:session): session closed for user root
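The node_exporter startup lines in this restart cycle are logfmt (space-separated key=value pairs, with quoted values for messages containing spaces). A small parser sketch for pulling fields such as level and msg out of those lines (illustrative only; the sample line is copied from the log above):

    import shlex

    def parse_logfmt(line: str) -> dict:
        # shlex honors the double quotes around values like msg="..."
        fields = {}
        for token in shlex.split(line):
            key, sep, value = token.partition("=")
            if sep:
                fields[key] = value
        return fields

    line = ('ts=2025-12-05T01:36:50.909Z caller=tls_config.go:268 '
            'level=info msg="TLS is enabled." http2=true address=[::]:9100')
    print(parse_logfmt(line)["msg"])  # TLS is enabled.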
Dec 05 01:36:51 compute-0 podman[363069]: 2025-12-05 01:36:51.051963483 +0000 UTC m=+0.109525892 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:36:51 compute-0 sudo[363243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwhtgzdkanyamkuiwqebeyhkscoxcgre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898611.338197-590-212935653080475/AnsiballZ_stat.py'
Dec 05 01:36:51 compute-0 sudo[363243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:52 compute-0 python3.9[363245]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:36:52 compute-0 sudo[363243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:52 compute-0 sudo[363337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeovijaltviaepplzyhfdcrtvohokpwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898611.338197-590-212935653080475/AnsiballZ_file.py'
Dec 05 01:36:52 compute-0 sudo[363337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:52 compute-0 podman[363295]: 2025-12-05 01:36:52.597211262 +0000 UTC m=+0.103215565 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41)
Dec 05 01:36:52 compute-0 python3.9[363341]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/podman_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/podman_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:36:52 compute-0 sudo[363337]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:52 compute-0 ceph-mon[192914]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:53 compute-0 sudo[363492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjddskzvqzudqcrqahvbdatdtyvqajep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898613.2466455-604-190420984861603/AnsiballZ_container_config_data.py'
Dec 05 01:36:53 compute-0 sudo[363492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:54 compute-0 python3.9[363494]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 05 01:36:54 compute-0 sudo[363492]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:54 compute-0 ceph-mon[192914]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:55 compute-0 sudo[363644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjlkxcgvzmlmetvmwzhnqolfvisztquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898614.4688435-613-100354679505402/AnsiballZ_container_config_hash.py'
Dec 05 01:36:55 compute-0 sudo[363644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:55 compute-0 python3.9[363646]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:36:55 compute-0 sudo[363644]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:36:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:36:56.164 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:36:56.164 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:36:56 compute-0 sudo[363796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrtexgdomwwxtcrjqccqpsbqiwzxkki ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898615.8138642-623-12069761520617/AnsiballZ_edpm_container_manage.py'
Dec 05 01:36:56 compute-0 sudo[363796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:56 compute-0 python3[363798]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:36:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:56 compute-0 ceph-mon[192914]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:57 compute-0 python3[363798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815",
                                                     "Digest": "sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                     "RepoTags": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",
                                                          "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-03-17T01:45:00.251170784Z",
                                                     "Config": {
                                                          "User": "nobody",
                                                          "ExposedPorts": {
                                                               "9882/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                                                          ],
                                                          "Entrypoint": [
                                                               "/bin/podman_exporter"
                                                          ],
                                                          "Labels": {
                                                               "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 33863535,
                                                     "VirtualSize": 33863535,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1/diff:/var/lib/containers/storage/overlay/1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed",
                                                               "sha256:6b83872188a9e8912bee1d43add5e9bc518601b02a14a364c0da43b0d59acf33",
                                                               "sha256:7a73cdcd46b4e3c3a632bae42ad152935f204b50dd02f0a46070f81446516318"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "nobody",
                                                     "History": [
                                                          {
                                                               "created": "2023-12-05T20:23:06.467739954Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ee9bb8755ccbdd689b434d9b4ac7518e972699604ecda33e4ddc2a15d2831443 in / "
                                                          },
                                                          {
                                                               "created": "2023-12-05T20:23:06.550971969Z",
                                                               "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2023-12-15T10:54:58.99835989Z",
                                                               "created_by": "COPY /rootfs / # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "LABEL maintainer=Navid Yaghoobi <navidys@fedoraproject.org>",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETPLATFORM",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETOS",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ARG TARGETARCH",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "COPY ./bin/remote/prometheus-podman-exporter-amd64 /bin/podman_exporter # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "EXPOSE map[9882/tcp:{}]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "USER nobody",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-03-17T01:45:00.251170784Z",
                                                               "created_by": "ENTRYPOINT [\"/bin/podman_exporter\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/navidys/prometheus-podman-exporter:v1.10.1"
                                                     ]
                                                }
                                           ]
                                           : quay.io/navidys/prometheus-podman-exporter:v1.10.1
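The PODMAN-CONTAINER-DEBUG block above is the JSON that podman image inspect returns for the exporter image: a one-element array whose object carries Id, Digest, RepoTags, the buildkit History, and the Config with the /bin/podman_exporter entrypoint. A sketch that reproduces the lookup, assuming the image is already present locally:

    import json
    import subprocess

    IMAGE = "quay.io/navidys/prometheus-podman-exporter:v1.10.1"

    out = subprocess.run(
        ["podman", "image", "inspect", IMAGE],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)[0]              # one object per inspected image
    print(info["Id"], info["Digest"])
    print(info["Config"]["Entrypoint"])    # ["/bin/podman_exporter"]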
Dec 05 01:36:57 compute-0 podman[158197]: @ - - [05/Dec/2025:01:07:06 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 3138272 "" "Go-http-client/1.1"
Dec 05 01:36:57 compute-0 systemd[1]: libpod-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec 05 01:36:57 compute-0 systemd[1]: libpod-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Consumed 4.074s CPU time.
Dec 05 01:36:57 compute-0 podman[363845]: 2025-12-05 01:36:57.195652113 +0000 UTC m=+0.091224247 container died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:36:57 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.timer: Deactivated successfully.
Dec 05 01:36:57 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec 05 01:36:57 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.service: Failed to open /run/systemd/transient/63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.service: No such file or directory
Dec 05 01:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-userdata-shm.mount: Deactivated successfully.
Dec 05 01:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a-merged.mount: Deactivated successfully.
Dec 05 01:36:57 compute-0 podman[363845]: 2025-12-05 01:36:57.280618393 +0000 UTC m=+0.176190487 container cleanup 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:36:57 compute-0 python3[363798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop podman_exporter
Dec 05 01:36:57 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:36:57 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.timer: Failed to open /run/systemd/transient/63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.timer: No such file or directory
Dec 05 01:36:57 compute-0 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.service: Failed to open /run/systemd/transient/63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-426a3b381e3b173d.service: No such file or directory
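[note] The 63ae8f56...-426a3b381e3b173d.timer/.service pair above is the transient systemd unit podman creates to drive this container's healthcheck. The "Failed to open /run/systemd/transient/..." messages show systemd being asked to reset units whose backing files were already removed during container teardown, which is cosmetic here. A container can be correlated with its healthcheck timer like this (a sketch; container name taken from this log):

    # resolve the full container ID, then find its transient healthcheck timer
    CID=$(podman inspect --format '{{.Id}}' podman_exporter)
    systemctl list-timers --all | grep -F "$CID"
    # run the healthcheck once by hand, exactly as the timer does
    podman healthcheck run podman_exporter && echo healthy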
Dec 05 01:36:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:36:57 compute-0 podman[363870]: 2025-12-05 01:36:57.44329762 +0000 UTC m=+0.124563656 container remove 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:36:57 compute-0 podman[363871]: Error: no container with ID 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e found in database: no such container
Dec 05 01:36:57 compute-0 python3[363798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force podman_exporter
Dec 05 01:36:57 compute-0 systemd[1]: edpm_podman_exporter.service: Control process exited, code=exited, status=125/n/a
Dec 05 01:36:57 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
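[note] Two distinct failure codes are recorded for edpm_podman_exporter.service: status=2/INVALIDARGUMENT from the main process (the supervised podman process exited once the ansible-driven "podman stop" took the container away underneath it), and status=125 from the control process; 125 is podman's generic "error in podman itself, before the container command ran" exit code, consistent with the "no container with ID ... found in database" error logged just above. The unit's view of this can be confirmed with:

    systemctl show edpm_podman_exporter.service \
        -p ExecMainCode,ExecMainStatus,Result,NRestarts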
Dec 05 01:36:57 compute-0 podman[363891]: 2025-12-05 01:36:57.613305742 +0000 UTC m=+0.131299635 container create b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:36:57 compute-0 podman[363891]: 2025-12-05 01:36:57.543411156 +0000 UTC m=+0.061405089 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 05 01:36:57 compute-0 python3[363798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
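[note] The "podman create" line above is the CLI that ansible-edpm_container_manage renders from the config_data dict logged with each container event: ports -> --publish, net -> --network, environment -> --env, volumes -> --volume, healthcheck.test -> --healthcheck-command. Re-wrapped here for readability (same invocation; the huge --label config_data={...} argument, a verbatim JSON copy of the dict, is elided, and the healthcheck command is quoted for shell use since the logged line is a rendered argv, not shell input):

    podman create --name podman_exporter \
        --conmon-pidfile /run/podman_exporter.pid \
        --env OS_ENDPOINT_TYPE=internal \
        --env CONTAINER_HOST=unix:///run/podman/podman.sock \
        --healthcheck-command '/openstack/healthcheck podman_exporter' \
        --label config_id=edpm --label container_name=podman_exporter \
        --label managed_by=edpm_ansible \
        --log-driver journald --log-level info \
        --network host --privileged=True --publish 9882:9882 --user root \
        --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z \
        --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z \
        --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z \
        --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z \
        quay.io/navidys/prometheus-podman-exporter:v1.10.1 \
        --web.config.file=/etc/podman_exporter/podman_exporter.yaml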
Dec 05 01:36:57 compute-0 systemd[1]: edpm_podman_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 05 01:36:57 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 05 01:36:57 compute-0 systemd[1]: Starting podman_exporter container...
Dec 05 01:36:57 compute-0 systemd[1]: Started libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope.
Dec 05 01:36:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e186ef52109286d7363b92c790330ff3c895f31e7d52d9c93be46e3ae97fca1f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e186ef52109286d7363b92c790330ff3c895f31e7d52d9c93be46e3ae97fca1f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
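[note] The kernel prints this warning when a bind mount lands on an XFS filesystem formatted without the bigtime feature, whose inode timestamps cap at 2038-01-19 (0x7fffffff seconds). Whether the backing filesystem has bigtime can be checked like this (a sketch; assumes a recent xfsprogs that reports the bigtime flag in xfs_info output):

    # find the mount backing container storage and check for bigtime
    MNT=$(df --output=target /var/lib/containers | tail -n 1)
    xfs_info "$MNT" | grep -o 'bigtime=[01]'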
Dec 05 01:36:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.
Dec 05 01:36:57 compute-0 podman[363903]: 2025-12-05 01:36:57.923342764 +0000 UTC m=+0.270013247 container init b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:36:57 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:57.962Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 01:36:57 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:57.962Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 01:36:57 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:57 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 01:36:57 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:57.962Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 01:36:57 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:57.962Z caller=handler.go:105 level=info collector=container
Dec 05 01:36:57 compute-0 podman[158197]: time="2025-12-05T01:36:57Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:36:57 compute-0 podman[363903]: 2025-12-05 01:36:57.983837896 +0000 UTC m=+0.330508359 container start b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:36:58 compute-0 podman[363914]: podman_exporter
Dec 05 01:36:58 compute-0 python3[363798]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start podman_exporter
Dec 05 01:36:58 compute-0 systemd[1]: Started podman_exporter container.
Dec 05 01:36:58 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:57 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43269 "" "Go-http-client/1.1"
Dec 05 01:36:58 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:58.054Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 05 01:36:58 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:58.056Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 05 01:36:58 compute-0 podman_exporter[363926]: ts=2025-12-05T01:36:58.060Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
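[note] The exporter serves metrics over TLS on :9882 using the web config mounted at /etc/podman_exporter/podman_exporter.yaml (host path /var/lib/openstack/config/telemetry/podman_exporter.yaml) and the certs mounted from /var/lib/openstack/certs/telemetry/default. A scrape can be sanity-checked from the host; the ca.crt filename inside the cert directory is an assumption, not confirmed by this log:

    curl -sS --cacert /var/lib/openstack/certs/telemetry/default/ca.crt \
        https://127.0.0.1:9882/metrics | head -n 5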
Dec 05 01:36:58 compute-0 podman[363939]: 2025-12-05 01:36:58.126605242 +0000 UTC m=+0.120790609 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:36:58 compute-0 sudo[363796]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:58 compute-0 ceph-mon[192914]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:36:59 compute-0 sudo[364132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhojszebckmjhtcrbzkmfjfyparpysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898618.7030108-631-230869440489941/AnsiballZ_stat.py'
Dec 05 01:36:59 compute-0 sudo[364132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:36:59 compute-0 python3.9[364134]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:36:59 compute-0 sudo[364132]: pam_unix(sudo:session): session closed for user root
Dec 05 01:36:59 compute-0 podman[158197]: time="2025-12-05T01:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42583 "" "Go-http-client/1.1"
Dec 05 01:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8089 "" "Go-http-client/1.1"
Dec 05 01:37:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:00 compute-0 ceph-mon[192914]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:01 compute-0 sudo[364286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cesyfpkzzoidldiimkieldffdwuytjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898619.9796643-640-21466450111099/AnsiballZ_file.py'
Dec 05 01:37:01 compute-0 sudo[364286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:01 compute-0 openstack_network_exporter[160350]: ERROR   01:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:37:01 compute-0 openstack_network_exporter[160350]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:37:01 compute-0 openstack_network_exporter[160350]: ERROR   01:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:37:01 compute-0 openstack_network_exporter[160350]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:37:01 compute-0 openstack_network_exporter[160350]: ERROR   01:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
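[note] openstack_network_exporter collects OVS/OVN statistics by issuing ovs-appctl-style calls over the daemons' control sockets; on this node there is no ovsdb-server/ovn-northd control socket and no userspace (dpif-netdev) datapath, so every poll logs these errors. The equivalent manual probes, which fail the same way here, are:

    # requires a running ovs-vswitchd with a dpif-netdev datapath
    ovs-appctl dpif-netdev/pmd-rxq-show
    ovs-appctl dpif-netdev/pmd-perf-show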
Dec 05 01:37:01 compute-0 python3.9[364288]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:01 compute-0 sudo[364286]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:02 compute-0 sudo[364437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfnpsdblfjesdjokctyrymrqnlxsxmrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898621.6446338-640-105743061115216/AnsiballZ_copy.py'
Dec 05 01:37:02 compute-0 sudo[364437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:02 compute-0 python3.9[364439]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898621.6446338-640-105743061115216/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:02 compute-0 sudo[364437]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:02 compute-0 ceph-mon[192914]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:03 compute-0 sudo[364513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhhlmpemsuvvqgsoqyfpixnxfxhivniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898621.6446338-640-105743061115216/AnsiballZ_systemd.py'
Dec 05 01:37:03 compute-0 sudo[364513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:03 compute-0 python3.9[364515]: ansible-systemd Invoked with state=started name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:37:03 compute-0 sudo[364513]: pam_unix(sudo:session): session closed for user root
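[note] The three ansible tasks above are the standard edpm unit-deployment sequence: stat the podman drop-in flag file, install /etc/systemd/system/edpm_podman_exporter.service (mode 0644, root:root), then ansible-systemd with state=started enabled=True (note daemon_reload=False in the logged invocation). A rough shell equivalent, with an explicit daemon-reload added for safety:

    install -o root -g root -m 0644 edpm_podman_exporter.service \
        /etc/systemd/system/edpm_podman_exporter.service
    systemctl daemon-reload
    systemctl enable --now edpm_podman_exporter.service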
Dec 05 01:37:04 compute-0 sudo[364684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rohcbhrilwlhzqmwwymdewnevkvgylwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898623.891697-660-83240886615312/AnsiballZ_systemd.py'
Dec 05 01:37:04 compute-0 sudo[364684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:04 compute-0 podman[364641]: 2025-12-05 01:37:04.529837633 +0000 UTC m=+0.147819979 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 01:37:04 compute-0 python3.9[364688]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:37:04 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 05 01:37:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:04 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:57 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 5332 "" "Go-http-client/1.1"
Dec 05 01:37:04 compute-0 systemd[1]: libpod-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:37:04 compute-0 podman[364692]: 2025-12-05 01:37:04.971681933 +0000 UTC m=+0.091277198 container died b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:37:04 compute-0 ceph-mon[192914]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:04 compute-0 systemd[1]: b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc-340002c8ff30ac8e.timer: Deactivated successfully.
Dec 05 01:37:04 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.
Dec 05 01:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc-userdata-shm.mount: Deactivated successfully.
Dec 05 01:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e186ef52109286d7363b92c790330ff3c895f31e7d52d9c93be46e3ae97fca1f-merged.mount: Deactivated successfully.
Dec 05 01:37:05 compute-0 podman[364692]: 2025-12-05 01:37:05.055025078 +0000 UTC m=+0.174620333 container cleanup b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:37:05 compute-0 podman[364692]: podman_exporter
Dec 05 01:37:05 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:37:05 compute-0 systemd[1]: libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:37:05 compute-0 podman[364720]: podman_exporter
Dec 05 01:37:05 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 05 01:37:05 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 05 01:37:05 compute-0 systemd[1]: Starting podman_exporter container...
Dec 05 01:37:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e186ef52109286d7363b92c790330ff3c895f31e7d52d9c93be46e3ae97fca1f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e186ef52109286d7363b92c790330ff3c895f31e7d52d9c93be46e3ae97fca1f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.
Dec 05 01:37:05 compute-0 podman[364730]: 2025-12-05 01:37:05.452600002 +0000 UTC m=+0.234134597 container init b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.484Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.484Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.484Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.484Z caller=handler.go:105 level=info collector=container
Dec 05 01:37:05 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 05 01:37:05 compute-0 podman[158197]: time="2025-12-05T01:37:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:37:05 compute-0 podman[364730]: 2025-12-05 01:37:05.500497549 +0000 UTC m=+0.282032144 container start b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:37:05 compute-0 podman[364730]: podman_exporter
Dec 05 01:37:05 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43265 "" "Go-http-client/1.1"
Dec 05 01:37:05 compute-0 systemd[1]: Started podman_exporter container.
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.533Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.534Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 05 01:37:05 compute-0 podman_exporter[364746]: ts=2025-12-05T01:37:05.535Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 05 01:37:05 compute-0 sudo[364684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:05 compute-0 podman[364756]: 2025-12-05 01:37:05.647469294 +0000 UTC m=+0.126610173 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:37:06 compute-0 sudo[364928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntgxcfekgcjhsskzyithuizlumovrkig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898625.954671-668-123720910232125/AnsiballZ_stat.py'
Dec 05 01:37:06 compute-0 sudo[364928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:06 compute-0 python3.9[364930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:37:06 compute-0 sudo[364928]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:06 compute-0 ceph-mon[192914]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:07 compute-0 podman[364980]: 2025-12-05 01:37:07.217213693 +0000 UTC m=+0.109072270 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:37:07 compute-0 sudo[365022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxwvcawzmleihtijwjbjqmvexhyhkrbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898625.954671-668-123720910232125/AnsiballZ_file.py'
Dec 05 01:37:07 compute-0 sudo[365022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:07 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:37:07 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Failed with result 'exit-code'.
Dec 05 01:37:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:07 compute-0 python3.9[365027]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/openstack_network_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:37:07 compute-0 sudo[365022]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:08 compute-0 sudo[365177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspbgjaxupqdjtfydqextuwdmamhsmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898627.8974693-682-229945092604241/AnsiballZ_container_config_data.py'
Dec 05 01:37:08 compute-0 sudo[365177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:08 compute-0 python3.9[365179]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 05 01:37:08 compute-0 sudo[365177]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:08 compute-0 nova_compute[349548]: 2025-12-05 01:37:08.965 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:08 compute-0 nova_compute[349548]: 2025-12-05 01:37:08.965 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:08 compute-0 ceph-mon[192914]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:08 compute-0 nova_compute[349548]: 2025-12-05 01:37:08.998 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:08.999 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.000 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.021 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.022 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.023 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.024 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.024 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.102 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.103 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:37:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:37:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671712334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:37:09 compute-0 nova_compute[349548]: 2025-12-05 01:37:09.628 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
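[note] nova-compute's resource audit shells out to "ceph df" to size the RBD-backed disk pool; each call appears on the mon as the client.openstack df dispatch logged around it, and takes about 0.5 s here. The same query can be run by hand:

    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf \
        | python3 -m json.tool | head -n 20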
Dec 05 01:37:09 compute-0 sudo[365351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbsziewncaloorbydtvrifznlvojuchm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898629.1222436-691-263103386995580/AnsiballZ_container_config_hash.py'
Dec 05 01:37:09 compute-0 sudo[365351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:09 compute-0 python3.9[365353]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:37:09 compute-0 sudo[365351]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3671712334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.081 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.082 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4540MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.083 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.083 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.163 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.164 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.183 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683761529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.688 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.701 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.724 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
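[note] The inventory dict shows how the placement totals derive from the hypervisor view a few entries up: 7680 MB RAM with 512 MB reserved (7168 MB schedulable at ratio 1.0), 8 VCPUs at allocation_ratio 4.0 (32 schedulable vCPUs), and 59 GB disk at ratio 0.9 (roughly 53 GB schedulable). With the osc-placement CLI plugin installed, the provider can be queried directly (a sketch; UUID taken from this log):

    openstack resource provider inventory list acf26aa2-2fef-4a53-8a44-6cfa2eb15d17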
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.729 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:37:10 compute-0 nova_compute[349548]: 2025-12-05 01:37:10.729 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:37:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:10 compute-0 sudo[365525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znjvqwypwsnldlfcclfnjttcwjrldaqq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898630.3754337-701-38119574790549/AnsiballZ_edpm_container_manage.py'
Dec 05 01:37:10 compute-0 sudo[365525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1683761529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:37:10 compute-0 ceph-mon[192914]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:11 compute-0 python3[365527]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
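[note] The debug dump that follows is "podman image inspect" output for the image matched by openstack_network_exporter.json. The digest and tag it reports can be pulled out directly with a Go-template format string:

    podman image inspect \
        quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified \
        --format '{{.Digest}} {{index .RepoTags 0}}'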
Dec 05 01:37:11 compute-0 python3[365527]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1",
                                                     "Digest": "sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7",
                                                     "RepoTags": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-08-26T15:52:54.446618393Z",
                                                     "Config": {
                                                          "ExposedPorts": {
                                                               "1981/tcp": {}
                                                          },
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci"
                                                          ],
                                                          "Cmd": [
                                                               "/app/openstack-network-exporter"
                                                          ],
                                                          "WorkingDir": "/",
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2025-08-20T13:12:41",
                                                               "com.redhat.component": "ubi9-minimal-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.33.7",
                                                               "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "minimal rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9-minimal",
                                                               "release": "1755695350",
                                                               "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                               "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                               "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.6"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "Red Hat",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 142088877,
                                                     "VirtualSize": 142088877,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/157961e3a1fe369d02893b19044a0e08e15689974ef810b235cb5ec194c7142c/diff:/var/lib/containers/storage/overlay/778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9",
                                                               "sha256:60984b2898b5b4ad1680d36433001b7e2bebb1073775d06b4c2ff80f985caccb",
                                                               "sha256:866ed9f0f685cc1d741f560227443a94926fc22494aa7808be751e7247cda421"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2025-08-20T13:12:41",
                                                          "com.redhat.component": "ubi9-minimal-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.33.7",
                                                          "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "minimal rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9-minimal",
                                                          "release": "1755695350",
                                                          "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",
                                                          "url": "https://catalog.redhat.com/en/search?searchType=containers",
                                                          "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.6"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2025-08-20T13:14:24.836114247Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.907067406Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL vendor=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.953912498Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL url=\"https://catalog.redhat.com/en/search?searchType=containers\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:24.99202543Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-minimal-container\"       name=\"ubi9-minimal\"       version=\"9.6\"       distribution-scope=\"public\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.033232759Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.116880439Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of the minimal Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.167988017Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.205286235Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.description=\"The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.239930205Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9 Minimal\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.298417937Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.346108994Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"minimal rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.381850293Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:25.998561869Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY dir:e1f22eafd6489859288910ef7585f9d694693aa84a31ba9d54dea9e7a451abe6 in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.169088157Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:b37d593713ee21ad52a4cd1424dc019a24f7966f85df0ac4b86d234302695328 in /etc/yum.repos.d/. ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.222750062Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.44502305Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /usr/share/buildinfo/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.581849716Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:58cc94f5b3b2d60de2c77a6ed4b1797dcede502ccdb429a72e7a72d994235b3c in /root/buildinfo/content_manifests/content-sets.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-20T13:14:26.902035614Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"build-date\"=\"2025-08-20T13:12:41\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"f4b088292653bbf5ca8188a5e59ffd06a8671d4b\" \"release\"=\"1755695350\""
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:52.889456996Z",
                                                               "created_by": "/bin/sh -c microdnf update -y && rm -rf /var/cache/yum",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.116955892Z",
                                                               "created_by": "/bin/sh -c microdnf install -y iproute && microdnf clean all",
                                                               "comment": "FROM registry.access.redhat.com/ubi9/ubi-minimal:latest"
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.314008349Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:fab61bc60c39fae33dbfa4e382d473ceab94ebaf876018d5034ba62f04740767 in /etc/openstack-network-exporter.yaml ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.407547534Z",
                                                               "created_by": "/bin/sh -c #(nop) COPY file:be836064c1a23a46d9411cf2aafe0d43f5d498cf2fd92e788160ae2e0f30bb86 in /app/openstack-network-exporter ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.420490087Z",
                                                               "created_by": "/bin/sh -c #(nop) MAINTAINER Red Hat",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.432520013Z",
                                                               "created_by": "/bin/sh -c #(nop) EXPOSE 1981",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-08-26T15:52:54.48363818Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/app/openstack-network-exporter\"]",
                                                               "author": "Red Hat",
                                                               "comment": "FROM 688666ea38a8"
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
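                                            The JSON array above is podman's image-inspect record for the openstack-network-exporter image, which ansible-edpm_container_manage logs before deciding whether the running container still matches the desired image and config. A minimal sketch of fetching the same record outside Ansible, assuming only that podman is installed and the image is present locally (the inspect_image helper is illustrative, not part of edpm_ansible):

                                                import json
                                                import subprocess

                                                def inspect_image(ref: str) -> dict:
                                                    """Return the first inspect record podman reports for an image."""
                                                    out = subprocess.run(
                                                        ["podman", "image", "inspect", ref],
                                                        check=True, capture_output=True, text=True,
                                                    ).stdout
                                                    return json.loads(out)[0]  # podman prints a JSON array, as above

                                                info = inspect_image(
                                                    "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified")
                                                print(info["Digest"])          # sha256:ecd56e67...
                                                print(info["Config"]["Cmd"])   # ["/app/openstack-network-exporter"]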
Dec 05 01:37:11 compute-0 systemd[1]: libpod-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec 05 01:37:11 compute-0 systemd[1]: libpod-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Consumed 5.058s CPU time.
Dec 05 01:37:11 compute-0 podman[365574]: 2025-12-05 01:37:11.809156141 +0000 UTC m=+0.119009159 container died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:37:11 compute-0 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.timer: Deactivated successfully.
Dec 05 01:37:11 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec 05 01:37:11 compute-0 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.service: Failed to open /run/systemd/transient/348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.service: No such file or directory
Dec 05 01:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-userdata-shm.mount: Deactivated successfully.
Dec 05 01:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e-merged.mount: Deactivated successfully.
Dec 05 01:37:11 compute-0 podman[365574]: 2025-12-05 01:37:11.896391935 +0000 UTC m=+0.206244923 container cleanup 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:37:11 compute-0 python3[365527]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop openstack_network_exporter
Dec 05 01:37:11 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:37:11 compute-0 podman[365599]: openstack_network_exporter
Dec 05 01:37:11 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 05 01:37:12 compute-0 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.timer: Failed to open /run/systemd/transient/348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.timer: No such file or directory
Dec 05 01:37:12 compute-0 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.service: Failed to open /run/systemd/transient/348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-1523999d0a411989.service: No such file or directory
Dec 05 01:37:12 compute-0 podman[365600]: 2025-12-05 01:37:12.058687521 +0000 UTC m=+0.117425485 container remove 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:37:12 compute-0 python3[365527]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force openstack_network_exporter
Dec 05 01:37:12 compute-0 podman[365624]: 2025-12-05 01:37:12.207002893 +0000 UTC m=+0.109304766 container create fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Dec 05 01:37:12 compute-0 podman[365624]: 2025-12-05 01:37:12.159068494 +0000 UTC m=+0.061370367 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 05 01:37:12 compute-0 python3[365527]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
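                                            The PODMAN-CONTAINER-DEBUG line above shows the full podman create invocation that the module derives from the config_data dict: environment maps to --env, net to --network, ports to --publish, volumes to --volume, and the healthcheck test to --healthcheck-command. A hypothetical reconstruction of that mapping, assuming the flag order seen above (build_create_args is illustrative, not the real edpm_container_manage code):

                                                def build_create_args(name: str, cfg: dict) -> list:
                                                    # Sketch of the config_data -> CLI translation visible above.
                                                    args = ["podman", "create", "--name", name]
                                                    for key, val in cfg.get("environment", {}).items():
                                                        args += ["--env", f"{key}={val}"]
                                                    if "net" in cfg:
                                                        args += ["--network", cfg["net"]]
                                                    if cfg.get("privileged"):
                                                        args.append("--privileged=True")
                                                    for port in cfg.get("ports", []):
                                                        args += ["--publish", port]
                                                    for vol in cfg.get("volumes", []):
                                                        args += ["--volume", vol]
                                                    args.append(cfg["image"])
                                                    return args

                                            The --conmon-pidfile flag in the logged command is what lets the edpm_openstack_network_exporter.service unit track the container's monitor process across restarts.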
Dec 05 01:37:12 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Scheduled restart job, restart counter is at 1.
Dec 05 01:37:12 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 05 01:37:12 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 05 01:37:12 compute-0 systemd[1]: Started libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope.
Dec 05 01:37:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.
Dec 05 01:37:12 compute-0 podman[365636]: 2025-12-05 01:37:12.409230312 +0000 UTC m=+0.167156984 container init fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Dec 05 01:37:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *bridge.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *coverage.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *datapath.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *iface.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *memory.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *ovnnorthd.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *ovn.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *ovsdbserver.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *pmd_perf.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *pmd_rxq.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: INFO    01:37:12 main.go:48: registering *vswitch.Collector
Dec 05 01:37:12 compute-0 openstack_network_exporter[365651]: NOTICE  01:37:12 main.go:76: listening on https://:9105/metrics
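                                            Per the NOTICE line above, the exporter registers its collectors and then serves TLS metrics on :9105 (the 9105:9105 ports entry is effectively redundant under net: host but is recorded in config_data anyway). A quick scrape sketch from compute-0; certificate verification is disabled here only because the serving cert lives under deployment-specific paths (see the tls volume mounts in the create command above):

                                                import ssl
                                                import urllib.request

                                                ctx = ssl.create_default_context()
                                                ctx.check_hostname = False       # sketch only: the exporter's cert paths
                                                ctx.verify_mode = ssl.CERT_NONE  # are deployment-specific on this host

                                                with urllib.request.urlopen("https://localhost:9105/metrics",
                                                                            context=ctx) as resp:
                                                    # Print the first few Prometheus samples as a liveness check.
                                                    print(resp.read().decode().splitlines()[:5])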
Dec 05 01:37:12 compute-0 podman[365636]: 2025-12-05 01:37:12.454811234 +0000 UTC m=+0.212737916 container start fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, architecture=x86_64)
Dec 05 01:37:12 compute-0 podman[365647]: openstack_network_exporter
Dec 05 01:37:12 compute-0 python3[365527]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start openstack_network_exporter
Dec 05 01:37:12 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 05 01:37:12 compute-0 podman[365672]: 2025-12-05 01:37:12.604766063 +0000 UTC m=+0.132454698 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
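                                            The health_status=healthy event above comes from the transient systemd timer podman registers per container; the earlier "Failed to open /run/systemd/transient/..." messages are the old container's timer and service being torn down after removal, not errors in the new one. The recorded state can be read back directly (health_state is an illustrative helper; the --format expression is standard podman inspect syntax):

                                                import json
                                                import subprocess

                                                def health_state(name: str) -> str:
                                                    # Reads the same .State.Health data the event above summarizes.
                                                    out = subprocess.run(
                                                        ["podman", "inspect", "--format",
                                                         "{{json .State.Health}}", name],
                                                        check=True, capture_output=True, text=True,
                                                    ).stdout
                                                    return json.loads(out)["Status"]

                                                print(health_state("openstack_network_exporter"))  # "healthy"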
Dec 05 01:37:12 compute-0 sudo[365525]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:12 compute-0 ceph-mon[192914]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:14 compute-0 sudo[365865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwwrszhdetxjsuwfzexfikcqqbtiyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898633.9131584-709-238706854985002/AnsiballZ_stat.py'
Dec 05 01:37:14 compute-0 sudo[365865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:14 compute-0 python3.9[365867]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:37:14 compute-0 sudo[365865]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:14 compute-0 ceph-mon[192914]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:37:16
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images']
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
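                                            The balancer block above is one upmap optimization pass over the listed pools; "prepared 0/10 changes" means no PG remaps were needed, consistent with the 321 pgs active+clean reported in the surrounding pgmap lines. A sketch of querying the same state via the ceph CLI, assuming an admin keyring on the node (the "active" and "mode" keys are the usual ceph balancer status fields):

                                                import json
                                                import subprocess

                                                status = json.loads(subprocess.run(
                                                    ["ceph", "balancer", "status", "--format", "json"],
                                                    check=True, capture_output=True, text=True,
                                                ).stdout)
                                                print(status["active"], status["mode"])  # e.g. True "upmap"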
Dec 05 01:37:16 compute-0 sudo[366055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovazjckgehxivfiazsypldxierhphsmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898635.1452491-718-184235216038136/AnsiballZ_file.py'
Dec 05 01:37:16 compute-0 sudo[366055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:16 compute-0 podman[365993]: 2025-12-05 01:37:16.627436586 +0000 UTC m=+0.112570598 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:37:16 compute-0 podman[365994]: 2025-12-05 01:37:16.645378241 +0000 UTC m=+0.128350952 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 05 01:37:16 compute-0 podman[365995]: 2025-12-05 01:37:16.692390873 +0000 UTC m=+0.169119779 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:37:16 compute-0 python3.9[366076]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:16 compute-0 sudo[366055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:16 compute-0 ceph-mon[192914]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:17 compute-0 sudo[366232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uslwxkcmqcftdfriuwpbjcygepeshbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898636.9333768-718-27771598716012/AnsiballZ_copy.py'
Dec 05 01:37:17 compute-0 sudo[366232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:17 compute-0 python3.9[366234]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898636.9333768-718-27771598716012/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:18 compute-0 sudo[366232]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:18 compute-0 sudo[366308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtvmyafnetroxhcoxyipdooxutsemizw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898636.9333768-718-27771598716012/AnsiballZ_systemd.py'
Dec 05 01:37:18 compute-0 sudo[366308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:18 compute-0 podman[366310]: 2025-12-05 01:37:18.657035331 +0000 UTC m=+0.129343600 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=)
Dec 05 01:37:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:19 compute-0 ceph-mon[192914]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:19 compute-0 python3.9[366311]: ansible-systemd Invoked with state=started name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:37:19 compute-0 sudo[366308]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:20 compute-0 sudo[366482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffqyxxzmuzyhsozluouelgilyajegtls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898639.8134744-738-89754236630865/AnsiballZ_systemd.py'
Dec 05 01:37:20 compute-0 sudo[366482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:20 compute-0 python3.9[366484]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:37:20 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 05 01:37:20 compute-0 systemd[1]: libpod-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:37:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:20 compute-0 podman[366488]: 2025-12-05 01:37:20.906676677 +0000 UTC m=+0.112605659 container died fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:37:20 compute-0 systemd[1]: fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4-18209a1ade8bdfa2.timer: Deactivated successfully.
Dec 05 01:37:20 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.
Dec 05 01:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4-userdata-shm.mount: Deactivated successfully.
Dec 05 01:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4-merged.mount: Deactivated successfully.
Dec 05 01:37:20 compute-0 ceph-mon[192914]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:21 compute-0 podman[366488]: 2025-12-05 01:37:20.999869348 +0000 UTC m=+0.205798330 container cleanup fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:37:21 compute-0 podman[366488]: openstack_network_exporter
Dec 05 01:37:21 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 05 01:37:21 compute-0 systemd[1]: libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:37:21 compute-0 podman[366515]: openstack_network_exporter
Dec 05 01:37:21 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 05 01:37:21 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 05 01:37:21 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 05 01:37:21 compute-0 podman[366524]: 2025-12-05 01:37:21.287402076 +0000 UTC m=+0.123045032 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:37:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7a4705d85f5e502756949a6773e0741505fdf053b2df0d88d0d2e7284f0a4/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.
Dec 05 01:37:21 compute-0 podman[366525]: 2025-12-05 01:37:21.395663282 +0000 UTC m=+0.226058770 container init fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, vcs-type=git, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *bridge.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *coverage.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *datapath.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *iface.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *memory.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *ovnnorthd.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *ovn.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *ovsdbserver.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *pmd_perf.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *pmd_rxq.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: INFO    01:37:21 main.go:48: registering *vswitch.Collector
Dec 05 01:37:21 compute-0 openstack_network_exporter[366555]: NOTICE  01:37:21 main.go:76: listening on https://:9105/metrics
Dec 05 01:37:21 compute-0 podman[366525]: 2025-12-05 01:37:21.438748044 +0000 UTC m=+0.269143552 container start fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec 05 01:37:21 compute-0 podman[366525]: openstack_network_exporter
Dec 05 01:37:21 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 05 01:37:21 compute-0 sudo[366482]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:21 compute-0 podman[366571]: 2025-12-05 01:37:21.599173907 +0000 UTC m=+0.138512338 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec 05 01:37:22 compute-0 sudo[366741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zouwcimgvfxhuxmhokldogjakukmrkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898641.815708-746-135331037751754/AnsiballZ_find.py'
Dec 05 01:37:22 compute-0 sudo[366741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:22 compute-0 python3.9[366743]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:37:22 compute-0 sudo[366741]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:22 compute-0 ceph-mon[192914]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:23 compute-0 sudo[366893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgspxygpnpqnmtauddmayntpxgwvovti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898643.1708603-756-4156211822807/AnsiballZ_podman_container_info.py'
Dec 05 01:37:23 compute-0 sudo[366893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:24 compute-0 python3.9[366895]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 05 01:37:24 compute-0 sudo[366893]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:24 compute-0 ceph-mon[192914]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:25 compute-0 sudo[367057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asuoaknrsaghnzdufzbalneabszwfczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898644.5797973-764-104050372674108/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:25 compute-0 sudo[367057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:25 compute-0 python3.9[367059]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:25 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:37:25 compute-0 podman[367060]: 2025-12-05 01:37:25.755336286 +0000 UTC m=+0.170927400 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:37:25 compute-0 podman[367060]: 2025-12-05 01:37:25.792402799 +0000 UTC m=+0.207993843 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:37:25 compute-0 sudo[367057]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:25 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:37:26 compute-0 sudo[367240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmtchatgdvyjscykdnzmpdujwkltqgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898646.1361318-772-190785785923908/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:26 compute-0 sudo[367240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:26 compute-0 python3.9[367242]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:26 compute-0 ceph-mon[192914]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:27 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:37:27 compute-0 podman[367243]: 2025-12-05 01:37:27.112973248 +0000 UTC m=+0.150704540 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:37:27 compute-0 podman[367243]: 2025-12-05 01:37:27.148637902 +0000 UTC m=+0.186369204 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:37:27 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:37:27 compute-0 sudo[367240]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:28 compute-0 sudo[367424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsivhxgrclcfehttuzdbdipnwlwrekty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898647.5388958-780-30218945079603/AnsiballZ_file.py'
Dec 05 01:37:28 compute-0 sudo[367424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:28 compute-0 ceph-mon[192914]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:29 compute-0 python3.9[367426]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:29 compute-0 sudo[367424]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:29 compute-0 podman[158197]: time="2025-12-05T01:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Dec 05 01:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8083 "" "Go-http-client/1.1"
Dec 05 01:37:30 compute-0 sudo[367578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctrncffqzipdpkvtsizdspzwpdvniufj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898649.4332988-789-83267797509817/AnsiballZ_podman_container_info.py'
Dec 05 01:37:30 compute-0 sudo[367578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:30 compute-0 python3.9[367580]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 05 01:37:30 compute-0 sudo[367578]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:30 compute-0 ceph-mon[192914]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:31 compute-0 sudo[367690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:31 compute-0 sudo[367690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:31 compute-0 sudo[367690]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:31 compute-0 openstack_network_exporter[366555]: ERROR   01:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:37:31 compute-0 openstack_network_exporter[366555]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:37:31 compute-0 openstack_network_exporter[366555]: ERROR   01:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:37:31 compute-0 openstack_network_exporter[366555]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:37:31 compute-0 openstack_network_exporter[366555]: ERROR   01:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:37:31 compute-0 sudo[367730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:37:31 compute-0 sudo[367730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:31 compute-0 sudo[367730]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:31 compute-0 sudo[367802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqydvayrrocvarbequwpwamlwsmfypzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898651.0040119-797-129547134348686/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:31 compute-0 sudo[367802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:31 compute-0 sudo[367794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:31 compute-0 sudo[367794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:31 compute-0 sudo[367794]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:31 compute-0 sudo[367824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:37:31 compute-0 sudo[367824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:31 compute-0 python3.9[367821]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:31 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:37:31 compute-0 podman[367849]: 2025-12-05 01:37:31.896452163 +0000 UTC m=+0.153450457 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 05 01:37:31 compute-0 podman[367849]: 2025-12-05 01:37:31.932251861 +0000 UTC m=+0.189250105 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:37:32 compute-0 sudo[367802]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:32 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:37:32 compute-0 sudo[367824]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 19818213-27a7-4894-9592-2d977deb735d does not exist
Dec 05 01:37:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ca4021fa-954a-494e-ae0f-1946399889ab does not exist
Dec 05 01:37:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d6e13f8-39b6-4ca7-89f3-3c7ee643357d does not exist
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:37:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:37:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:37:32 compute-0 sudo[367987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:32 compute-0 sudo[367987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:32 compute-0 sudo[367987]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:32 compute-0 sudo[368036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:37:32 compute-0 sudo[368036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:32 compute-0 sudo[368036]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:32 compute-0 sudo[368085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:32 compute-0 sudo[368085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:32 compute-0 sudo[368085]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:32 compute-0 sudo[368134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqowaadhbonnmeeawkfvofxcrapweyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898652.28347-805-128696728070075/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:32 compute-0 sudo[368134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:32 compute-0 sudo[368137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:37:32 compute-0 sudo[368137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:33 compute-0 python3.9[368141]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:33 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:37:33 compute-0 podman[368170]: 2025-12-05 01:37:33.236986835 +0000 UTC m=+0.122462456 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:37:33 compute-0 podman[368170]: 2025-12-05 01:37:33.269138769 +0000 UTC m=+0.154614390 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:37:33 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:37:33 compute-0 sudo[368134]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:33 compute-0 ceph-mon[192914]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
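The mgr/mon pgmap lines recurring through this section are the quickest health signal in the log: 321 PGs all active+clean, 148 MiB of 60 GiB raw used. A small parser for that line shape, with the format taken from the entries here:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    def parse_pgmap(line: str):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None

    sample = ("pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, "
              "148 MiB used, 60 GiB / 60 GiB avail")
    info = parse_pgmap(sample)
    assert info and info["states"].strip() == "321 active+clean"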
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.520296925 +0000 UTC m=+0.072308245 container create 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.498633865 +0000 UTC m=+0.050645185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:33 compute-0 systemd[1]: Started libpod-conmon-5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb.scope.
Dec 05 01:37:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.658471652 +0000 UTC m=+0.210482982 container init 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.670581513 +0000 UTC m=+0.222592833 container start 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.679176354 +0000 UTC m=+0.231187744 container attach 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 01:37:33 compute-0 xenodochial_dewdney[368295]: 167 167
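The throwaway ceph container (xenodochial_dewdney) prints "167 167" and exits immediately; 167:167 is the uid:gid of the ceph account in the Ceph container images, so this reads as cephadm probing ownership inside the image. That interpretation is an assumption, since the probed command itself is not logged; one way to check the uid/gid mapping against the same image by hand:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image which uid/gid the 'ceph' account maps to; expect "167 167"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "/bin/sh", IMAGE,
         "-c", "id -u ceph; id -g ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(out)  # expected: ['167', '167']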
Dec 05 01:37:33 compute-0 systemd[1]: libpod-5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb.scope: Deactivated successfully.
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.681821259 +0000 UTC m=+0.233832589 container died 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 01:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a6c58fcf9a80809fd9410f78c89100e65516e3530d44abe1e6f79087ff2454d-merged.mount: Deactivated successfully.
Dec 05 01:37:33 compute-0 podman[368256]: 2025-12-05 01:37:33.745377167 +0000 UTC m=+0.297388487 container remove 5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:37:33 compute-0 systemd[1]: libpod-conmon-5c05d4ee164050bac23c993e58de03301235d40ff3b88cc815e5c46978d24adb.scope: Deactivated successfully.
Dec 05 01:37:33 compute-0 podman[368379]: 2025-12-05 01:37:33.970761257 +0000 UTC m=+0.071511213 container create 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:37:34 compute-0 podman[368379]: 2025-12-05 01:37:33.936513934 +0000 UTC m=+0.037263960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:34 compute-0 systemd[1]: Started libpod-conmon-8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132.scope.
Dec 05 01:37:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:34 compute-0 sudo[368439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttuyopavssqilevtbjijyihkvsrfrknd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898653.608156-813-125474688737723/AnsiballZ_file.py'
Dec 05 01:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:34 compute-0 sudo[368439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:34 compute-0 podman[368379]: 2025-12-05 01:37:34.14078224 +0000 UTC m=+0.241532286 container init 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:37:34 compute-0 podman[368379]: 2025-12-05 01:37:34.166064911 +0000 UTC m=+0.266814897 container start 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:37:34 compute-0 podman[368379]: 2025-12-05 01:37:34.174036736 +0000 UTC m=+0.274786682 container attach 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Dec 05 01:37:34 compute-0 python3.9[368442]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:34 compute-0 sudo[368439]: pam_unix(sudo:session): session closed for user root
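The ansible.builtin.file task just above recursively forces /var/lib/openstack/healthchecks/ceilometer_agent_compute to uid/gid 42405 with mode 0700, matching the read-only healthcheck mount in the ceilometer container's config_data. A rough standard-library equivalent of that single task, with the path, ids, and mode taken from the log and error handling omitted:

    import os

    PATH = "/var/lib/openstack/healthchecks/ceilometer_agent_compute"
    UID = GID = 42405
    MODE = 0o700

    os.makedirs(PATH, exist_ok=True)                   # state=directory
    for dirpath, dirnames, filenames in os.walk(PATH): # recurse=True
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(entry, UID, GID)                  # owner/group=42405
            os.chmod(entry, MODE)                      # mode=0700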
Dec 05 01:37:34 compute-0 podman[368469]: 2025-12-05 01:37:34.71876594 +0000 UTC m=+0.119733139 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 01:37:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:34 compute-0 ceph-mon[192914]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:35 compute-0 sudo[368624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozrxmppwovdvfnsvezoblekhpfpepuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898654.6480217-822-115557658575695/AnsiballZ_podman_container_info.py'
Dec 05 01:37:35 compute-0 sudo[368624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:35 compute-0 python3.9[368628]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
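The podman_container_info task is essentially a wrapper over `podman container inspect`; the same existence-and-state check can be done from plain Python, with the container name taken from the task above:

    import json
    import subprocess

    result = subprocess.run(
        ["podman", "container", "inspect", "node_exporter"],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)[0]   # inspect emits a JSON array
    print(info["State"]["Status"], info["Config"]["User"])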
Dec 05 01:37:35 compute-0 clever_sammet[368437]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:37:35 compute-0 clever_sammet[368437]: --> relative data size: 1.0
Dec 05 01:37:35 compute-0 clever_sammet[368437]: --> All data devices are unavailable
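The clever_sammet output reads as the batch planner's summary: ceph-volume sees 0 physical and 3 LVM data devices, and "All data devices are unavailable" indicates none of them is a fresh target. That is consistent with the `lvm list` JSON further down, where all three LVs already carry OSD tags for osd 0-2, so the batch is an idempotent no-op rather than a failure; reading it that way is my inference from the surrounding log, not something this line states. The tag state can be confirmed on the host directly from LVM (requires root):

    import subprocess

    # List each LV with its tags; ceph-volume stamps prepared OSDs
    # with ceph.* tags such as ceph.osd_id=...
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_path,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        path, _, tags = line.strip().partition(" ")
        if "ceph.osd_id=" in tags:
            print(path, "already holds a Ceph OSD")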
Dec 05 01:37:35 compute-0 systemd[1]: libpod-8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132.scope: Deactivated successfully.
Dec 05 01:37:35 compute-0 systemd[1]: libpod-8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132.scope: Consumed 1.198s CPU time.
Dec 05 01:37:35 compute-0 podman[368379]: 2025-12-05 01:37:35.441110149 +0000 UTC m=+1.541860145 container died 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:37:35 compute-0 sudo[368624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f6b950a43e820e6224443e9ce097cb724c0745d4b91b37772ef5a4352f001ed-merged.mount: Deactivated successfully.
Dec 05 01:37:35 compute-0 podman[368379]: 2025-12-05 01:37:35.537956844 +0000 UTC m=+1.638706810 container remove 8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:37:35 compute-0 systemd[1]: libpod-conmon-8abb65d2555996990ebb01fb5bf013c36e32863b1f845228f19b0d3ace264132.scope: Deactivated successfully.
Dec 05 01:37:35 compute-0 sudo[368137]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:35 compute-0 sudo[368688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:35 compute-0 sudo[368688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:35 compute-0 sudo[368688]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:35 compute-0 sudo[368724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:37:35 compute-0 sudo[368724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:35 compute-0 sudo[368724]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:35 compute-0 podman[368712]: 2025-12-05 01:37:35.891796098 +0000 UTC m=+0.140195215 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:37:35 compute-0 sudo[368801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:35 compute-0 sudo[368801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:35 compute-0 sudo[368801]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:36 compute-0 sudo[368847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:37:36 compute-0 sudo[368847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:36 compute-0 sudo[368938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguahfbddmwxjbjyqgnaktyqmroeaxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898655.8331769-830-69672861267857/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:36 compute-0 sudo[368938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:36 compute-0 python3.9[368948]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.713599997 +0000 UTC m=+0.111327493 container create 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 05 01:37:36 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec 05 01:37:36 compute-0 podman[368976]: 2025-12-05 01:37:36.75529214 +0000 UTC m=+0.182831465 container exec 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.679796976 +0000 UTC m=+0.077524542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:36 compute-0 systemd[1]: Started libpod-conmon-96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869.scope.
Dec 05 01:37:36 compute-0 podman[368976]: 2025-12-05 01:37:36.787163926 +0000 UTC m=+0.214703251 container exec_died 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
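The node_exporter config_data in the exec events above pins host port 9100, disables a long list of collectors, and points `--web.config.file` at a config alongside mounted TLS material, which suggests the metrics endpoint is served over HTTPS with a deployment-local CA. A smoke-test sketch under that assumption (it will fail if the web config additionally enforces client-certificate auth; certificate verification is deliberately relaxed here):

    import ssl
    import urllib.request

    # Assumption: HTTPS on :9100 per the config_data above; skip
    # verification because the CA is deployment-local.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx, timeout=5) as resp:
        body = resp.read().decode()
    print([l for l in body.splitlines() if l.startswith("node_")][:3])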
Dec 05 01:37:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.857217137 +0000 UTC m=+0.254944643 container init 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:37:36 compute-0 sudo[368938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.87582632 +0000 UTC m=+0.273553796 container start 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.880525763 +0000 UTC m=+0.278253239 container attach 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:37:36 compute-0 zealous_bardeen[369011]: 167 167
Dec 05 01:37:36 compute-0 systemd[1]: libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:37:36 compute-0 systemd[1]: libpod-96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869.scope: Deactivated successfully.
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.888305341 +0000 UTC m=+0.286032817 container died 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:37:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-257e27456f65184c5c120e6685286125f8729c12e09ec83b2b101077c152819c-merged.mount: Deactivated successfully.
Dec 05 01:37:36 compute-0 podman[368982]: 2025-12-05 01:37:36.943610827 +0000 UTC m=+0.341338313 container remove 96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:37:36 compute-0 systemd[1]: libpod-conmon-96cce4c48017ba7edabe9524ed760e715670350b750ce46c0a23fb0479cd3869.scope: Deactivated successfully.
Dec 05 01:37:36 compute-0 ceph-mon[192914]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:37 compute-0 podman[369091]: 2025-12-05 01:37:37.243621967 +0000 UTC m=+0.097481203 container create 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:37:37 compute-0 podman[369091]: 2025-12-05 01:37:37.208405396 +0000 UTC m=+0.062264682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:37 compute-0 systemd[1]: Started libpod-conmon-24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5.scope.
Dec 05 01:37:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d2f5574813749d15558f03d375c140904f30aa784a30cf09ab642ab10915a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d2f5574813749d15558f03d375c140904f30aa784a30cf09ab642ab10915a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d2f5574813749d15558f03d375c140904f30aa784a30cf09ab642ab10915a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d2f5574813749d15558f03d375c140904f30aa784a30cf09ab642ab10915a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:37 compute-0 podman[369091]: 2025-12-05 01:37:37.406945202 +0000 UTC m=+0.260804418 container init 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:37:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:37 compute-0 podman[369091]: 2025-12-05 01:37:37.428069976 +0000 UTC m=+0.281929222 container start 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:37:37 compute-0 podman[369091]: 2025-12-05 01:37:37.434584039 +0000 UTC m=+0.288443285 container attach 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:37:37 compute-0 podman[369132]: 2025-12-05 01:37:37.456208357 +0000 UTC m=+0.136745037 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
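The health_status=healthy events above are podman's periodic healthchecks running the configured test (here '/openstack/healthcheck compute') inside the container and recording the failing streak. The same check can be forced on demand; a sketch using the podman CLI, with the container name taken from the event:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # and returns 0 when the check passes.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")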
Dec 05 01:37:37 compute-0 sudo[369230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nptteoehjtsoovaggrlgvlfigpoegpex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898657.1434212-838-2796784500235/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:37 compute-0 sudo[369230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:38 compute-0 python3.9[369232]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:38 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec 05 01:37:38 compute-0 podman[369233]: 2025-12-05 01:37:38.211036822 +0000 UTC m=+0.153918071 container exec 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]: {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     "0": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "devices": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "/dev/loop3"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             ],
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_name": "ceph_lv0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_size": "21470642176",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "name": "ceph_lv0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "tags": {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_name": "ceph",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.crush_device_class": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.encrypted": "0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_id": "0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.vdo": "0"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             },
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "vg_name": "ceph_vg0"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         }
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     ],
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     "1": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "devices": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "/dev/loop4"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             ],
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_name": "ceph_lv1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_size": "21470642176",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "name": "ceph_lv1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "tags": {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_name": "ceph",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.crush_device_class": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.encrypted": "0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_id": "1",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.vdo": "0"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             },
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "vg_name": "ceph_vg1"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         }
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     ],
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     "2": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "devices": [
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "/dev/loop5"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             ],
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_name": "ceph_lv2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_size": "21470642176",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "name": "ceph_lv2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "tags": {
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.cluster_name": "ceph",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.crush_device_class": "",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.encrypted": "0",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osd_id": "2",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:                 "ceph.vdo": "0"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             },
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "type": "block",
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:             "vg_name": "ceph_vg2"
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:         }
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]:     ]
Dec 05 01:37:38 compute-0 suspicious_heisenberg[369139]: }
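The suspicious_heisenberg block above is the full payload of the `ceph-volume lvm list --format json` call requested via cephadm a few lines earlier: a JSON object keyed by OSD id, one volume record per OSD. Once the journal prefixes are stripped it is plain JSON; a sketch mapping each OSD to its LV and backing device, with the structure exactly as captured above (the inline payload is abbreviated to one OSD):

    import json

    # Abbreviated copy of the logged `ceph-volume lvm list --format json`
    # output, journal prefixes stripped; full log shows osd "0".."2".
    payload = """{
        "0": [
            {"lv_path": "/dev/ceph_vg0/ceph_lv0",
             "devices": ["/dev/loop3"],
             "tags": {"ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186"}}
        ]
    }"""

    lvm = json.loads(payload)
    for osd_id, volumes in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            print(f"osd.{osd_id}: {vol['lv_path']} on {','.join(vol['devices'])}")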
Dec 05 01:37:38 compute-0 podman[369233]: 2025-12-05 01:37:38.248041203 +0000 UTC m=+0.190922452 container exec_died 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:37:38 compute-0 systemd[1]: libpod-24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5.scope: Deactivated successfully.
Dec 05 01:37:38 compute-0 sudo[369230]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:38 compute-0 systemd[1]: libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:37:38 compute-0 podman[369265]: 2025-12-05 01:37:38.3499443 +0000 UTC m=+0.040668415 container died 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5d2f5574813749d15558f03d375c140904f30aa784a30cf09ab642ab10915a4-merged.mount: Deactivated successfully.
Dec 05 01:37:38 compute-0 podman[369265]: 2025-12-05 01:37:38.455264733 +0000 UTC m=+0.145988778 container remove 24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:37:38 compute-0 systemd[1]: libpod-conmon-24e9a65e88b039833009f9ea92da0ccc6aed56250a3fb450bcf74a31dd5aaeb5.scope: Deactivated successfully.
Dec 05 01:37:38 compute-0 sudo[368847]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:38 compute-0 sudo[369304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:38 compute-0 sudo[369304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:38 compute-0 sudo[369304]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:38 compute-0 sudo[369368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:37:38 compute-0 sudo[369368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:38 compute-0 sudo[369368]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:38 compute-0 sudo[369413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:38 compute-0 sudo[369413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:38 compute-0 sudo[369413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:38 compute-0 ceph-mon[192914]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:39 compute-0 sudo[369461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:37:39 compute-0 sudo[369461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:39 compute-0 sudo[369528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bndnqqdjwhlhfkyfwobystpkhyaicopi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898658.6288362-846-108169659831729/AnsiballZ_file.py'
Dec 05 01:37:39 compute-0 sudo[369528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:39 compute-0 python3.9[369530]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:39 compute-0 sudo[369528]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.703408914 +0000 UTC m=+0.098408859 container create d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.666487275 +0000 UTC m=+0.061487270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:39 compute-0 systemd[1]: Started libpod-conmon-d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1.scope.
Dec 05 01:37:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.855813691 +0000 UTC m=+0.250813676 container init d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.875292409 +0000 UTC m=+0.270292344 container start d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.881668729 +0000 UTC m=+0.276668684 container attach d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:37:39 compute-0 frosty_kilby[369644]: 167 167
Dec 05 01:37:39 compute-0 systemd[1]: libpod-d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1.scope: Deactivated successfully.
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.888056548 +0000 UTC m=+0.283056503 container died d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:37:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e31c579966c217fc0906ee37abd815f6714515fef4576b7d8de8179d9eac5d96-merged.mount: Deactivated successfully.
Dec 05 01:37:39 compute-0 podman[369592]: 2025-12-05 01:37:39.974422468 +0000 UTC m=+0.369422403 container remove d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:37:39 compute-0 systemd[1]: libpod-conmon-d73e87b6eb0381cef2a7a8fe210d29f1cb2a7805bbbfdfea45e37014ca6fe2d1.scope: Deactivated successfully.
Dec 05 01:37:40 compute-0 podman[369729]: 2025-12-05 01:37:40.272703619 +0000 UTC m=+0.082298826 container create 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:37:40 compute-0 sudo[369768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agsbuzcffwoejovormlaihugcgntiqqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898659.717961-855-119024137690563/AnsiballZ_podman_container_info.py'
Dec 05 01:37:40 compute-0 sudo[369768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:40 compute-0 podman[369729]: 2025-12-05 01:37:40.241786079 +0000 UTC m=+0.051381316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:37:40 compute-0 systemd[1]: Started libpod-conmon-3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0.scope.
Dec 05 01:37:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041fee57f2f8da3f7470b0317d2b2202f8c1ec77c3473f55270141d2655ac391/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041fee57f2f8da3f7470b0317d2b2202f8c1ec77c3473f55270141d2655ac391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041fee57f2f8da3f7470b0317d2b2202f8c1ec77c3473f55270141d2655ac391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041fee57f2f8da3f7470b0317d2b2202f8c1ec77c3473f55270141d2655ac391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:37:40 compute-0 podman[369729]: 2025-12-05 01:37:40.438477383 +0000 UTC m=+0.248072610 container init 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:37:40 compute-0 podman[369729]: 2025-12-05 01:37:40.460308387 +0000 UTC m=+0.269903594 container start 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:37:40 compute-0 podman[369729]: 2025-12-05 01:37:40.4664578 +0000 UTC m=+0.276053007 container attach 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:37:40 compute-0 python3.9[369770]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 05 01:37:40 compute-0 sudo[369768]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:40 compute-0 ceph-mon[192914]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:41 compute-0 thirsty_colden[369773]: {
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_id": 0,
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "type": "bluestore"
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     },
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_id": 1,
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "type": "bluestore"
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     },
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_id": 2,
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:         "type": "bluestore"
Dec 05 01:37:41 compute-0 thirsty_colden[369773]:     }
Dec 05 01:37:41 compute-0 thirsty_colden[369773]: }
Dec 05 01:37:41 compute-0 systemd[1]: libpod-3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0.scope: Deactivated successfully.
Dec 05 01:37:41 compute-0 podman[369729]: 2025-12-05 01:37:41.704034385 +0000 UTC m=+1.513629612 container died 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:37:41 compute-0 systemd[1]: libpod-3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0.scope: Consumed 1.237s CPU time.
Dec 05 01:37:41 compute-0 sudo[369968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhaqyogclxbeuwfdcmoweinheegaulxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898661.2471778-863-56763648581559/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:41 compute-0 sudo[369968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-041fee57f2f8da3f7470b0317d2b2202f8c1ec77c3473f55270141d2655ac391-merged.mount: Deactivated successfully.
Dec 05 01:37:41 compute-0 podman[369729]: 2025-12-05 01:37:41.791495395 +0000 UTC m=+1.601090602 container remove 3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:37:41 compute-0 systemd[1]: libpod-conmon-3508443cb7b0521d2f186315f0d19a64f1c54a28fe0d47cb223e9268793e10b0.scope: Deactivated successfully.
Dec 05 01:37:41 compute-0 sudo[369461]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:37:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:37:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:41 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev eb2157a5-f0d3-46a9-8370-85acef1ab88e does not exist
Dec 05 01:37:41 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9dc754ba-1324-4949-bc59-115b60b7a7e4 does not exist
Dec 05 01:37:41 compute-0 sudo[369983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:37:41 compute-0 python3.9[369978]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:41 compute-0 sudo[369983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:42 compute-0 sudo[369983]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:42 compute-0 systemd[1]: Started libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope.
Dec 05 01:37:42 compute-0 sudo[370009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:37:42 compute-0 sudo[370009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:37:42 compute-0 sudo[370009]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:42 compute-0 podman[370007]: 2025-12-05 01:37:42.162237025 +0000 UTC m=+0.145261448 container exec b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:37:42 compute-0 podman[370007]: 2025-12-05 01:37:42.198231017 +0000 UTC m=+0.181255420 container exec_died b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:37:42 compute-0 systemd[1]: libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:37:42 compute-0 sudo[369968]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:37:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:43 compute-0 sudo[370213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svpbyuvjhhfgotbuqhvesuwqcmxgkofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898662.5599837-871-190837163726259/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:43 compute-0 sudo[370213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:43 compute-0 ceph-mon[192914]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:43 compute-0 python3.9[370215]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:44 compute-0 systemd[1]: Started libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope.
Dec 05 01:37:44 compute-0 podman[370216]: 2025-12-05 01:37:44.076984739 +0000 UTC m=+0.143500458 container exec b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:37:44 compute-0 podman[370216]: 2025-12-05 01:37:44.109314658 +0000 UTC m=+0.175830337 container exec_died b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:37:44 compute-0 systemd[1]: libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:37:44 compute-0 sudo[370213]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:44 compute-0 ceph-mon[192914]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:45 compute-0 sudo[370394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rckurugdclkjojusnqiydqlinnncsagy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898664.463471-879-113953425112689/AnsiballZ_file.py'
Dec 05 01:37:45 compute-0 sudo[370394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:45 compute-0 python3.9[370396]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:45 compute-0 sudo[370394]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:46 compute-0 sudo[370546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kypdakpryqadiyaqmivqiogjoudxzhds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898665.5840938-888-148385612183013/AnsiballZ_podman_container_info.py'
Dec 05 01:37:46 compute-0 sudo[370546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:37:46 compute-0 python3.9[370548]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 05 01:37:46 compute-0 sudo[370546]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:46 compute-0 ceph-mon[192914]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:47 compute-0 sudo[370752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdgsuukvbtlzbhybnangwsysrthayuip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898666.8278723-896-209186822375662/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:47 compute-0 podman[370684]: 2025-12-05 01:37:47.41457806 +0000 UTC m=+0.124556705 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 05 01:37:47 compute-0 podman[370685]: 2025-12-05 01:37:47.41707265 +0000 UTC m=+0.125513102 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:37:47 compute-0 sudo[370752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:47 compute-0 podman[370686]: 2025-12-05 01:37:47.452210459 +0000 UTC m=+0.155743113 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:37:47 compute-0 python3.9[370767]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:47 compute-0 systemd[1]: Started libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope.
Dec 05 01:37:47 compute-0 podman[370771]: 2025-12-05 01:37:47.807668048 +0000 UTC m=+0.155877686 container exec fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 05 01:37:47 compute-0 podman[370771]: 2025-12-05 01:37:47.849068773 +0000 UTC m=+0.197278381 container exec_died fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.6)
Dec 05 01:37:47 compute-0 sudo[370752]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:47 compute-0 systemd[1]: libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:37:48 compute-0 sudo[370953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahivpidgvitoepxgzhrthvyydmwnsgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898668.1795394-904-142907089122671/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:48 compute-0 sudo[370953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:48 compute-0 podman[370955]: 2025-12-05 01:37:48.847798779 +0000 UTC m=+0.121813408 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Dec 05 01:37:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:48 compute-0 python3.9[370956]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:49 compute-0 ceph-mon[192914]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:49 compute-0 systemd[1]: Started libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope.
Dec 05 01:37:49 compute-0 podman[370977]: 2025-12-05 01:37:49.12583824 +0000 UTC m=+0.144533317 container exec fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:37:49 compute-0 podman[370977]: 2025-12-05 01:37:49.164753495 +0000 UTC m=+0.183448572 container exec_died fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350)
Dec 05 01:37:49 compute-0 sudo[370953]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:49 compute-0 systemd[1]: libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:37:50 compute-0 sudo[371156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvsnsqozgkqshxxxwyfwiefxczcjwkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898669.5226793-912-38715573086525/AnsiballZ_file.py'
Dec 05 01:37:50 compute-0 sudo[371156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:50 compute-0 python3.9[371158]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:50 compute-0 sudo[371156]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:51 compute-0 ceph-mon[192914]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:51 compute-0 sudo[371308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firjwdefqdevlftirtqpsldptvjuauoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898670.7134407-921-171258677355885/AnsiballZ_podman_container_info.py'
Dec 05 01:37:51 compute-0 sudo[371308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:51 compute-0 python3.9[371310]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 05 01:37:51 compute-0 sudo[371308]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:51 compute-0 podman[371322]: 2025-12-05 01:37:51.728494796 +0000 UTC m=+0.135472482 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:37:51 compute-0 podman[371369]: 2025-12-05 01:37:51.898311904 +0000 UTC m=+0.128210418 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public)
Dec 05 01:37:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:52 compute-0 sudo[371513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvyoxuhnmwghordmtvjovmpxylteqold ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898671.9540963-929-264436968961563/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:52 compute-0 sudo[371513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:52 compute-0 python3.9[371515]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:52 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:37:52 compute-0 podman[371516]: 2025-12-05 01:37:52.983697337 +0000 UTC m=+0.160537187 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 01:37:53 compute-0 ceph-mon[192914]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:53 compute-0 podman[371516]: 2025-12-05 01:37:53.018362152 +0000 UTC m=+0.195202022 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 01:37:53 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:37:53 compute-0 sudo[371513]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:54 compute-0 sudo[371696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzrrqfpypvohyoxpxnbpgirsaqpvrrhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898673.5261397-937-248940274002006/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:54 compute-0 sudo[371696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:54 compute-0 python3.9[371698]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:54 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:37:55 compute-0 ceph-mon[192914]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:55 compute-0 podman[371699]: 2025-12-05 01:37:55.008551119 +0000 UTC m=+0.149247810 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 01:37:55 compute-0 podman[371699]: 2025-12-05 01:37:55.043568124 +0000 UTC m=+0.184264815 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:37:55 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:37:55 compute-0 sudo[371696]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:37:56.164 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:37:56.166 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:37:56.166 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:37:56 compute-0 sudo[371879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntevylufzynafmukbwzcaaaadijjmwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898675.4323533-945-19802688228712/AnsiballZ_file.py'
Dec 05 01:37:56 compute-0 sudo[371879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:56 compute-0 python3.9[371881]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:37:56 compute-0 sudo[371879]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:57 compute-0 ceph-mon[192914]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:37:57 compute-0 sudo[372031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyuzertczeiqmigtyhgmwntdztywctuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898677.178705-954-174784370876140/AnsiballZ_podman_container_info.py'
Dec 05 01:37:57 compute-0 sudo[372031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:58 compute-0 python3.9[372033]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 05 01:37:58 compute-0 sudo[372031]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:59 compute-0 ceph-mon[192914]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:37:59 compute-0 sudo[372194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgzkuimhnxbahdjgfbzhcgrtcyvtpwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898678.5046055-962-170032715888656/AnsiballZ_podman_container_exec.py'
Dec 05 01:37:59 compute-0 sudo[372194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:37:59 compute-0 python3.9[372196]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:37:59 compute-0 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec 05 01:37:59 compute-0 podman[372197]: 2025-12-05 01:37:59.471163608 +0000 UTC m=+0.151488892 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, version=9.4, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Dec 05 01:37:59 compute-0 podman[372197]: 2025-12-05 01:37:59.48154533 +0000 UTC m=+0.161870594 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container)
Dec 05 01:37:59 compute-0 sudo[372194]: pam_unix(sudo:session): session closed for user root
Dec 05 01:37:59 compute-0 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:37:59 compute-0 podman[158197]: time="2025-12-05T01:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8107 "" "Go-http-client/1.1"
Dec 05 01:38:00 compute-0 sudo[372377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rudkawhuarvrdmtzcotzbkdzthxttomw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898679.8416512-970-59937539843073/AnsiballZ_podman_container_exec.py'
Dec 05 01:38:00 compute-0 sudo[372377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:00 compute-0 python3.9[372379]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:38:00 compute-0 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec 05 01:38:00 compute-0 podman[372380]: 2025-12-05 01:38:00.833525953 +0000 UTC m=+0.165298541 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:38:00 compute-0 podman[372380]: 2025-12-05 01:38:00.86895666 +0000 UTC m=+0.200729228 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, config_id=edpm, release-0.7.12=)
Dec 05 01:38:00 compute-0 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:38:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:00 compute-0 sudo[372377]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:01 compute-0 ceph-mon[192914]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:01 compute-0 openstack_network_exporter[366555]: ERROR   01:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:38:01 compute-0 openstack_network_exporter[366555]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:38:01 compute-0 openstack_network_exporter[366555]: ERROR   01:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:38:01 compute-0 openstack_network_exporter[366555]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:38:01 compute-0 openstack_network_exporter[366555]: ERROR   01:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:38:01 compute-0 sudo[372556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvrfpnaivedvwhcxwthaqpmbmxjrzbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898681.2054296-978-19901387677401/AnsiballZ_file.py'
Dec 05 01:38:01 compute-0 sudo[372556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:01 compute-0 python3.9[372558]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:01 compute-0 sudo[372556]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:02 compute-0 sudo[372708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blpwnsmkiwpmautjwzhwgbzvqgembggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898682.3769896-987-104604619682789/AnsiballZ_podman_container_info.py'
Dec 05 01:38:02 compute-0 sudo[372708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:03 compute-0 ceph-mon[192914]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:03 compute-0 python3.9[372710]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 05 01:38:03 compute-0 sudo[372708]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:04 compute-0 sudo[372871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaubstovitxylhkmkteuicimlriircnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898683.6075037-995-167133787513505/AnsiballZ_podman_container_exec.py'
Dec 05 01:38:04 compute-0 sudo[372871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:04 compute-0 python3.9[372873]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:38:04 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec 05 01:38:04 compute-0 podman[372874]: 2025-12-05 01:38:04.581254182 +0000 UTC m=+0.133393544 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 01:38:04 compute-0 podman[372874]: 2025-12-05 01:38:04.616971226 +0000 UTC m=+0.169110618 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:38:04 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec 05 01:38:04 compute-0 sudo[372871]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:05 compute-0 ceph-mon[192914]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:05 compute-0 podman[373026]: 2025-12-05 01:38:05.556037164 +0000 UTC m=+0.104837740 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 01:38:05 compute-0 sudo[373069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jazvyuximuocbexnkqhctrscwqbkulkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898685.0195463-1003-281415035959538/AnsiballZ_podman_container_exec.py'
Dec 05 01:38:05 compute-0 sudo[373069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:05 compute-0 python3.9[373073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:38:05 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec 05 01:38:05 compute-0 podman[373074]: 2025-12-05 01:38:05.976021979 +0000 UTC m=+0.163457430 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:38:06 compute-0 podman[373074]: 2025-12-05 01:38:06.01338755 +0000 UTC m=+0.200822971 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 01:38:06 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec 05 01:38:06 compute-0 sudo[373069]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:06 compute-0 podman[373088]: 2025-12-05 01:38:06.106702105 +0000 UTC m=+0.139566637 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:38:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:07 compute-0 ceph-mon[192914]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:07 compute-0 podman[373241]: 2025-12-05 01:38:07.721239904 +0000 UTC m=+0.119898484 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:38:07 compute-0 sudo[373295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydjgndymvojllubxgxouwmutsjrrupb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898686.4628124-1011-3751103780520/AnsiballZ_file.py'
Dec 05 01:38:07 compute-0 sudo[373295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:07 compute-0 python3.9[373297]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:08 compute-0 sudo[373295]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:08 compute-0 nova_compute[349548]: 2025-12-05 01:38:08.731 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:08 compute-0 sudo[373447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njoocoigmcuetuwqbhdpwlnzpkvlvkxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898688.3589287-1020-133015018810052/AnsiballZ_podman_container_info.py'
Dec 05 01:38:08 compute-0 sudo[373447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:09 compute-0 ceph-mon[192914]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.115 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.118 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:38:09 compute-0 python3.9[373449]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 05 01:38:09 compute-0 sudo[373447]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:38:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447188084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:38:09 compute-0 nova_compute[349548]: 2025-12-05 01:38:09.645 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:38:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2447188084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.169 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.172 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4541MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.172 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.173 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.256 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.256 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.278 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:38:10 compute-0 sudo[373652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowmxinzhgrpxwvgkrwouvrsaqokyowu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898690.0780191-1028-168698613281630/AnsiballZ_podman_container_exec.py'
Dec 05 01:38:10 compute-0 sudo[373652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:38:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517565991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.759 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.770 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.794 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.795 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:38:10 compute-0 nova_compute[349548]: 2025-12-05 01:38:10.796 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:38:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:10 compute-0 python3.9[373654]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:38:11 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2517565991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:38:11 compute-0 ceph-mon[192914]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:11 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec 05 01:38:11 compute-0 podman[373657]: 2025-12-05 01:38:11.099712156 +0000 UTC m=+0.140226746 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 05 01:38:11 compute-0 podman[373657]: 2025-12-05 01:38:11.137543609 +0000 UTC m=+0.178058209 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 01:38:11 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec 05 01:38:11 compute-0 sudo[373652]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:11 compute-0 nova_compute[349548]: 2025-12-05 01:38:11.796 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:11 compute-0 nova_compute[349548]: 2025-12-05 01:38:11.798 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:38:11 compute-0 nova_compute[349548]: 2025-12-05 01:38:11.798 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:38:11 compute-0 nova_compute[349548]: 2025-12-05 01:38:11.821 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:38:11 compute-0 nova_compute[349548]: 2025-12-05 01:38:11.821 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:38:12 compute-0 sudo[373837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mesyatptbgkttypobtoqewmxvkzclgrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898691.5036824-1036-132080120582462/AnsiballZ_podman_container_exec.py'
Dec 05 01:38:12 compute-0 sudo[373837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:12 compute-0 python3.9[373839]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:38:12 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec 05 01:38:12 compute-0 podman[373840]: 2025-12-05 01:38:12.391046883 +0000 UTC m=+0.126532891 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:38:12 compute-0 podman[373840]: 2025-12-05 01:38:12.427007894 +0000 UTC m=+0.162493872 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:38:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:12 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec 05 01:38:12 compute-0 sudo[373837]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:13 compute-0 ceph-mon[192914]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:13 compute-0 sudo[374017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shbedtquimyelbvwbtwtassrngivgzdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898692.7886488-1044-149417946418632/AnsiballZ_file.py'
Dec 05 01:38:13 compute-0 sudo[374017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:13 compute-0 python3.9[374019]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:13 compute-0 sudo[374017]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:14 compute-0 sudo[374169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txfeknjzcpnsixqokyisqntksipmqnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898693.8368707-1053-132350786466294/AnsiballZ_file.py'
Dec 05 01:38:14 compute-0 sudo[374169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:14 compute-0 python3.9[374171]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:14 compute-0 sudo[374169]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:15 compute-0 ceph-mon[192914]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:15 compute-0 sudo[374321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkotblohmuuowplbiwvbxwhhbvbvqmby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898695.039001-1061-66153215608158/AnsiballZ_stat.py'
Dec 05 01:38:15 compute-0 sudo[374321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:15 compute-0 python3.9[374323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:15 compute-0 sudo[374321]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:38:16
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'vms', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:16 compute-0 sudo[374399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgxbdwlfnwenmjqxfzxcnupsiyuexxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898695.039001-1061-66153215608158/AnsiballZ_file.py'
Dec 05 01:38:16 compute-0 sudo[374399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:16 compute-0 python3.9[374401]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/telemetry.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/telemetry.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:38:16 compute-0 sudo[374399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:17 compute-0 ceph-mon[192914]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:17 compute-0 sudo[374551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tydgajxurdlswfiiggfclxwbngkjavby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898697.011667-1074-232157359792593/AnsiballZ_file.py'
Dec 05 01:38:17 compute-0 sudo[374551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:17 compute-0 podman[374553]: 2025-12-05 01:38:17.729988905 +0000 UTC m=+0.127877999 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 05 01:38:17 compute-0 podman[374554]: 2025-12-05 01:38:17.753665931 +0000 UTC m=+0.146389659 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:38:17 compute-0 podman[374555]: 2025-12-05 01:38:17.79950853 +0000 UTC m=+0.197729403 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 01:38:17 compute-0 python3.9[374556]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:17 compute-0 sudo[374551]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:19 compute-0 ceph-mon[192914]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:19 compute-0 sudo[374775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvkfgymndalxhmkgfbgcogyjcvckkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898698.153194-1082-192872444423417/AnsiballZ_stat.py'
Dec 05 01:38:19 compute-0 sudo[374775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:19 compute-0 podman[374735]: 2025-12-05 01:38:19.605044992 +0000 UTC m=+0.141217864 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, config_id=edpm, version=9.4, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9)
Dec 05 01:38:19 compute-0 python3.9[374781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:19 compute-0 sudo[374775]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:20 compute-0 sudo[374858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytaysywdnjpnyebixksuzodwmikqtvky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898698.153194-1082-192872444423417/AnsiballZ_file.py'
Dec 05 01:38:20 compute-0 sudo[374858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:20 compute-0 python3.9[374860]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:20 compute-0 sudo[374858]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:21 compute-0 ceph-mon[192914]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:21 compute-0 podman[374984]: 2025-12-05 01:38:21.981271918 +0000 UTC m=+0.117842066 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:38:22 compute-0 sudo[375034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqyqtcvioqugjzuszayppxvvsrxnghoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898700.7930841-1094-279159453553483/AnsiballZ_stat.py'
Dec 05 01:38:22 compute-0 sudo[375034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:22 compute-0 podman[375036]: 2025-12-05 01:38:22.147646909 +0000 UTC m=+0.126410247 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:38:22 compute-0 python3.9[375038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:22 compute-0 sudo[375034]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:22 compute-0 sudo[375133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxoigbtoldeqrihyzdvdzbsqiklikwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898700.7930841-1094-279159453553483/AnsiballZ_file.py'
Dec 05 01:38:22 compute-0 sudo[375133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:22 compute-0 python3.9[375135]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._xle7r7m recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:22 compute-0 sudo[375133]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:23 compute-0 ceph-mon[192914]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:23 compute-0 sudo[375285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupfghhdjvxrmyzrhqaqdjhauazokwyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898703.2242858-1106-130343873004078/AnsiballZ_stat.py'
Dec 05 01:38:23 compute-0 sudo[375285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:23 compute-0 python3.9[375287]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:23 compute-0 sudo[375285]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:24 compute-0 sudo[375363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynawiuohbhssdpjlusakvhfyptvbmzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898703.2242858-1106-130343873004078/AnsiballZ_file.py'
Dec 05 01:38:24 compute-0 sudo[375363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:24 compute-0 python3.9[375365]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:24 compute-0 sudo[375363]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:25 compute-0 ceph-mon[192914]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:25 compute-0 sudo[375515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxfwbsjjarnsittqfsvsjxqxtnvaryf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898705.1849241-1119-89310266495744/AnsiballZ_command.py'
Dec 05 01:38:25 compute-0 sudo[375515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:26 compute-0 python3.9[375517]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:38:26 compute-0 sudo[375515]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:38:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:27 compute-0 ceph-mon[192914]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:27 compute-0 sudo[375668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvozkxxdlxtidahkugpbobabsczocogg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898706.3494704-1127-237671132775665/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:38:27 compute-0 sudo[375668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:27 compute-0 python3[375670]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:38:27 compute-0 sudo[375668]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:28 compute-0 sudo[375820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbeqvfbfwbailqydyyfkbtpsscogqrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898707.699422-1135-81637810192350/AnsiballZ_stat.py'
Dec 05 01:38:28 compute-0 sudo[375820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:28 compute-0 python3.9[375822]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:28 compute-0 sudo[375820]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:28 compute-0 sudo[375898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymfhburxrczzlxcwapdrinyiampjpnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898707.699422-1135-81637810192350/AnsiballZ_file.py'
Dec 05 01:38:28 compute-0 sudo[375898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:29 compute-0 ceph-mon[192914]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:29 compute-0 python3.9[375900]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:29 compute-0 sudo[375898]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:29 compute-0 podman[158197]: time="2025-12-05T01:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8091 "" "Go-http-client/1.1"
Dec 05 01:38:30 compute-0 sudo[376050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpclymevblewtmzaqouuwzzpqdsmngkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898709.504831-1147-249629322467/AnsiballZ_stat.py'
Dec 05 01:38:30 compute-0 sudo[376050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:31 compute-0 ceph-mon[192914]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:31 compute-0 python3.9[376052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:31 compute-0 sudo[376050]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:31 compute-0 openstack_network_exporter[366555]: ERROR   01:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:38:31 compute-0 openstack_network_exporter[366555]: ERROR   01:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:38:31 compute-0 openstack_network_exporter[366555]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:38:31 compute-0 openstack_network_exporter[366555]: ERROR   01:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
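These openstack_network_exporter errors are expected on a compute node: the exporter probes ovn-northd and the OVS database server through their appctl control sockets, and neither daemon runs here, so no *.ctl files are found; likewise there is no userspace (netdev) datapath for the pmd-perf-show/pmd-rxq-show queries. A quick confirmation of which control sockets actually exist (standard socket locations assumed):

    # Control sockets the exporter looks for:
    ls /var/run/openvswitch/*.ctl 2>/dev/null
    ls /var/run/ovn/*.ctl 2>/dev/null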
Dec 05 01:38:31 compute-0 sudo[376128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvjvurqtiqcyntlwoobzxoygsdrkvdka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898709.504831-1147-249629322467/AnsiballZ_file.py'
Dec 05 01:38:31 compute-0 sudo[376128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:31 compute-0 python3.9[376130]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:31 compute-0 sudo[376128]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:33 compute-0 ceph-mon[192914]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:33 compute-0 sudo[376280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkehanrfczjyjdqbtamrbqhbsrxnnkau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898712.749877-1159-46992475864398/AnsiballZ_stat.py'
Dec 05 01:38:33 compute-0 sudo[376280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:33 compute-0 python3.9[376282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:33 compute-0 sudo[376280]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:34 compute-0 sudo[376358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcavqzfxoeksyxuyyuebjnsiglxsbboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898712.749877-1159-46992475864398/AnsiballZ_file.py'
Dec 05 01:38:34 compute-0 sudo[376358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:34 compute-0 python3.9[376360]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:34 compute-0 sudo[376358]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:35 compute-0 ceph-mon[192914]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:35 compute-0 sudo[376510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqdpxeokwrbnexkzagjswlpnvimfdxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898714.6817484-1171-92617911967394/AnsiballZ_stat.py'
Dec 05 01:38:35 compute-0 sudo[376510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:35 compute-0 python3.9[376512]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:35 compute-0 sudo[376510]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:35 compute-0 sudo[376603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgraisofhvfohbelscjtysudeggjhduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898714.6817484-1171-92617911967394/AnsiballZ_file.py'
Dec 05 01:38:35 compute-0 podman[376562]: 2025-12-05 01:38:35.928623477 +0000 UTC m=+0.110664534 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
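This podman event records a health_status probe of ovn_metadata_agent: the configured healthcheck mounts /var/lib/openstack/healthchecks/ovn_metadata_agent and runs /openstack/healthcheck inside the container, reporting healthy with a failing streak of 0. The same probe can be triggered or inspected manually:

    # Re-run the container healthcheck and read the cached result:
    sudo podman healthcheck run ovn_metadata_agent && echo healthy
    sudo podman inspect --format '{{.State.Health.Status}}' ovn_metadata_agent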
Dec 05 01:38:35 compute-0 sudo[376603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:36 compute-0 python3.9[376607]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:36 compute-0 sudo[376603]: pam_unix(sudo:session): session closed for user root
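edpm-chains.nft defines the table and chain skeleton that the flush, rule, and jump files populate. Before (re)loading, the rendered file can be syntax-checked without touching the live ruleset:

    # Dry-run parse of the chain definitions (-c checks without committing),
    # then dump the file itself (root needed, the files are mode 0600):
    sudo nft -c -f /etc/nftables/edpm-chains.nft
    sudo cat /etc/nftables/edpm-chains.nft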
Dec 05 01:38:36 compute-0 podman[376636]: 2025-12-05 01:38:36.71825654 +0000 UTC m=+0.126068438 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
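The podman_exporter health event also shows its config: it publishes on host port 9882, reads podman over /run/podman/podman.sock, and loads a TLS web config. Assuming that web.config.file enables HTTPS on the published port (it is not shown in this log), a scrape would look like:

    # Sample a few metrics from the exporter; -k because the cert is
    # presumed self-signed in this CI environment.
    curl -sk https://localhost:9882/metrics | head -n 20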
Dec 05 01:38:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:37 compute-0 ceph-mon[192914]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:37 compute-0 sudo[376778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxyhaflxkvchhrbyvobpkuupomxrotxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898716.6331148-1183-221348831597548/AnsiballZ_stat.py'
Dec 05 01:38:37 compute-0 sudo[376778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:37 compute-0 python3.9[376780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:38:37 compute-0 sudo[376778]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.311 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.312 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 sudo[376873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cratqcelawkzltabeywpgrxlymygiaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898716.6331148-1183-221348831597548/AnsiballZ_file.py'
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 sudo[376873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:38:38.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
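The burst of DEBUG lines above is the ceilometer compute agent walking its pollster list once per polling interval: each meter (disk.device.*, network.*, cpu, memory.usage) is handled by one pollster, and the manager logs one completion line per pollster from execute_polling_task_processing. A minimal sketch of that dispatch loop, with hypothetical pollster functions rather than ceilometer's real plugin classes:

    import logging

    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s DEBUG %(name)s [-] %(message)s")
    LOG = logging.getLogger("polling.manager")

    # Hypothetical pollsters; the real ones are loaded as plugins.
    def poll_cpu(instance):
        return {"meter": "cpu", "resource": instance, "volume": 1}

    def poll_memory_usage(instance):
        return {"meter": "memory.usage", "resource": instance, "volume": 512}

    POLLSTERS = {"cpu": poll_cpu, "memory.usage": poll_memory_usage}

    def execute_polling_task(instances):
        samples = []
        for name, pollster in POLLSTERS.items():
            samples.extend(pollster(i) for i in instances)
            LOG.debug("Finished processing pollster [%s].", name)
        return samples

    if __name__ == "__main__":
        execute_polling_task(["instance-0001"])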
Dec 05 01:38:38 compute-0 podman[376830]: 2025-12-05 01:38:38.367120665 +0000 UTC m=+0.168477570 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
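health_status events like the one above are emitted each time podman runs the container's configured healthcheck (here '/openstack/healthcheck compute'); the same status, failing streak, and log tail can be read back from the container state. A sketch using podman inspect (assumes podman is on PATH and the named container exists):

    import json
    import subprocess

    def health(container):
        # .State.Health carries Status, FailingStreak, and the recent Log,
        # the same fields shown as health_status/health_failing_streak above.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        print(health("ceilometer_agent_compute")["Status"])  # e.g. "healthy"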
Dec 05 01:38:38 compute-0 python3.9[376879]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:38 compute-0 sudo[376873]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:39 compute-0 ceph-mon[192914]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:39 compute-0 sudo[377030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrpltdetkaakehwdarcenjnurmvadtcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898718.9024813-1196-264368517345742/AnsiballZ_command.py'
Dec 05 01:38:39 compute-0 sudo[377030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:39 compute-0 python3.9[377032]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:38:39 compute-0 sudo[377030]: pam_unix(sudo:session): session closed for user root
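The command module above performs a dry-run syntax check: it concatenates the five edpm nftables fragments in load order and feeds them to `nft -c -f -`, which parses and validates the ruleset from stdin without installing anything (the real load of edpm-chains.nft follows at 01:38:43). The same check, sketched in Python:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def ruleset_ok():
        ruleset = b"".join(open(p, "rb").read() for p in FRAGMENTS)
        # -c: check only; -f -: read the ruleset from stdin.
        return subprocess.run(["nft", "-c", "-f", "-"], input=ruleset).returncode == 0

    if __name__ == "__main__":
        print("ruleset OK" if ruleset_ok() else "ruleset rejected")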
Dec 05 01:38:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:41 compute-0 sudo[377185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snhbwxaladzuvpjntxymfmmvhxcuujnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898720.0654016-1204-253642933266673/AnsiballZ_blockinfile.py'
Dec 05 01:38:41 compute-0 ceph-mon[192914]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:41 compute-0 sudo[377185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:41 compute-0 python3.9[377187]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
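Given the arguments above (marker '# {mark} ANSIBLE MANAGED BLOCK' with marker_begin=BEGIN and marker_end=END, and each candidate file validated by `nft -c -f %s` before it is swapped in), the block that blockinfile maintains in /etc/sysconfig/nftables.conf renders as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK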
Dec 05 01:38:41 compute-0 sudo[377185]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:42 compute-0 sudo[377311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:42 compute-0 sudo[377311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:42 compute-0 sudo[377311]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:42 compute-0 sudo[377336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:38:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:42 compute-0 sudo[377336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:42 compute-0 sudo[377336]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:42 compute-0 sudo[377361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:42 compute-0 sudo[377361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:42 compute-0 sudo[377361]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:42 compute-0 sudo[377386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:38:42 compute-0 sudo[377386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:42 compute-0 sudo[377437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tizugylhvylrpymqktfbuotljctqtlmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898721.5985012-1213-44901180773259/AnsiballZ_command.py'
Dec 05 01:38:42 compute-0 sudo[377437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:43 compute-0 ceph-mon[192914]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:43 compute-0 python3.9[377439]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:38:43 compute-0 sudo[377437]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:43 compute-0 podman[377545]: 2025-12-05 01:38:43.503775076 +0000 UTC m=+0.096803154 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:38:43 compute-0 podman[377545]: 2025-12-05 01:38:43.620774177 +0000 UTC m=+0.213802265 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:38:43 compute-0 sudo[377720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtwxiauyhvebtvvbvlvtllxpowzipdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898723.4284883-1221-192464113719350/AnsiballZ_stat.py'
Dec 05 01:38:43 compute-0 sudo[377720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:44 compute-0 python3.9[377725]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:38:44 compute-0 sudo[377720]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:44 compute-0 sudo[377386]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:38:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.768268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724768358, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3398386, "memory_usage": 3462832, "flush_reason": "Manual Compaction"}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 05 01:38:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724805370, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3332448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16303, "largest_seqno": 18338, "table_properties": {"data_size": 3323310, "index_size": 5760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18065, "raw_average_key_size": 19, "raw_value_size": 3305065, "raw_average_value_size": 3608, "num_data_blocks": 261, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898496, "oldest_key_time": 1764898496, "file_creation_time": 1764898724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 37152 microseconds, and 15013 cpu microseconds.
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.805417) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3332448 bytes OK
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.805444) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.808090) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.808112) EVENT_LOG_v1 {"time_micros": 1764898724808105, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.808134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3389907, prev total WAL file size 3429599, number of live WAL files 2.
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.809981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3254KB)], [38(7574KB)]
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724810046, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11088745, "oldest_snapshot_seqno": -1}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4374 keys, 9319457 bytes, temperature: kUnknown
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724895696, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9319457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9286490, "index_size": 20953, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 105642, "raw_average_key_size": 24, "raw_value_size": 9203677, "raw_average_value_size": 2104, "num_data_blocks": 891, "num_entries": 4374, "num_filter_entries": 4374, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.896113) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9319457 bytes
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.899118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.3 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4888, records dropped: 514 output_compression: NoCompression
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.899151) EVENT_LOG_v1 {"time_micros": 1764898724899136, "job": 18, "event": "compaction_finished", "compaction_time_micros": 85745, "compaction_time_cpu_micros": 41031, "output_level": 6, "num_output_files": 1, "total_output_size": 9319457, "num_input_records": 4888, "num_output_records": 4374, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724900466, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898724903246, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.809629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.903444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.903452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.903455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.903458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:44 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:44.903460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
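The rocksdb lines above are ceph-mon compacting its store.db: JOB 17 flushes a ~3.3 MB memtable to a level-0 table, and JOB 18 then manually compacts that table with the existing level-6 file (4888 records in, 514 dropped, 9.3 MB out). Each EVENT_LOG_v1 entry is a JSON payload after a fixed prefix, so the figures in the human-readable summary can be recomputed from it; since bytes per microsecond equals MB/s, the compaction_finished fields reproduce the logged 108.7 wr rate. A small parser sketch:

    import json
    import re
    import sys

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Feed journal lines on stdin; prints one summary per compaction.
    for ev in events(sys.stdin):
        if ev.get("event") == "compaction_finished":
            rate = ev["total_output_size"] / ev["compaction_time_micros"]
            print(f"job {ev['job']}: {ev['num_input_records']} in, "
                  f"{ev['num_output_records']} out, {rate:.1f} MB/s written")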
Dec 05 01:38:44 compute-0 sudo[377827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:44 compute-0 sudo[377827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:44 compute-0 sudo[377827]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:45 compute-0 sudo[377875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:38:45 compute-0 sudo[377875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:45 compute-0 sudo[377875]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:45 compute-0 sudo[377908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:45 compute-0 sudo[377908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:45 compute-0 sudo[377908]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:45 compute-0 sudo[377961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:38:45 compute-0 sudo[377961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:45 compute-0 sudo[378062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqnkbbblwlochyzunpnjuqvygwniwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898724.9869916-1230-126392272897996/AnsiballZ_file.py'
Dec 05 01:38:45 compute-0 sudo[378062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:45 compute-0 python3.9[378064]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:38:45 compute-0 sudo[378062]: pam_unix(sudo:session): session closed for user root
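Taken with the stat on /etc/nftables/edpm-rules.nft.changed a second earlier, this file state=absent task completes a change-flag handshake: a marker file records that the rules were rewritten, later tasks check it to decide whether anything needs applying, and it is removed once handled. The pattern, sketched with a hypothetical helper (not the edpm-ansible source):

    import os
    import subprocess

    FLAG = "/etc/nftables/edpm-rules.nft.changed"

    def apply_if_changed():
        if not os.path.exists(FLAG):      # stat step: nothing changed, skip
            return False
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-rules.nft"], check=True)
        os.remove(FLAG)                   # mirror of the state=absent task
        return True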
Dec 05 01:38:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:45 compute-0 ceph-mon[192914]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:45 compute-0 sudo[377961]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:45 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8f7bd79e-aed1-4d5d-b4ae-326360283878 does not exist
Dec 05 01:38:45 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1ae965c3-a39b-4a53-9950-fb3756838cf1 does not exist
Dec 05 01:38:45 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 84b9ba02-05a8-4697-8616-097b909186d3 does not exist
Dec 05 01:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:38:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:38:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
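The mon_command traffic above is cephadm preparing to touch the host: it fetches a minimal client configuration and the keyrings it will lay down (client.admin, client.bootstrap-osd). Both are plain CLI commands; a sketch assuming a reachable cluster and an admin keyring:

    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # A stripped ceph.conf (fsid + mon addresses), as dispatched above.
    minimal_conf = run("ceph", "config", "generate-minimal-conf")

    # Keyring for the bootstrap-osd entity, matching the "auth get" audit line.
    bootstrap_keyring = run("ceph", "auth", "get", "client.bootstrap-osd")

    print(minimal_conf)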
Dec 05 01:38:46 compute-0 sudo[378108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:46 compute-0 sudo[378108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:46 compute-0 sudo[378108]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:46 compute-0 sshd-session[350355]: Connection closed by 192.168.122.30 port 54032
Dec 05 01:38:46 compute-0 sshd-session[350346]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:38:46 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 05 01:38:46 compute-0 systemd[1]: session-58.scope: Consumed 2min 42.700s CPU time.
Dec 05 01:38:46 compute-0 systemd-logind[792]: Session 58 logged out. Waiting for processes to exit.
Dec 05 01:38:46 compute-0 sudo[378133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:38:46 compute-0 systemd-logind[792]: Removed session 58.
Dec 05 01:38:46 compute-0 sudo[378133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:46 compute-0 sudo[378133]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:38:46 compute-0 sudo[378158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:46 compute-0 sudo[378158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:46 compute-0 sudo[378158]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:46 compute-0 sudo[378183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:38:46 compute-0 sudo[378183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:38:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.039539632 +0000 UTC m=+0.088296065 container create 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.004509256 +0000 UTC m=+0.053265749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:47 compute-0 systemd[1]: Started libpod-conmon-1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6.scope.
Dec 05 01:38:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.190137058 +0000 UTC m=+0.238893511 container init 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.205362367 +0000 UTC m=+0.254118810 container start 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.212104246 +0000 UTC m=+0.260860729 container attach 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:38:47 compute-0 stoic_nash[378263]: 167 167
Dec 05 01:38:47 compute-0 systemd[1]: libpod-1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6.scope: Deactivated successfully.
Dec 05 01:38:47 compute-0 conmon[378263]: conmon 1e5b6aecb4b0515a0d26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6.scope/container/memory.events
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.220260666 +0000 UTC m=+0.269017099 container died 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e48cc5643075d974c95a8f873b29a4c7de485b146bc43298a884a46f5f58ede-merged.mount: Deactivated successfully.
Dec 05 01:38:47 compute-0 podman[378247]: 2025-12-05 01:38:47.304340911 +0000 UTC m=+0.353097354 container remove 1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:38:47 compute-0 systemd[1]: libpod-conmon-1e5b6aecb4b0515a0d26d8a0f8430210b60865bdfa83353eb32ac0400e0447a6.scope: Deactivated successfully.
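The create/init/start/attach/died/remove sequence above (container stoic_nash, which printed "167 167" and exited within a second) is the normal footprint of a short-lived, remove-on-exit container: cephadm launches a throwaway container, reads its output, and removes it. The same lifecycle can be reproduced and observed (assumes podman and a pullable image):

    import subprocess

    # One-shot container: create/init/start/attach, then died/remove on exit.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "id", "-u"],
        check=True,
    )
    # `podman events --since 2m` then shows the same create -> start ->
    # attach -> died -> remove records that journald captured above.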
Dec 05 01:38:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:47 compute-0 podman[378285]: 2025-12-05 01:38:47.58725947 +0000 UTC m=+0.086799103 container create 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:38:47 compute-0 podman[378285]: 2025-12-05 01:38:47.552236305 +0000 UTC m=+0.051775998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:47 compute-0 systemd[1]: Started libpod-conmon-9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d.scope.
Dec 05 01:38:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:47 compute-0 podman[378285]: 2025-12-05 01:38:47.778703226 +0000 UTC m=+0.278242909 container init 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:38:47 compute-0 podman[378285]: 2025-12-05 01:38:47.807045863 +0000 UTC m=+0.306585506 container start 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:38:47 compute-0 ceph-mon[192914]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:47 compute-0 podman[378285]: 2025-12-05 01:38:47.813278208 +0000 UTC m=+0.312817901 container attach 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:38:48 compute-0 podman[378309]: 2025-12-05 01:38:48.703036799 +0000 UTC m=+0.102662119 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec 05 01:38:48 compute-0 podman[378307]: 2025-12-05 01:38:48.733018302 +0000 UTC m=+0.132025735 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:38:48 compute-0 podman[378310]: 2025-12-05 01:38:48.758946881 +0000 UTC m=+0.155605178 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 01:38:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:49 compute-0 ceph-mon[192914]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:49 compute-0 jovial_shannon[378301]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:38:49 compute-0 jovial_shannon[378301]: --> relative data size: 1.0
Dec 05 01:38:49 compute-0 jovial_shannon[378301]: --> All data devices are unavailable
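Those three lines are the report phase of the `ceph-volume lvm batch` run launched at 01:38:46: the three logical volumes passed on the command line ("0 physical, 3 LVM") were all filtered out as unavailable, which typically means ceph-volume already sees an OSD or another claim on them, so the batch has nothing new to create. Which devices are rejected, and why, can be checked with ceph-volume's inventory (a sketch; assumes the CLI is reachable, e.g. inside a cephadm shell):

    import json
    import subprocess

    def unavailable():
        out = subprocess.run(
            ["ceph-volume", "inventory", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        for dev in json.loads(out):
            if not dev.get("available"):
                yield dev["path"], dev.get("rejected_reasons", [])

    for path, reasons in unavailable():
        print(path, "->", "; ".join(reasons) or "in use")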
Dec 05 01:38:49 compute-0 systemd[1]: libpod-9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d.scope: Deactivated successfully.
Dec 05 01:38:49 compute-0 systemd[1]: libpod-9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d.scope: Consumed 1.258s CPU time.
Dec 05 01:38:49 compute-0 podman[378393]: 2025-12-05 01:38:49.223761948 +0000 UTC m=+0.058970360 container died 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb69286b9de015d897eeeaadb1eb000d25e6f20ed25121f38354b38a287ee1df-merged.mount: Deactivated successfully.
Dec 05 01:38:49 compute-0 podman[378393]: 2025-12-05 01:38:49.32410363 +0000 UTC m=+0.159311992 container remove 9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:38:49 compute-0 systemd[1]: libpod-conmon-9fdfb9f954d85b832ef8175fbace8749fccb306a34940b64fafa4fbb1dde603d.scope: Deactivated successfully.
Dec 05 01:38:49 compute-0 sudo[378183]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:49 compute-0 sudo[378408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:49 compute-0 sudo[378408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:49 compute-0 sudo[378408]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:49 compute-0 sudo[378434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:38:49 compute-0 sudo[378434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:49 compute-0 sudo[378434]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:49 compute-0 sudo[378465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:49 compute-0 sudo[378465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:49 compute-0 podman[378458]: 2025-12-05 01:38:49.857683951 +0000 UTC m=+0.133258270 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec 05 01:38:49 compute-0 sudo[378465]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:49 compute-0 sudo[378502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:38:49 compute-0 sudo[378502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.52278177 +0000 UTC m=+0.066558102 container create f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.500161084 +0000 UTC m=+0.043937466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:50 compute-0 systemd[1]: Started libpod-conmon-f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c.scope.
Dec 05 01:38:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.668834159 +0000 UTC m=+0.212610561 container init f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.68521712 +0000 UTC m=+0.228993472 container start f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.69163579 +0000 UTC m=+0.235412152 container attach f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:38:50 compute-0 focused_torvalds[378581]: 167 167
Dec 05 01:38:50 compute-0 systemd[1]: libpod-f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c.scope: Deactivated successfully.
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.696751314 +0000 UTC m=+0.240527676 container died f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-924714d3529c8520386cdc91571688d0f3cb4bf5e44bfcad55a9d3a152612819-merged.mount: Deactivated successfully.
Dec 05 01:38:50 compute-0 podman[378565]: 2025-12-05 01:38:50.774219993 +0000 UTC m=+0.317996355 container remove f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:38:50 compute-0 systemd[1]: libpod-conmon-f8c26df1cbb2a08e8aae67b4e2261aa1ad7425dc6857e58c0c428c84b6aef08c.scope: Deactivated successfully.
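[Annotation] The short-lived focused_torvalds container above exits immediately after printing "167 167", which matches the uid/gid of the ceph user in upstream Ceph container images; cephadm runs a probe of this kind before invoking ceph-volume so it can set ownership on data directories. A minimal sketch of reproducing the probe by hand, with the caveat that the stat target and exact command are assumptions, not taken from this log:

```python
# Hypothetical reproduction of the uid/gid probe seen above; the exact
# command cephadm runs is an assumption, but "167 167" is the standard
# ceph uid/gid in upstream Ceph container images.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
result = subprocess.run(
    ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())  # expected: 167 167
```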
Dec 05 01:38:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:51 compute-0 ceph-mon[192914]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:51 compute-0 podman[378606]: 2025-12-05 01:38:51.050368932 +0000 UTC m=+0.075249208 container create aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:38:51 compute-0 podman[378606]: 2025-12-05 01:38:51.02505985 +0000 UTC m=+0.049940206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:51 compute-0 systemd[1]: Started libpod-conmon-aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa.scope.
Dec 05 01:38:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a803db6344c491d97b1b9ca84c6d5273d3cc402d68087550c7a2bce820ea29bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a803db6344c491d97b1b9ca84c6d5273d3cc402d68087550c7a2bce820ea29bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a803db6344c491d97b1b9ca84c6d5273d3cc402d68087550c7a2bce820ea29bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a803db6344c491d97b1b9ca84c6d5273d3cc402d68087550c7a2bce820ea29bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
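[Annotation] The four kernel warnings above indicate that the overlay mounts sit on an XFS filesystem formatted with 32-bit ("small") inode timestamps, which overflow at 0x7fffffff seconds past the Unix epoch. A quick check decoding the limit quoted in the messages; everything here comes from the log line itself:

```python
# Decode the timestamp ceiling quoted by the kernel (0x7fffffff).
from datetime import datetime, timezone

limit = 0x7FFFFFFF
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"
```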
Dec 05 01:38:51 compute-0 podman[378606]: 2025-12-05 01:38:51.226791445 +0000 UTC m=+0.251671771 container init aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:38:51 compute-0 podman[378606]: 2025-12-05 01:38:51.243538366 +0000 UTC m=+0.268418662 container start aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:38:51 compute-0 podman[378606]: 2025-12-05 01:38:51.249624087 +0000 UTC m=+0.274504453 container attach aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:38:52 compute-0 stoic_morse[378621]: {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     "0": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "devices": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "/dev/loop3"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             ],
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_name": "ceph_lv0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_size": "21470642176",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "name": "ceph_lv0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "tags": {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_name": "ceph",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.crush_device_class": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.encrypted": "0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_id": "0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.vdo": "0"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             },
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "vg_name": "ceph_vg0"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         }
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     ],
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     "1": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "devices": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "/dev/loop4"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             ],
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_name": "ceph_lv1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_size": "21470642176",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "name": "ceph_lv1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "tags": {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_name": "ceph",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.crush_device_class": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.encrypted": "0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_id": "1",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.vdo": "0"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             },
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "vg_name": "ceph_vg1"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         }
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     ],
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     "2": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "devices": [
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "/dev/loop5"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             ],
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_name": "ceph_lv2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_size": "21470642176",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "name": "ceph_lv2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "tags": {
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.cluster_name": "ceph",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.crush_device_class": "",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.encrypted": "0",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osd_id": "2",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:                 "ceph.vdo": "0"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             },
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "type": "block",
Dec 05 01:38:52 compute-0 stoic_morse[378621]:             "vg_name": "ceph_vg2"
Dec 05 01:38:52 compute-0 stoic_morse[378621]:         }
Dec 05 01:38:52 compute-0 stoic_morse[378621]:     ]
Dec 05 01:38:52 compute-0 stoic_morse[378621]: }
Dec 05 01:38:52 compute-0 systemd[1]: libpod-aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa.scope: Deactivated successfully.
Dec 05 01:38:52 compute-0 podman[378606]: 2025-12-05 01:38:52.069631156 +0000 UTC m=+1.094511472 container died aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 01:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a803db6344c491d97b1b9ca84c6d5273d3cc402d68087550c7a2bce820ea29bb-merged.mount: Deactivated successfully.
Dec 05 01:38:52 compute-0 podman[378606]: 2025-12-05 01:38:52.168000623 +0000 UTC m=+1.192880909 container remove aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:38:52 compute-0 systemd[1]: libpod-conmon-aabdf0154d487b044143b4aaf642a010764966dcfc3523e172dd4574f075ffaa.scope: Deactivated successfully.
Dec 05 01:38:52 compute-0 sudo[378502]: pam_unix(sudo:session): session closed for user root
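[Annotation] The stoic_morse output above is the JSON produced by the `ceph-volume ... lvm list --format json` invocation logged at the start of this sudo session: a map of OSD id to the logical volumes backing it, with the authoritative metadata carried in the ceph.* LV tags. A minimal sketch of consuming that structure, where "lvm_list.json" is a hypothetical capture of the output, not a file referenced by the log:

```python
# Minimal sketch: summarize `ceph-volume ... lvm list --format json`
# output as captured above. "lvm_list.json" is a hypothetical capture.
import json

with open("lvm_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {', '.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"encrypted={tags['ceph.encrypted']})")
```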
Dec 05 01:38:52 compute-0 podman[378633]: 2025-12-05 01:38:52.217840685 +0000 UTC m=+0.107998049 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:38:52 compute-0 sshd-session[378660]: Accepted publickey for zuul from 192.168.122.30 port 37678 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:38:52 compute-0 systemd-logind[792]: New session 59 of user zuul.
Dec 05 01:38:52 compute-0 podman[378661]: 2025-12-05 01:38:52.309648008 +0000 UTC m=+0.090917689 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:38:52 compute-0 systemd[1]: Started Session 59 of User zuul.
Dec 05 01:38:52 compute-0 sudo[378675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:52 compute-0 sshd-session[378660]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:38:52 compute-0 sudo[378675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:52 compute-0 sudo[378675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:52 compute-0 sudo[378717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:38:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:52 compute-0 sudo[378717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.446853) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732446931, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 343, "num_deletes": 252, "total_data_size": 176833, "memory_usage": 184192, "flush_reason": "Manual Compaction"}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 05 01:38:52 compute-0 sudo[378717]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732451370, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 175422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18339, "largest_seqno": 18681, "table_properties": {"data_size": 173218, "index_size": 368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5875, "raw_average_key_size": 19, "raw_value_size": 168763, "raw_average_value_size": 564, "num_data_blocks": 16, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898724, "oldest_key_time": 1764898724, "file_creation_time": 1764898732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4564 microseconds, and 1333 cpu microseconds.
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.451419) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 175422 bytes OK
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.451432) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.453442) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.453458) EVENT_LOG_v1 {"time_micros": 1764898732453454, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.453470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 174468, prev total WAL file size 174468, number of live WAL files 2.
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.454313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(171KB)], [41(9101KB)]
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732454455, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9494879, "oldest_snapshot_seqno": -1}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4158 keys, 6171379 bytes, temperature: kUnknown
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732507736, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6171379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6144419, "index_size": 15459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 101607, "raw_average_key_size": 24, "raw_value_size": 6069924, "raw_average_value_size": 1459, "num_data_blocks": 653, "num_entries": 4158, "num_filter_entries": 4158, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.508035) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6171379 bytes
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.510213) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.9 rd, 115.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(89.3) write-amplify(35.2) OK, records in: 4673, records dropped: 515 output_compression: NoCompression
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.510234) EVENT_LOG_v1 {"time_micros": 1764898732510224, "job": 20, "event": "compaction_finished", "compaction_time_micros": 53371, "compaction_time_cpu_micros": 29108, "output_level": 6, "num_output_files": 1, "total_output_size": 6171379, "num_input_records": 4673, "num_output_records": 4158, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732510400, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898732512613, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.453851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.512794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.512803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.512807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.512810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:38:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:38:52.512814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
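[Annotation] The amplification figures in JOB 20's summary line follow directly from the byte counts logged in its compaction_started and compaction_finished events; checking the arithmetic:

```python
# Check JOB 20's amplification figures against the logged byte counts.
l0_in    = 175_422    # flushed L0 table #43 (table_file_creation, JOB 19)
total_in = 9_494_879  # input_data_size (compaction_started, JOB 20)
out      = 6_171_379  # total_output_size (compaction_finished, JOB 20)

print(f"write-amplify:      {out / l0_in:.1f}")               # 35.2
print(f"read-write-amplify: {(total_in + out) / l0_in:.1f}")  # 89.3
```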
Dec 05 01:38:52 compute-0 sudo[378765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:52 compute-0 sudo[378765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:52 compute-0 sudo[378765]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:52 compute-0 sudo[378819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:38:52 compute-0 sudo[378819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.253117579 +0000 UTC m=+0.082625955 container create 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.22615201 +0000 UTC m=+0.055660426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:53 compute-0 systemd[1]: Started libpod-conmon-72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c.scope.
Dec 05 01:38:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.409722444 +0000 UTC m=+0.239230870 container init 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.428470792 +0000 UTC m=+0.257979138 container start 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.435539981 +0000 UTC m=+0.265048327 container attach 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:38:53 compute-0 recursing_benz[378978]: 167 167
Dec 05 01:38:53 compute-0 systemd[1]: libpod-72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c.scope: Deactivated successfully.
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.444465462 +0000 UTC m=+0.273973838 container died 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:38:53 compute-0 sudo[379000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfwzvdksychwwsiixspjrprcwxvsmdgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898732.4652572-24-146989548520196/AnsiballZ_systemd_service.py'
Dec 05 01:38:53 compute-0 ceph-mon[192914]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:53 compute-0 sudo[379000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0445677685e5ded43002d61b295ea29bc375faa4b42ff514896d9975a3025f99-merged.mount: Deactivated successfully.
Dec 05 01:38:53 compute-0 podman[378931]: 2025-12-05 01:38:53.537478268 +0000 UTC m=+0.366986654 container remove 72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:38:53 compute-0 systemd[1]: libpod-conmon-72fe6fbd1bc21a5055ad460360ff67d80e2ab436b200f142fac1a0c7b607129c.scope: Deactivated successfully.
Dec 05 01:38:53 compute-0 python3.9[379005]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:38:53 compute-0 systemd[1]: Reloading.
Dec 05 01:38:53 compute-0 podman[379026]: 2025-12-05 01:38:53.793974024 +0000 UTC m=+0.085996750 container create 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:38:53 compute-0 podman[379026]: 2025-12-05 01:38:53.759651519 +0000 UTC m=+0.051674305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:38:53 compute-0 systemd-rc-local-generator[379067]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:38:53 compute-0 systemd-sysv-generator[379071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:38:54 compute-0 systemd[1]: Started libpod-conmon-6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e.scope.
Dec 05 01:38:54 compute-0 sudo[379000]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9531d64551fcdfec60538dfcee91ddc86b261490d0c3d45b1a1c8567a17b3a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9531d64551fcdfec60538dfcee91ddc86b261490d0c3d45b1a1c8567a17b3a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9531d64551fcdfec60538dfcee91ddc86b261490d0c3d45b1a1c8567a17b3a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9531d64551fcdfec60538dfcee91ddc86b261490d0c3d45b1a1c8567a17b3a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:38:54 compute-0 podman[379026]: 2025-12-05 01:38:54.368691121 +0000 UTC m=+0.660713827 container init 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:38:54 compute-0 podman[379026]: 2025-12-05 01:38:54.394678652 +0000 UTC m=+0.686701358 container start 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:38:54 compute-0 podman[379026]: 2025-12-05 01:38:54.400004902 +0000 UTC m=+0.692027618 container attach 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:38:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:55 compute-0 ceph-mon[192914]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:55 compute-0 python3.9[379244]: ansible-ansible.builtin.service_facts Invoked
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]: {
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_id": 0,
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "type": "bluestore"
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     },
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_id": 1,
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "type": "bluestore"
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     },
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_id": 2,
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:         "type": "bluestore"
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]:     }
Dec 05 01:38:55 compute-0 zealous_heisenberg[379076]: }
Dec 05 01:38:55 compute-0 systemd[1]: libpod-6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e.scope: Deactivated successfully.
Dec 05 01:38:55 compute-0 conmon[379076]: conmon 6c0c5d2b0ad52e78c4c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e.scope/container/memory.events
Dec 05 01:38:55 compute-0 systemd[1]: libpod-6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e.scope: Consumed 1.102s CPU time.
Dec 05 01:38:55 compute-0 podman[379026]: 2025-12-05 01:38:55.496120507 +0000 UTC m=+1.788143223 container died 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:38:55 compute-0 network[379278]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 05 01:38:55 compute-0 network[379283]: 'network-scripts' will be removed from distribution in near future.
Dec 05 01:38:55 compute-0 network[379284]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 05 01:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9531d64551fcdfec60538dfcee91ddc86b261490d0c3d45b1a1c8567a17b3a0-merged.mount: Deactivated successfully.
Dec 05 01:38:55 compute-0 podman[379026]: 2025-12-05 01:38:55.584064591 +0000 UTC m=+1.876087287 container remove 6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:38:55 compute-0 systemd[1]: libpod-conmon-6c0c5d2b0ad52e78c4c601d5971627c526276124abb92eceef9e43677018967e.scope: Deactivated successfully.
Dec 05 01:38:55 compute-0 sudo[378819]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:38:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:38:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e2683791-4d13-48db-ae32-0c7a930f3b6c does not exist
Dec 05 01:38:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7a62ae9e-6c4e-45ce-8d83-5a30535396be does not exist
Dec 05 01:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:38:56.165 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:38:56.167 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:38:56.167 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:38:56 compute-0 sudo[379295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:38:56 compute-0 sudo[379295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:56 compute-0 sudo[379295]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:38:56 compute-0 sudo[379321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:38:56 compute-0 sudo[379321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:38:56 compute-0 sudo[379321]: pam_unix(sudo:session): session closed for user root
Dec 05 01:38:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:38:57 compute-0 ceph-mon[192914]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:59 compute-0 ceph-mon[192914]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:38:59 compute-0 podman[158197]: time="2025-12-05T01:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:38:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:38:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8102 "" "Go-http-client/1.1"
Dec 05 01:39:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:01 compute-0 ceph-mon[192914]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: ERROR   01:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: ERROR   01:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: ERROR   01:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:39:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:39:01 compute-0 sudo[379612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxogwratsjsddbfkbsrjxlfsqbtlwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898741.2166831-47-53689830607237/AnsiballZ_systemd_service.py'
Dec 05 01:39:01 compute-0 sudo[379612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:02 compute-0 python3.9[379614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:39:02 compute-0 sudo[379612]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:03 compute-0 ceph-mon[192914]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:03 compute-0 sudo[379765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvldjusrtirfdntigsiquelvnxdzrhde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898742.8213418-57-44454321186519/AnsiballZ_file.py'
Dec 05 01:39:03 compute-0 sudo[379765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:03 compute-0 python3.9[379767]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:03 compute-0 sudo[379765]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:04 compute-0 sudo[379917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytngixmozvcdxdsmlkfxodnzunuvdvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898744.1204064-65-163195039469183/AnsiballZ_file.py'
Dec 05 01:39:04 compute-0 sudo[379917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:04 compute-0 python3.9[379919]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:04 compute-0 sudo[379917]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:05 compute-0 ceph-mon[192914]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:06 compute-0 podman[380042]: 2025-12-05 01:39:06.713531569 +0000 UTC m=+0.117335962 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 05 01:39:06 compute-0 sudo[380086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbmofoecpnsiqwtqvugmfsfuenfgpgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898745.2987041-74-66917528908388/AnsiballZ_command.py'
Dec 05 01:39:06 compute-0 sudo[380086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:06 compute-0 podman[380088]: 2025-12-05 01:39:06.915271784 +0000 UTC m=+0.137538170 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:39:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:06 compute-0 python3.9[380089]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:39:07 compute-0 sudo[380086]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:07 compute-0 ceph-mon[192914]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:08 compute-0 podman[380238]: 2025-12-05 01:39:08.672119456 +0000 UTC m=+0.121867039 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 05 01:39:08 compute-0 python3.9[380279]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:39:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:09 compute-0 ceph-mon[192914]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.118 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.119 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.143 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.143 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.143 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.144 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.144 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:39:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:39:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043580565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:39:09 compute-0 nova_compute[349548]: 2025-12-05 01:39:09.612 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:39:09 compute-0 sudo[380455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomlsiesiqnlfmqlxjpgdxpxnzzyztge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898749.2223647-92-32559423372468/AnsiballZ_systemd_service.py'
Dec 05 01:39:09 compute-0 sudo[380455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.006 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.007 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.008 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.008 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:39:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4043580565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:39:10 compute-0 python3.9[380457]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 05 01:39:10 compute-0 systemd[1]: Reloading.
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.099 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.121 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:39:10 compute-0 systemd-sysv-generator[380488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 05 01:39:10 compute-0 systemd-rc-local-generator[380485]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 05 01:39:10 compute-0 sudo[380455]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:39:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802422616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.664 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.678 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.710 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.715 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:39:10 compute-0 nova_compute[349548]: 2025-12-05 01:39:10.716 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:39:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:11 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1802422616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:39:11 compute-0 ceph-mon[192914]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:11 compute-0 sudo[380664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaepvnvwgdmvbscvcysnhyyhgldguwmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898750.972755-100-96290287920552/AnsiballZ_command.py'
Dec 05 01:39:11 compute-0 sudo[380664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.665 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.666 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.666 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.667 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.694 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.696 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.697 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:11 compute-0 nova_compute[349548]: 2025-12-05 01:39:11.697 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:11 compute-0 python3.9[380666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:39:11 compute-0 sudo[380664]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:12 compute-0 nova_compute[349548]: 2025-12-05 01:39:12.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:39:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:12 compute-0 sudo[380817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhshpelpvvtrarzulmxdwpizfdfstrpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898752.1627743-109-119308640560925/AnsiballZ_file.py'
Dec 05 01:39:12 compute-0 sudo[380817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:12 compute-0 python3.9[380819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:13 compute-0 sudo[380817]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:13 compute-0 ceph-mon[192914]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:14 compute-0 python3.9[380969]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:39:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:15 compute-0 ceph-mon[192914]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:15 compute-0 python3.9[381121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:16 compute-0 python3.9[381197]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:39:16
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root']
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:39:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:17 compute-0 ceph-mon[192914]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:17 compute-0 sudo[381347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnjnuhyilcsajdwsjctwnpfdftbjvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898756.5828552-140-208290065195701/AnsiballZ_getent.py'
Dec 05 01:39:17 compute-0 sudo[381347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:18 compute-0 python3.9[381349]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 05 01:39:18 compute-0 sudo[381347]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:19 compute-0 ceph-mon[192914]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:19 compute-0 podman[381428]: 2025-12-05 01:39:19.742280285 +0000 UTC m=+0.146234295 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 01:39:19 compute-0 podman[381429]: 2025-12-05 01:39:19.750343882 +0000 UTC m=+0.151637047 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:39:20 compute-0 podman[381430]: 2025-12-05 01:39:20.680789096 +0000 UTC m=+1.075168486 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:39:20 compute-0 podman[381536]: 2025-12-05 01:39:20.833536684 +0000 UTC m=+0.122415965 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Dec 05 01:39:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:21 compute-0 python3.9[381579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:21 compute-0 ceph-mon[192914]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:21 compute-0 python3.9[381658]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:22 compute-0 podman[381782]: 2025-12-05 01:39:22.419668454 +0000 UTC m=+0.097388070 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:39:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:22 compute-0 podman[381832]: 2025-12-05 01:39:22.586617841 +0000 UTC m=+0.122529768 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 05 01:39:22 compute-0 python3.9[381826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:23 compute-0 ceph-mon[192914]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:23 compute-0 python3.9[381927]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
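The stat/file pairs around here are Ansible's usual two-step copy: ansible.legacy.stat hashes the remote file so the controller can decide whether the content changed, then ansible.legacy.file enforces mode and ownership. The checksum half can be reproduced directly (path taken from the task above):

    import hashlib

    # Recompute the sha1 checksum ansible.legacy.stat reports for this file.
    path = "/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml"
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    print(h.hexdigest())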
Dec 05 01:39:24 compute-0 python3.9[382077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:25 compute-0 ceph-mon[192914]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:25 compute-0 python3.9[382153]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:26 compute-0 python3.9[382303]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
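The pg targets the autoscaler logs above are consistent with pg_target = capacity_ratio * bias * K, where K = 300 fits every pool in this run; that matches, for example, the default mon_target_pg_per_osd=100 across 3 OSDs, though the exact split behind 300 is an inference from the numbers, not logged directly. The result is then quantized (to 1, 16 or 32 above). A quick check:

    # Reproduce the 'pg target' figures from the pg_autoscaler lines above.
    # The factor 300 is inferred from the logged numbers, not logged itself.
    def pg_target(capacity_ratio, bias, k=300):
        return capacity_ratio * bias * k

    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12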
Dec 05 01:39:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:27 compute-0 ceph-mon[192914]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:27 compute-0 python3.9[382455]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:39:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
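_set_new_cache_sizes is the monitor's cache autotuner re-splitting its memory budget; the three allocations it reports are MiB-aligned and sum to just under cache_size:

    # Sanity-check the split from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    allocs = {"inc": 348127232, "full": 348127232, "kv": 322961408}
    print({k: v // 2**20 for k, v in allocs.items()})  # {'inc': 332, 'full': 332, 'kv': 308} MiB
    assert sum(allocs.values()) <= cache_size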
Dec 05 01:39:28 compute-0 python3.9[382607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:29 compute-0 ceph-mon[192914]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:29 compute-0 python3.9[382683]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json _original_basename=ceilometer-agent-ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
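mode=420 in this task (and the similar ones below) is not a typo: the permission bits reached Ansible as a bare decimal integer, and decimal 420 is exactly octal 0644, the same mode the later tasks pass as 0644:

    # Decimal/octal equivalence behind mode=420 in the invocation above.
    assert 420 == 0o644
    print(oct(420))  # 0o644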
Dec 05 01:39:29 compute-0 podman[158197]: time="2025-12-05T01:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:39:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:39:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8112 "" "Go-http-client/1.1"
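These two GETs are a client on this node scraping the libpod REST API over the podman socket. The same endpoint can be exercised with a raw HTTP request over the unix socket; a sketch, assuming the socket path /run/podman/podman.sock that appears in the podman_exporter container config later in this log:

    import socket

    # Raw HTTP/1.0 request against the libpod endpoint from the access log
    # above; HTTP/1.0 so the server closes the connection when done.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/podman/podman.sock")
    s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\nHost: d\r\n\r\n")
    resp = b""
    while chunk := s.recv(65536):
        resp += chunk
    print(resp.split(b"\r\n\r\n", 1)[1][:200])  # first bytes of the JSON body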
Dec 05 01:39:30 compute-0 python3.9[382834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:30 compute-0 python3.9[382910]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:31 compute-0 ceph-mon[192914]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:31 compute-0 openstack_network_exporter[366555]: ERROR   01:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:39:31 compute-0 openstack_network_exporter[366555]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:39:31 compute-0 openstack_network_exporter[366555]: ERROR   01:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:39:31 compute-0 openstack_network_exporter[366555]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:39:31 compute-0 openstack_network_exporter[366555]: ERROR   01:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
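These exporter errors are expected on a compute node: openstack_network_exporter drives ovs-appctl/ovn-appctl through unix control sockets, ovn-northd only runs on the control plane, and the dpif-netdev calls need a userspace datapath. A quick look for the sockets it wants, using the host paths mounted into its container:

    import glob

    # Control sockets behind the appctl calls above; on this node the
    # ovn-northd ones are absent, hence the errors (paths per the exporter's
    # volume mounts: /var/run/openvswitch and /var/lib/openvswitch/ovn).
    print(glob.glob("/var/run/openvswitch/*.ctl"))
    print(glob.glob("/var/lib/openvswitch/ovn/*.ctl"))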
Dec 05 01:39:31 compute-0 python3.9[383060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:32 compute-0 python3.9[383136]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json _original_basename=ceilometer_agent_ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:33 compute-0 ceph-mon[192914]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:33 compute-0 python3.9[383286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:34 compute-0 python3.9[383362]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:35 compute-0 python3.9[383512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:35 compute-0 ceph-mon[192914]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:35 compute-0 python3.9[383588]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:36 compute-0 python3.9[383738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:37 compute-0 ceph-mon[192914]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:37 compute-0 podman[383789]: 2025-12-05 01:39:37.135329295 +0000 UTC m=+0.118777842 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:39:37 compute-0 podman[383788]: 2025-12-05 01:39:37.146976823 +0000 UTC m=+0.134287369 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:39:37 compute-0 python3.9[383836]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json _original_basename=kepler.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:38 compute-0 python3.9[384006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:38 compute-0 podman[384056]: 2025-12-05 01:39:38.912294624 +0000 UTC m=+0.092044210 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:39:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:39 compute-0 ceph-mon[192914]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:39 compute-0 python3.9[384096]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:40 compute-0 sudo[384250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqkmybtqknjwfxjrkudbdtvrhwiszqft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898779.4217923-298-34281547471807/AnsiballZ_file.py'
Dec 05 01:39:40 compute-0 sudo[384250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:40 compute-0 python3.9[384252]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:40 compute-0 sudo[384250]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:41 compute-0 ceph-mon[192914]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:41 compute-0 sudo[384402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkskwspmyruzftgnfowsqyiteconkfjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898781.1237812-306-267661420212361/AnsiballZ_file.py'
Dec 05 01:39:41 compute-0 sudo[384402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:41 compute-0 python3.9[384404]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:41 compute-0 sudo[384402]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:43 compute-0 ceph-mon[192914]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:43 compute-0 sudo[384554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yffaikglkijpjlnixqungibqurgzeyuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898782.2211294-314-62306001536486/AnsiballZ_file.py'
Dec 05 01:39:43 compute-0 sudo[384554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:43 compute-0 python3.9[384556]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:43 compute-0 sudo[384554]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:44 compute-0 sudo[384706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnsdbtsoaloqrmkujwlstgqsphmlwxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898784.226923-322-21143623727973/AnsiballZ_stat.py'
Dec 05 01:39:44 compute-0 sudo[384706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:44 compute-0 python3.9[384708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:45 compute-0 sudo[384706]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:45 compute-0 ceph-mon[192914]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:45 compute-0 sudo[384784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrhrdjkbavsowcdtpmklagqtiuhvhhhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898784.226923-322-21143623727973/AnsiballZ_file.py'
Dec 05 01:39:45 compute-0 sudo[384784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:45 compute-0 python3.9[384786]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:45 compute-0 sudo[384784]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:46 compute-0 sudo[384860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdvtcbxotuwhysphupcuiuhqrzzaedac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898784.226923-322-21143623727973/AnsiballZ_stat.py'
Dec 05 01:39:46 compute-0 sudo[384860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:39:46 compute-0 python3.9[384862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:46 compute-0 sudo[384860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:46 compute-0 sudo[384938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkfwmugmpmnecukixchgclrlfvuvdqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898784.226923-322-21143623727973/AnsiballZ_file.py'
Dec 05 01:39:46 compute-0 sudo[384938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:47 compute-0 python3.9[384940]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:47 compute-0 sudo[384938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:47 compute-0 ceph-mon[192914]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:47 compute-0 sudo[385090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlevcmsfwejvcltjzbmejlxoypmeclxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898787.2991927-322-7311367763446/AnsiballZ_stat.py'
Dec 05 01:39:47 compute-0 sudo[385090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:48 compute-0 python3.9[385092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:39:48 compute-0 sudo[385090]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:48 compute-0 sudo[385168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qluchyhnrzrsslbtyuykkamlhhpodezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898787.2991927-322-7311367763446/AnsiballZ_file.py'
Dec 05 01:39:48 compute-0 sudo[385168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:48 compute-0 python3.9[385170]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/kepler/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/kepler/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 05 01:39:48 compute-0 sudo[385168]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:49 compute-0 ceph-mon[192914]: pgmap v906: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:49 compute-0 sudo[385321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdqhnnsxxaiyobddoluokceltdtxukgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898789.273508-355-275167570380299/AnsiballZ_container_config_data.py'
Dec 05 01:39:49 compute-0 sudo[385321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:50 compute-0 python3.9[385323]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 05 01:39:50 compute-0 sudo[385321]: pam_unix(sudo:session): session closed for user root
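container_config_data collects the JSON container definitions matching config_pattern under config_dir so the following edpm_container_manage step can reconcile the containers. Roughly, with module internals assumed:

    import glob, json, os

    # Rough equivalent of the ansible-container_config_data call above:
    # gather matching JSON definitions keyed by file name (sketch only; the
    # real module also applies config_overrides).
    config_dir = "/var/lib/openstack/config/telemetry-power-monitoring"
    pattern = "ceilometer_agent_ipmi.json"
    configs = {}
    for path in glob.glob(os.path.join(config_dir, pattern)):
        with open(path) as f:
            configs[os.path.basename(path)] = json.load(f)
    print(list(configs))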
Dec 05 01:39:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:51 compute-0 ceph-mon[192914]: pgmap v907: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:51 compute-0 sudo[385523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrzuzafakqepnzjttjwztothjxvoqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898790.6030931-364-147555960873291/AnsiballZ_container_config_hash.py'
Dec 05 01:39:51 compute-0 sudo[385523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:51 compute-0 podman[385447]: 2025-12-05 01:39:51.368308655 +0000 UTC m=+0.117555438 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:39:51 compute-0 podman[385448]: 2025-12-05 01:39:51.380293843 +0000 UTC m=+0.123472835 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:39:51 compute-0 podman[385450]: 2025-12-05 01:39:51.384587094 +0000 UTC m=+0.114614066 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, vcs-type=git, name=ubi9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=kepler, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 01:39:51 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:39:51 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Failed with result 'exit-code'.
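Podman healthchecks run as transient systemd units named <container-id>-<token>.service, so this failure is the ceilometer_agent_ipmi check exiting 1; it lines up with health_failing_streak=1 in the podman event for 88e42465... just above. The streak can be read back from the container state:

    import json, subprocess

    # Inspect the health state behind the failed transient unit above; the
    # id prefix matches the unit name, and FailingStreak should mirror the
    # health_failing_streak=1 in the event.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "88e42465dfb3"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["FailingStreak"])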
Dec 05 01:39:51 compute-0 podman[385449]: 2025-12-05 01:39:51.418132038 +0000 UTC m=+0.156025901 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 01:39:51 compute-0 python3.9[385542]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:39:51 compute-0 sudo[385523]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.468107) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792468200, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 711, "num_deletes": 256, "total_data_size": 891549, "memory_usage": 904728, "flush_reason": "Manual Compaction"}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792482431, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 883797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18682, "largest_seqno": 19392, "table_properties": {"data_size": 880136, "index_size": 1505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7699, "raw_average_key_size": 17, "raw_value_size": 872778, "raw_average_value_size": 2034, "num_data_blocks": 69, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898733, "oldest_key_time": 1764898733, "file_creation_time": 1764898792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 14369 microseconds, and 6707 cpu microseconds.
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.482481) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 883797 bytes OK
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.482506) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.485705) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.485729) EVENT_LOG_v1 {"time_micros": 1764898792485722, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.485751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 887876, prev total WAL file size 887876, number of live WAL files 2.
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.486872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(863KB)], [44(6026KB)]
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792486966, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7055176, "oldest_snapshot_seqno": -1}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4065 keys, 6922673 bytes, temperature: kUnknown
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792566586, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6922673, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6894990, "index_size": 16437, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 100749, "raw_average_key_size": 24, "raw_value_size": 6820716, "raw_average_value_size": 1677, "num_data_blocks": 692, "num_entries": 4065, "num_filter_entries": 4065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.567016) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6922673 bytes
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.569757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 86.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 5.9 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(15.8) write-amplify(7.8) OK, records in: 4587, records dropped: 522 output_compression: NoCompression
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.569795) EVENT_LOG_v1 {"time_micros": 1764898792569777, "job": 22, "event": "compaction_finished", "compaction_time_micros": 79717, "compaction_time_cpu_micros": 35798, "output_level": 6, "num_output_files": 1, "total_output_size": 6922673, "num_input_records": 4587, "num_output_records": 4065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792570363, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898792572338, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.486582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.572691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.572697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.572701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.572704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:39:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:39:52.572707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
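The amplification figures RocksDB prints for JOB 22 reproduce from the event log above: write-amplify is compaction output over the freshly flushed L0 bytes, and read-write-amplify counts the bytes read as well:

    # Numbers from the flush/compaction events above (table #46 flushed to
    # L0, then #46 + #44 compacted into #47).
    flush_bytes = 883797      # L0 table #46: the new data entering the tree
    input_bytes = 7055176     # compaction_started input_data_size
    output_bytes = 6922673    # compacted table #47
    print(round(output_bytes / flush_bytes, 1))                  # 7.8  write-amplify
    print(round((input_bytes + output_bytes) / flush_bytes, 1))  # 15.8 read-write-amplify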
Dec 05 01:39:52 compute-0 podman[385661]: 2025-12-05 01:39:52.706193025 +0000 UTC m=+0.113577357 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:39:52 compute-0 sudo[385743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkftheyaoyqvvgfdlthrcgdwkzkkeeza ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898792.010682-374-79572782610499/AnsiballZ_edpm_container_manage.py'
Dec 05 01:39:52 compute-0 podman[385690]: 2025-12-05 01:39:52.809294027 +0000 UTC m=+0.117888489 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350)
Dec 05 01:39:52 compute-0 sudo[385743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:53 compute-0 python3[385750]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:39:53 compute-0 python3[385750]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc",
                                                     "Digest": "sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d",
                                                     "RepoTags": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:b2785dbc3ceaa930dff8068bbb8654af2e0b40a9c2632300641cb8348e9cf43d"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2025-12-01T06:21:56.309143559Z",
                                                     "Config": {
                                                          "User": "ceilometer",
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "LANG=en_US.UTF-8",
                                                               "TZ=UTC",
                                                               "container=oci"
                                                          ],
                                                          "Entrypoint": [
                                                               "dumb-init",
                                                               "--single-child",
                                                               "--"
                                                          ],
                                                          "Cmd": [
                                                               "kolla_start"
                                                          ],
                                                          "Labels": {
                                                               "io.buildah.version": "1.41.3",
                                                               "maintainer": "OpenStack Kubernetes Operator team",
                                                               "org.label-schema.build-date": "20251125",
                                                               "org.label-schema.license": "GPLv2",
                                                               "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                               "org.label-schema.schema-version": "1.0",
                                                               "org.label-schema.vendor": "CentOS",
                                                               "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "tcib_managed": "true"
                                                          },
                                                          "StopSignal": "SIGTERM"
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 506187128,
                                                     "VirtualSize": 506187128,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/821b44142d5812fced017a49e9cde2155fbb57b89e20e5e28a492c08b7bcc279/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                               "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                               "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                               "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",
                                                               "sha256:fac9f22f4739f84f681c87b7458e8da1dae9a71bb9d7e632a7076d50c98f8070"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "io.buildah.version": "1.41.3",
                                                          "maintainer": "OpenStack Kubernetes Operator team",
                                                          "org.label-schema.build-date": "20251125",
                                                          "org.label-schema.license": "GPLv2",
                                                          "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                          "org.label-schema.schema-version": "1.0",
                                                          "org.label-schema.vendor": "CentOS",
                                                          "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                          "tcib_managed": "true"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                     "User": "ceilometer",
                                                     "History": [
                                                          {
                                                               "created": "2025-11-25T04:02:36.223494528Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:36.223562059Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-11-25T04:02:39.054452717Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025707917Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                               "comment": "FROM quay.io/centos/centos:stream9",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025744608Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025767729Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025791379Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.02581523Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.025867611Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:09:28.469442331Z",
                                                               "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:02.029095017Z",
                                                               "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:05.672474685Z",
                                                               "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.113425253Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:06.532320725Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.370061347Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:07.805172373Z",
                                                               "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.259306372Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:08.625948784Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.028304824Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.423316076Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:09.801219631Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.239187116Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:10.70996597Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.147342611Z",
                                                               "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:11.5739488Z",
                                                               "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.006975065Z",
                                                               "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:12.421255505Z",
                                                               "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.066694755Z",
                                                               "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.475695836Z",
                                                               "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:16.8971372Z",
                                                               "created_by": "/bin/sh -c mkdir -p /openstack",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:18.542651107Z",
                                                               "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622503041Z",
                                                               "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622561802Z",
                                                               "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622578342Z",
                                                               "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:20.622594423Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:10:22.080892529Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:15.092312074Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:53.218820537Z",
                                                               "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:12:56.858075591Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:14:56.244673147Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:14:56.960273159Z",
                                                               "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:15:37.588899909Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:15:41.197123864Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:19.693367404Z",
                                                               "created_by": "/bin/sh -c #(nop) USER root",
                                                               "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:56.306692765Z",
                                                               "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-ipmi && dnf clean all && rm -rf /var/cache/dnf",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:56.306749376Z",
                                                               "created_by": "/bin/sh -c #(nop) USER ceilometer",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2025-12-01T06:21:59.082745267Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"
                                                     ]
                                                }
                                           ]
                                           : quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 05 01:39:53 compute-0 ceph-mon[192914]: pgmap v908: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:53 compute-0 sudo[385743]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:54 compute-0 sudo[385958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrnknmumnvihjcnknjyqobeymfckcffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898793.9450514-382-141210717274867/AnsiballZ_stat.py'
Dec 05 01:39:54 compute-0 sudo[385958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:54 compute-0 python3.9[385960]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:39:54 compute-0 sudo[385958]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:55 compute-0 ceph-mon[192914]: pgmap v909: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:55 compute-0 sudo[386112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubekzzbxokkqghzldowbpuipstjtzygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898795.2226138-391-141757387940593/AnsiballZ_file.py'
Dec 05 01:39:55 compute-0 sudo[386112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:56 compute-0 python3.9[386114]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:56 compute-0 sudo[386112]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:39:56.166 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:39:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:39:56.167 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:39:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:39:56.168 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:39:56 compute-0 sudo[386190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:39:56 compute-0 sudo[386190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:56 compute-0 sudo[386190]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:56 compute-0 sudo[386238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:39:56 compute-0 sudo[386238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:56 compute-0 sudo[386238]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:57 compute-0 ceph-mon[192914]: pgmap v910: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:57 compute-0 sudo[386288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:39:57 compute-0 sudo[386288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:57 compute-0 sudo[386288]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:57 compute-0 sudo[386336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-audbpmxgcyilvmmaoeeqcbxwipfpmkas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898796.1781101-391-34343539553222/AnsiballZ_copy.py'
Dec 05 01:39:57 compute-0 sudo[386336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:57 compute-0 sudo[386340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:39:57 compute-0 sudo[386340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:57 compute-0 python3.9[386342]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898796.1781101-391-34343539553222/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:39:57 compute-0 sudo[386336]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:39:57 compute-0 sudo[386340]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:39:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:39:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:39:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:39:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:39:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15bde09b-73fe-4224-a406-04916f341180 does not exist
Dec 05 01:39:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4e44bc54-2af5-4297-b3dc-3c172b8835f4 does not exist
Dec 05 01:39:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f4c7a6d5-82be-498d-9383-be0c12efbc9d does not exist
Dec 05 01:39:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:39:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:39:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:39:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:39:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:39:58 compute-0 sudo[386442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:39:58 compute-0 sudo[386442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:58 compute-0 sudo[386442]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:58 compute-0 sudo[386492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwxlquirhsjclnixbtgqhkkvmxktozmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898796.1781101-391-34343539553222/AnsiballZ_systemd.py'
Dec 05 01:39:58 compute-0 sudo[386492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:58 compute-0 sudo[386495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:39:58 compute-0 sudo[386495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:58 compute-0 sudo[386495]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:58 compute-0 sudo[386521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:39:58 compute-0 sudo[386521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:58 compute-0 sudo[386521]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:58 compute-0 python3.9[386498]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:39:58 compute-0 sudo[386492]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:58 compute-0 sudo[386546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:39:58 compute-0 sudo[386546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:39:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:59 compute-0 ceph-mon[192914]: pgmap v911: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.167470563 +0000 UTC m=+0.071400351 container create cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.13431405 +0000 UTC m=+0.038243888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:39:59 compute-0 systemd[1]: Started libpod-conmon-cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b.scope.
Dec 05 01:39:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.285094103 +0000 UTC m=+0.189023901 container init cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.297855232 +0000 UTC m=+0.201785010 container start cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.303183722 +0000 UTC m=+0.207113540 container attach cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:39:59 compute-0 epic_lewin[386721]: 167 167
Dec 05 01:39:59 compute-0 systemd[1]: libpod-cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b.scope: Deactivated successfully.
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.310104227 +0000 UTC m=+0.214034015 container died cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7c78714ce548ede76e4348522114a434d93acfc5f0fe85fa507af2946c707bd-merged.mount: Deactivated successfully.
Dec 05 01:39:59 compute-0 podman[386671]: 2025-12-05 01:39:59.466035855 +0000 UTC m=+0.369965643 container remove cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:39:59 compute-0 systemd[1]: libpod-conmon-cf098e81be31ab108590e37f2f5e1f0c739613686b33ed1f052fb8c36604c85b.scope: Deactivated successfully.
Dec 05 01:39:59 compute-0 sudo[386791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eotgsmjdqgkgkfiazoxjatyyciexyfhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898799.012478-413-237600475388519/AnsiballZ_container_config_data.py'
Dec 05 01:39:59 compute-0 sudo[386791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:39:59 compute-0 podman[386799]: 2025-12-05 01:39:59.727984776 +0000 UTC m=+0.086316090 container create 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:39:59 compute-0 podman[158197]: time="2025-12-05T01:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:39:59 compute-0 podman[386799]: 2025-12-05 01:39:59.693469015 +0000 UTC m=+0.051800409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:39:59 compute-0 python3.9[386793]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 05 01:39:59 compute-0 systemd[1]: Started libpod-conmon-82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626.scope.
Dec 05 01:39:59 compute-0 sudo[386791]: pam_unix(sudo:session): session closed for user root
Dec 05 01:39:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:39:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:39:59 compute-0 podman[386799]: 2025-12-05 01:39:59.887637589 +0000 UTC m=+0.245968963 container init 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:39:59 compute-0 podman[386799]: 2025-12-05 01:39:59.900523312 +0000 UTC m=+0.258854616 container start 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 01:39:59 compute-0 podman[386799]: 2025-12-05 01:39:59.905635796 +0000 UTC m=+0.263967190 container attach 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:39:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44292 "" "Go-http-client/1.1"
Dec 05 01:39:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8534 "" "Go-http-client/1.1"
Dec 05 01:40:00 compute-0 sudo[386977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzmzkmspjpgjjbjmsxapaonxofukzjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898800.1516275-422-265651821747566/AnsiballZ_container_config_hash.py'
Dec 05 01:40:00 compute-0 sudo[386977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:00 compute-0 python3.9[386979]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 05 01:40:00 compute-0 sudo[386977]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:01 compute-0 ceph-mon[192914]: pgmap v912: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:01 compute-0 affectionate_stonebraker[386815]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:40:01 compute-0 affectionate_stonebraker[386815]: --> relative data size: 1.0
Dec 05 01:40:01 compute-0 affectionate_stonebraker[386815]: --> All data devices are unavailable
Dec 05 01:40:01 compute-0 systemd[1]: libpod-82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626.scope: Deactivated successfully.
Dec 05 01:40:01 compute-0 systemd[1]: libpod-82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626.scope: Consumed 1.201s CPU time.
Dec 05 01:40:01 compute-0 podman[386799]: 2025-12-05 01:40:01.171859889 +0000 UTC m=+1.530191233 container died 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b648c30c710ed079ce284ba64dae40ec26e2f278ab6d2b8b247686e692843c-merged.mount: Deactivated successfully.
Dec 05 01:40:01 compute-0 podman[386799]: 2025-12-05 01:40:01.268267562 +0000 UTC m=+1.626598866 container remove 82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:40:01 compute-0 systemd[1]: libpod-conmon-82332ac09992e5f4c0fb959289b44340ca8a78bc3e158267d74531bc30901626.scope: Deactivated successfully.
Dec 05 01:40:01 compute-0 sudo[386546]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:01 compute-0 sudo[387046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:40:01 compute-0 sudo[387046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:01 compute-0 sudo[387046]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:01 compute-0 openstack_network_exporter[366555]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:40:01 compute-0 openstack_network_exporter[366555]: ERROR   01:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:40:01 compute-0 openstack_network_exporter[366555]: ERROR   01:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:40:01 compute-0 openstack_network_exporter[366555]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:40:01 compute-0 openstack_network_exporter[366555]: ERROR   01:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:40:01 compute-0 sudo[387100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:40:01 compute-0 sudo[387100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:01 compute-0 sudo[387100]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:01 compute-0 sudo[387136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:40:01 compute-0 sudo[387136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:01 compute-0 sudo[387136]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:01 compute-0 sudo[387182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:40:01 compute-0 sudo[387182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:01 compute-0 sudo[387257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjerxiveogffeayjfxyrkiwuliliufpv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898801.3429544-432-267250823674379/AnsiballZ_edpm_container_manage.py'
Dec 05 01:40:01 compute-0 sudo[387257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:02 compute-0 python3[387261]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.317298672 +0000 UTC m=+0.084850369 container create d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.288121871 +0000 UTC m=+0.055673608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:40:02 compute-0 systemd[1]: Started libpod-conmon-d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985.scope.
Dec 05 01:40:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.426786923 +0000 UTC m=+0.194338650 container init d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.445307025 +0000 UTC m=+0.212858682 container start d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.449986756 +0000 UTC m=+0.217538513 container attach d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:40:02 compute-0 confident_williamson[387347]: 167 167
Dec 05 01:40:02 compute-0 systemd[1]: libpod-d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985.scope: Deactivated successfully.
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.457822317 +0000 UTC m=+0.225373984 container died d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:40:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:02 compute-0 python3[387261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                {
                                                     "Id": "ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7",
                                                     "Digest": "sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086",
                                                     "RepoTags": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ],
                                                     "RepoDigests": [
                                                          "quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd",
                                                          "quay.io/sustainable_computing_io/kepler@sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086"
                                                     ],
                                                     "Parent": "",
                                                     "Comment": "",
                                                     "Created": "2024-10-15T06:30:56.315982344Z",
                                                     "Config": {
                                                          "Env": [
                                                               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "container=oci",
                                                               "NVIDIA_VISIBLE_DEVICES=all",
                                                               "NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "NVIDIA_MIG_CONFIG_DEVICES=all"
                                                          ],
                                                          "Entrypoint": [
                                                               "/usr/bin/kepler"
                                                          ],
                                                          "Labels": {
                                                               "architecture": "x86_64",
                                                               "build-date": "2024-09-18T21:23:30",
                                                               "com.redhat.component": "ubi9-container",
                                                               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                               "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "distribution-scope": "public",
                                                               "io.buildah.version": "1.29.0",
                                                               "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                               "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                               "io.openshift.expose-services": "",
                                                               "io.openshift.tags": "base rhel9",
                                                               "maintainer": "Red Hat, Inc.",
                                                               "name": "ubi9",
                                                               "release": "1214.1726694543",
                                                               "release-0.7.12": "",
                                                               "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                               "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                               "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                               "vcs-type": "git",
                                                               "vendor": "Red Hat, Inc.",
                                                               "version": "9.4"
                                                          }
                                                     },
                                                     "Version": "",
                                                     "Author": "",
                                                     "Architecture": "amd64",
                                                     "Os": "linux",
                                                     "Size": 331545571,
                                                     "VirtualSize": 331545571,
                                                     "GraphDriver": {
                                                          "Name": "overlay",
                                                          "Data": {
                                                               "LowerDir": "/var/lib/containers/storage/overlay/de1557109facda5eb038045e25371b06ad2baf5cf32c60a7fe84a603bee1e079/diff:/var/lib/containers/storage/overlay/725f7e4e3b8edde36f0bdcd313bbaf872dbe55b162264f8008ee3c09a0b89b66/diff:/var/lib/containers/storage/overlay/573769ea2305456dffa2f0674424aa020c1494387d36bcccb339788fd220d39b/diff:/var/lib/containers/storage/overlay/56a7d751d1997fb4e9fb31bd07356a0c9a7699a9bb524feeb3c7fe2b433b8223/diff:/var/lib/containers/storage/overlay/0560e6233aa93f1e1ac7bed53255811f32dc680869ef7f31dd630efc1203b853/diff:/var/lib/containers/storage/overlay/8d984035cdde48f32944ddaa464ac42d376faabc98415168800b2b8c9aec0930/diff:/var/lib/containers/storage/overlay/e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75/diff",
                                                               "UpperDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/diff",
                                                               "WorkDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/work"
                                                          }
                                                     },
                                                     "RootFS": {
                                                          "Type": "layers",
                                                          "Layers": [
                                                               "sha256:e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75",
                                                               "sha256:f947b23b2d0723eac9b608b79e6d48e59d90f74958e05f2762295489e0088e86",
                                                               "sha256:3bf6ab40cc16a103a087232c2c6a1a093dcb6141e70397de57907f5d00741429",
                                                               "sha256:2f5269f1ade14b3b0806305a0b2d3efffe65a187b302789a50ac00bcb815b960",
                                                               "sha256:413f5abb84bd1c03bdfd9c1e0dec8f4be92159c9c6116c4e44247efcdcc6b518",
                                                               "sha256:60c06a2423851502fc43aec0680b91181b0d62b52812c019d3fc66f1546c4529",
                                                               "sha256:323ce4bcad35618db6032dd5bfbd6c8ebb0cde882f730b19296d0ceaf5e39427",
                                                               "sha256:270b3386a8e4a2127a32b007abfea7cb394ae1dee577ee7fefdbb79cd2bea856"
                                                          ]
                                                     },
                                                     "Labels": {
                                                          "architecture": "x86_64",
                                                          "build-date": "2024-09-18T21:23:30",
                                                          "com.redhat.component": "ubi9-container",
                                                          "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
                                                          "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "distribution-scope": "public",
                                                          "io.buildah.version": "1.29.0",
                                                          "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",
                                                          "io.k8s.display-name": "Red Hat Universal Base Image 9",
                                                          "io.openshift.expose-services": "",
                                                          "io.openshift.tags": "base rhel9",
                                                          "maintainer": "Red Hat, Inc.",
                                                          "name": "ubi9",
                                                          "release": "1214.1726694543",
                                                          "release-0.7.12": "",
                                                          "summary": "Provides the latest release of Red Hat Universal Base Image 9.",
                                                          "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",
                                                          "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",
                                                          "vcs-type": "git",
                                                          "vendor": "Red Hat, Inc.",
                                                          "version": "9.4"
                                                     },
                                                     "Annotations": {},
                                                     "ManifestType": "application/vnd.oci.image.manifest.v1+json",
                                                     "User": "",
                                                     "History": [
                                                          {
                                                               "created": "2024-09-18T21:36:31.099323493Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:0067eb9f2ee25ab2d666a7639a85fe707b582902a09242761abf30c53664069b in / ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.031010231Z",
                                                               "created_by": "/bin/sh -c mv -f /etc/yum.repos.d/ubi.repo /tmp || :",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.418413433Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:5b1f650e1376d79fa3a65df4a154ea5166def95154b52c1c1097dfd8fc7d58eb in /tmp/tls-ca-bundle.pem ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.91238548Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD multi:7a67822d03b1a3ddb205cc3fcf7acd9d3180aef5988a5d25887bc0753a7a493b in /etc/yum.repos.d/ ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912448474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912573716Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-container\"       name=\"ubi9\"       version=\"9.4\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912652474Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912740628Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of Red Hat Universal Base Image 9.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912866673Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL description=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912921304Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.k8s.display-name=\"Red Hat Universal Base Image 9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.912962586Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.expose-services=\"\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913001888Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL io.openshift.tags=\"base rhel9\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913021599Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV container oci",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913081151Z",
                                                               "created_by": "/bin/sh -c #(nop) ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:32.913091001Z",
                                                               "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:33.824802353Z",
                                                               "created_by": "/bin/sh -c rm -rf /var/log/*",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:34.766737128Z",
                                                               "created_by": "/bin/sh -c mkdir -p /var/log/rhsm",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.121320055Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:ed34e436a5c2cc729eecd8b15b94c75028aea1cb18b739cafbb293b5e4ad5dae in /root/buildinfo/content_manifests/ubi9-container-9.4-1214.1726694543.json ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.525712655Z",
                                                               "created_by": "/bin/sh -c #(nop) ADD file:d56bb1961538221b52d7e292418978f186bf67b9906771f38530fc3996a9d0d4 in /root/buildinfo/Dockerfile-ubi9-9.4-1214.1726694543 ",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:35.526152969Z",
                                                               "created_by": "/bin/sh -c #(nop) LABEL \"release\"=\"1214.1726694543\" \"distribution-scope\"=\"public\" \"vendor\"=\"Red Hat, Inc.\" \"build-date\"=\"2024-09-18T21:23:30\" \"architecture\"=\"x86_64\" \"vcs-type\"=\"git\" \"vcs-ref\"=\"e309397d02fc53f7fa99db1371b8700eb49f268f\" \"io.k8s.description\"=\"The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.\" \"url\"=\"https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543\"",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:36.481014095Z",
                                                               "created_by": "/bin/sh -c rm -f '/etc/yum.repos.d/odcs-3496925-3b364.repo' '/etc/yum.repos.d/rhel-9.4-compose-34ae9.repo'",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:37.364179091Z",
                                                               "created_by": "/bin/sh -c rm -f /tmp/tls-ca-bundle.pem",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-09-18T21:36:41.423178117Z",
                                                               "created_by": "/bin/sh -c mv -fZ /tmp/ubi.repo /etc/yum.repos.d/ubi.repo || :"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "SHELL [/bin/bash -c]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_DCGM=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG INSTALL_HABANA=false",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ARG TARGETARCH=amd64",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_VISIBLE_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_DRIVER_CAPABILITIES=utility",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_MONITOR_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "ENV NVIDIA_MIG_CONFIG_DEVICES=all",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:14.211190228Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c yum -y update-minimal --security --sec-severity=Important --sec-severity=Critical && yum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:28:38.991358946Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c set -e -x ;\t\tINSTALL_PKGS=\" \t\t\tlibbpf  \t\t\" ;\t\tyum install -y $INSTALL_PKGS ;\t\t\t\tif [[ \"$TARGETARCH\" == \"amd64\" ]]; then \t\t\tyum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm; \t\t\tyum install -y cpuid; \t\t\tif [[ \"$INSTALL_DCGM\" == \"true\" ]]; then \t\t\t\tdnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo; \t\t\t\tyum install -y datacenter-gpu-manager libnvidia-ml; \t\t\tfi; \t\t\tif [[ \"$INSTALL_HABANA\" == \"true\" ]]; then \t\t\t\trpm -Uvh https://vault.habana.ai/artifactory/rhel/9/9.2/habanalabs-firmware-tools-1.15.1-15.el9.x86_64.rpm --nodeps; \t\t\t\techo /usr/lib/habanalabs > /etc/ld.so.conf.d/habanalabs.conf; \t\t\t\tldconfig; \t\t\tfi; \t\tfi;\t\tyum clean all # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.146511902Z",
                                                               "created_by": "COPY /workspace/_output/bin/kepler /usr/bin/kepler # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.168608119Z",
                                                               "created_by": "COPY /libbpf-source/linux-5.14.0-424.el9/tools/bpf/bpftool/bpftool /usr/bin/bpftool # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.24706386Z",
                                                               "created_by": "RUN |3 INSTALL_DCGM=false INSTALL_HABANA=false TARGETARCH=amd64 /bin/bash -c mkdir -p /var/lib/kepler/data # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.299132132Z",
                                                               "created_by": "COPY /workspace/data/cpus.yaml /var/lib/kepler/data/cpus.yaml # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "COPY /workspace/data/model_weight /var/lib/kepler/data/model_weight # buildkit",
                                                               "comment": "buildkit.dockerfile.v0"
                                                          },
                                                          {
                                                               "created": "2024-10-15T06:30:56.315982344Z",
                                                               "created_by": "ENTRYPOINT [\"/usr/bin/kepler\"]",
                                                               "comment": "buildkit.dockerfile.v0",
                                                               "empty_layer": true
                                                          }
                                                     ],
                                                     "NamesHistory": [
                                                          "quay.io/sustainable_computing_io/kepler:release-0.7.12"
                                                     ]
                                                }
                                           ]
                                           : quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 05 01:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b137fd5b76791d0d52133b8c00bf4a938dfb01642099e723a78a382d19ca07dd-merged.mount: Deactivated successfully.
Dec 05 01:40:02 compute-0 podman[387307]: 2025-12-05 01:40:02.522210359 +0000 UTC m=+0.289762036 container remove d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_williamson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:40:02 compute-0 systemd[1]: libpod-conmon-d4f6c542dae73271f5fd4039477c294748cc49b5c96bf455f5b110e099ed5985.scope: Deactivated successfully.
Dec 05 01:40:02 compute-0 kepler[177967]: I1205 01:40:02.580203       1 exporter.go:218] Received shutdown signal
Dec 05 01:40:02 compute-0 kepler[177967]: I1205 01:40:02.581580       1 exporter.go:226] Exiting...
Dec 05 01:40:02 compute-0 systemd[1]: libpod-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec 05 01:40:02 compute-0 systemd[1]: libpod-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Consumed 38.820s CPU time.
Dec 05 01:40:02 compute-0 conmon[177967]: conmon de56270197aa3402c446 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope/container/memory.events
Dec 05 01:40:02 compute-0 podman[387397]: 2025-12-05 01:40:02.764041744 +0000 UTC m=+0.069068615 container create d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:40:02 compute-0 podman[387371]: 2025-12-05 01:40:02.77029794 +0000 UTC m=+0.256981983 container died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:40:02 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.timer: Deactivated successfully.
Dec 05 01:40:02 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec 05 01:40:02 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Failed to open /run/systemd/transient/de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: No such file or directory
Dec 05 01:40:02 compute-0 systemd[1]: Started libpod-conmon-d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b.scope.
Dec 05 01:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-userdata-shm.mount: Deactivated successfully.
Dec 05 01:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a374ec8aa50f4d970047ac6324333a688dcc2712f075ca8bf268b9db1c5579b0-merged.mount: Deactivated successfully.
Dec 05 01:40:02 compute-0 podman[387371]: 2025-12-05 01:40:02.82609277 +0000 UTC m=+0.312776813 container cleanup de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Dec 05 01:40:02 compute-0 podman[387397]: 2025-12-05 01:40:02.740005058 +0000 UTC m=+0.045031979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:40:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:02 compute-0 python3[387261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop kepler
Dec 05 01:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ab83fa7c649bf76fccf127b541f367d6d6bd1852e037642ab8cf79e6dc163e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ab83fa7c649bf76fccf127b541f367d6d6bd1852e037642ab8cf79e6dc163e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ab83fa7c649bf76fccf127b541f367d6d6bd1852e037642ab8cf79e6dc163e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ab83fa7c649bf76fccf127b541f367d6d6bd1852e037642ab8cf79e6dc163e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:02 compute-0 podman[387397]: 2025-12-05 01:40:02.856403243 +0000 UTC m=+0.161430114 container init d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:40:02 compute-0 podman[387397]: 2025-12-05 01:40:02.876347105 +0000 UTC m=+0.181374016 container start d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:40:02 compute-0 podman[387397]: 2025-12-05 01:40:02.881001766 +0000 UTC m=+0.186028657 container attach d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:40:02 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.timer: Failed to open /run/systemd/transient/de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.timer: No such file or directory
Dec 05 01:40:02 compute-0 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Failed to open /run/systemd/transient/de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: No such file or directory
Dec 05 01:40:02 compute-0 podman[387431]: 2025-12-05 01:40:02.924657964 +0000 UTC m=+0.070166965 container remove de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public)
Dec 05 01:40:02 compute-0 podman[387430]: Error: no container with ID de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 found in database: no such container
Dec 05 01:40:02 compute-0 python3[387261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force kepler
Dec 05 01:40:02 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Dec 05 01:40:02 compute-0 podman[387456]: Error: no container with name or ID "kepler" found: no such container
Dec 05 01:40:03 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Dec 05 01:40:03 compute-0 systemd[1]: edpm_kepler.service: Failed with result 'exit-code'.
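# [Editor's note] The two status=125 exits above are `podman stop kepler` and
# `podman rm --force kepler` racing the container cleanup that had already
# removed de56270197aa; podman reports "no such container" and systemd marks
# edpm_kepler.service failed before the scheduled restart below recreates it.
# A sketch of an idempotent variant (--ignore makes both commands succeed
# when the container is already gone; an assumption about the desired
# behavior, not a command from this log):
podman stop --ignore kepler
podman rm --force --ignore kepler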
Dec 05 01:40:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:03 compute-0 podman[387455]: 2025-12-05 01:40:03.002132394 +0000 UTC m=+0.052520839 container create 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, release-0.7.12=)
Dec 05 01:40:03 compute-0 podman[387455]: 2025-12-05 01:40:02.973170569 +0000 UTC m=+0.023559044 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 05 01:40:03 compute-0 python3[387261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
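# [Editor's note] A quick way to verify that the `podman create` flags above
# actually landed on the recreated container. The Go template keys are
# assumptions based on podman's Docker-compatible inspect output, not taken
# from this log:
podman inspect kepler \
  --format '{{.HostConfig.Privileged}} {{.Config.Healthcheck.Test}}'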
Dec 05 01:40:03 compute-0 ceph-mon[192914]: pgmap v913: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:03 compute-0 systemd[1]: edpm_kepler.service: Scheduled restart job, restart counter is at 1.
Dec 05 01:40:03 compute-0 systemd[1]: Started libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope.
Dec 05 01:40:03 compute-0 systemd[1]: Stopped kepler container.
Dec 05 01:40:03 compute-0 systemd[1]: Starting kepler container...
Dec 05 01:40:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.
Dec 05 01:40:03 compute-0 podman[387479]: 2025-12-05 01:40:03.221156598 +0000 UTC m=+0.190707008 container init 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, name=ubi9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 05 01:40:03 compute-0 kepler[387496]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:40:03 compute-0 podman[387479]: 2025-12-05 01:40:03.261496993 +0000 UTC m=+0.231047363 container start 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.266741       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.267099       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.267128       1 config.go:295] kernel version: 5.14
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.267707       1 power.go:78] Unable to obtain power, use estimate method
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.267743       1 redfish.go:169] failed to get redfish credential file path
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.268675       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.268695       1 power.go:79] using none to obtain power
Dec 05 01:40:03 compute-0 kepler[387496]: E1205 01:40:03.268718       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 05 01:40:03 compute-0 kepler[387496]: E1205 01:40:03.268752       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 05 01:40:03 compute-0 kepler[387496]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.270729       1 exporter.go:84] Number of CPUs: 8
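# [Editor's note] The power.go/redfish.go/acpi.go lines above show Kepler
# probing its power sources in turn and falling back to the estimate model
# ("using none to obtain power") because neither RAPL, Redfish credentials,
# nor an ACPI power meter is exposed in this KVM guest. A host-side spot
# check; the sysfs path is the usual RAPL location, stated here as an
# assumption, not quoted from the log:
ls /sys/class/powercap/intel-rapl* 2>/dev/null || echo "RAPL not exposed"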
Dec 05 01:40:03 compute-0 podman[387495]: kepler
Dec 05 01:40:03 compute-0 python3[387261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start kepler
Dec 05 01:40:03 compute-0 systemd[1]: Started kepler container.
Dec 05 01:40:03 compute-0 podman[387517]: 2025-12-05 01:40:03.376499289 +0000 UTC m=+0.089748576 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
Dec 05 01:40:03 compute-0 systemd[1]: 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-2043af28fb0a5466.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:40:03 compute-0 systemd[1]: 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-2043af28fb0a5466.service: Failed with result 'exit-code'.
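# [Editor's note] Podman drives container healthchecks through transient
# systemd units named <container-id>-<hash>.{timer,service} under
# /run/systemd/transient/. The "Failed to open" pair at 01:40:02 is systemd
# tearing down units for the already-removed kepler container, and the
# status=1 exit here is the first probe against the new container (088cb6fe,
# health_failing_streak=1 above) firing before the exporter starts listening
# at 01:40:03.914. A sketch for listing the transient healthcheck timers on
# such a host (the 64-hex-char pattern matches the container-id naming):
systemctl list-timers --all | grep -E '[0-9a-f]{64}-'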
Dec 05 01:40:03 compute-0 sudo[387257]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:03 compute-0 amazing_lalande[387424]: {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     "0": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "devices": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "/dev/loop3"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             ],
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_name": "ceph_lv0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_size": "21470642176",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "name": "ceph_lv0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "tags": {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_name": "ceph",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.crush_device_class": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.encrypted": "0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_id": "0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.vdo": "0"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             },
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "vg_name": "ceph_vg0"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         }
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     ],
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     "1": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "devices": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "/dev/loop4"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             ],
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_name": "ceph_lv1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_size": "21470642176",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "name": "ceph_lv1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "tags": {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_name": "ceph",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.crush_device_class": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.encrypted": "0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_id": "1",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.vdo": "0"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             },
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "vg_name": "ceph_vg1"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         }
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     ],
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     "2": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "devices": [
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "/dev/loop5"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             ],
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_name": "ceph_lv2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_size": "21470642176",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "name": "ceph_lv2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "tags": {
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.cluster_name": "ceph",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.crush_device_class": "",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.encrypted": "0",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osd_id": "2",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:                 "ceph.vdo": "0"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             },
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "type": "block",
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:             "vg_name": "ceph_vg2"
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:         }
Dec 05 01:40:03 compute-0 amazing_lalande[387424]:     ]
Dec 05 01:40:03 compute-0 amazing_lalande[387424]: }
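# [Editor's note] The JSON emitted by amazing_lalande above is keyed by OSD
# id and matches the shape of `ceph-volume lvm list --format json` (run here
# via cephadm inside a quay.io/ceph/ceph container). A sketch for pulling
# each OSD's logical volume out of it with jq, assuming the block is saved
# to lvm_list.json:
jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path)"' lvm_list.json
# -> osd.0 /dev/ceph_vg0/ceph_lv0, osd.1 /dev/ceph_vg1/ceph_lv1, ...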
Dec 05 01:40:03 compute-0 systemd[1]: libpod-d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b.scope: Deactivated successfully.
Dec 05 01:40:03 compute-0 podman[387397]: 2025-12-05 01:40:03.664264248 +0000 UTC m=+0.969291169 container died d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4ab83fa7c649bf76fccf127b541f367d6d6bd1852e037642ab8cf79e6dc163e-merged.mount: Deactivated successfully.
Dec 05 01:40:03 compute-0 podman[387397]: 2025-12-05 01:40:03.747206592 +0000 UTC m=+1.052233473 container remove d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lalande, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:40:03 compute-0 systemd[1]: libpod-conmon-d3364cf7b28882c198807f785f987c72573a82a9229a22ac8835136622e5090b.scope: Deactivated successfully.
Dec 05 01:40:03 compute-0 sudo[387182]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.867392       1 watcher.go:83] Using in cluster k8s config
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.867473       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 05 01:40:03 compute-0 kepler[387496]: E1205 01:40:03.867618       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.876299       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.876383       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.884724       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.884798       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.899251       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.899309       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.899331       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913618       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913672       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913682       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913691       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913702       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913719       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913836       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.913878       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.914005       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.914039       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.914186       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 05 01:40:03 compute-0 kepler[387496]: I1205 01:40:03.914986       1 exporter.go:208] Started Kepler in 648.560782ms
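# [Editor's note] With the exporter now listening on 0.0.0.0:8888 (per the
# config_data ports entry and the exporter.go line above), a minimal scrape
# check from the compute host would be:
curl -s http://127.0.0.1:8888/metrics | grep -m 5 '^kepler_'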
Dec 05 01:40:03 compute-0 sudo[387580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:40:03 compute-0 sudo[387580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:03 compute-0 sudo[387580]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:04 compute-0 sudo[387615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:40:04 compute-0 sudo[387615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:04 compute-0 sudo[387615]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:04 compute-0 sudo[387646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:40:04 compute-0 sudo[387646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:04 compute-0 sudo[387646]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:04 compute-0 sudo[387689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:40:04 compute-0 sudo[387689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:04 compute-0 sudo[387877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbungfmvgxnwwkhhveipufzoahnsgnmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898804.3284886-440-128333836949757/AnsiballZ_stat.py'
Dec 05 01:40:04 compute-0 sudo[387877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:04 compute-0 podman[387878]: 2025-12-05 01:40:04.973623015 +0000 UTC m=+0.097881916 container create d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:40:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:04.937059736 +0000 UTC m=+0.061318687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:40:05 compute-0 systemd[1]: Started libpod-conmon-d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842.scope.
Dec 05 01:40:05 compute-0 ceph-mon[192914]: pgmap v914: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:05 compute-0 python3.9[387881]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:05.133980138 +0000 UTC m=+0.258239019 container init d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:05.151319285 +0000 UTC m=+0.275578156 container start d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:05.156361767 +0000 UTC m=+0.280620648 container attach d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:40:05 compute-0 sweet_turing[387895]: 167 167
Dec 05 01:40:05 compute-0 systemd[1]: libpod-d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842.scope: Deactivated successfully.
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:05.167598284 +0000 UTC m=+0.291857185 container died d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:40:05 compute-0 sudo[387877]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d335a42e7b89389c14318d0b3494d5fbd477e82b91f749ec0f00658d278f0a9-merged.mount: Deactivated successfully.
Dec 05 01:40:05 compute-0 podman[387878]: 2025-12-05 01:40:05.236034429 +0000 UTC m=+0.360293300 container remove d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:40:05 compute-0 systemd[1]: libpod-conmon-d4611c49ebd7c0a0b3de1927950780f579960364b13074479a3a40d646903842.scope: Deactivated successfully.
Dec 05 01:40:05 compute-0 podman[387944]: 2025-12-05 01:40:05.48374129 +0000 UTC m=+0.090942160 container create 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:40:05 compute-0 podman[387944]: 2025-12-05 01:40:05.457240725 +0000 UTC m=+0.064441595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:40:05 compute-0 systemd[1]: Started libpod-conmon-69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a.scope.
Dec 05 01:40:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203cb8dc4368b9de6355011069c2c420e155162f2a14874c5c6855b5f40c2f0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203cb8dc4368b9de6355011069c2c420e155162f2a14874c5c6855b5f40c2f0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203cb8dc4368b9de6355011069c2c420e155162f2a14874c5c6855b5f40c2f0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203cb8dc4368b9de6355011069c2c420e155162f2a14874c5c6855b5f40c2f0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:05 compute-0 podman[387944]: 2025-12-05 01:40:05.666309787 +0000 UTC m=+0.273510647 container init 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:40:05 compute-0 podman[387944]: 2025-12-05 01:40:05.684593412 +0000 UTC m=+0.291794242 container start 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:40:05 compute-0 podman[387944]: 2025-12-05 01:40:05.688553573 +0000 UTC m=+0.295754403 container attach 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:40:06 compute-0 sudo[388102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elswszuagrzvtwdywucjzguqbtfvxchx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898805.5082603-449-256849164776009/AnsiballZ_file.py'
Dec 05 01:40:06 compute-0 sudo[388102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:06 compute-0 python3.9[388107]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:06 compute-0 sudo[388102]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:06 compute-0 nice_euclid[388002]: {
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_id": 0,
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "type": "bluestore"
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     },
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_id": 1,
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "type": "bluestore"
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     },
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_id": 2,
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:40:06 compute-0 nice_euclid[388002]:         "type": "bluestore"
Dec 05 01:40:06 compute-0 nice_euclid[388002]:     }
Dec 05 01:40:06 compute-0 nice_euclid[388002]: }
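# [Editor's note] This second JSON block, keyed by OSD fsid, is the output of
# the `ceph-volume ... raw list --format json` call launched via cephadm in
# the sudo line at 01:40:04. A sketch mapping each OSD id to its BlueStore
# device, assuming the block is saved to raw_list.json:
jq -r '.[] | "osd.\(.osd_id) \(.device)"' raw_list.json
# -> osd.0 /dev/mapper/ceph_vg0-ceph_lv0, ...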
Dec 05 01:40:06 compute-0 systemd[1]: libpod-69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a.scope: Deactivated successfully.
Dec 05 01:40:06 compute-0 systemd[1]: libpod-69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a.scope: Consumed 1.199s CPU time.
Dec 05 01:40:06 compute-0 podman[387944]: 2025-12-05 01:40:06.889212281 +0000 UTC m=+1.496413151 container died 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-203cb8dc4368b9de6355011069c2c420e155162f2a14874c5c6855b5f40c2f0b-merged.mount: Deactivated successfully.
Dec 05 01:40:06 compute-0 podman[387944]: 2025-12-05 01:40:06.995262336 +0000 UTC m=+1.602463176 container remove 69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 05 01:40:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:07 compute-0 systemd[1]: libpod-conmon-69ae247f22fff7c4af70ac63496bf00974c926b7d4e1a405ebb1dfcbf1e3e30a.scope: Deactivated successfully.
Dec 05 01:40:07 compute-0 sudo[387689]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:40:07 compute-0 ceph-mon[192914]: pgmap v915: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 01:40:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:40:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:40:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.090 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 01:40:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 02bd2a07-ed43-4bf4-aecf-73954115cbcc does not exist
Dec 05 01:40:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fabbed53-40a7-423f-be58-7442102dee6c does not exist
Dec 05 01:40:07 compute-0 nova_compute[349548]: 2025-12-05 01:40:07.103 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:07 compute-0 sudo[388186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:40:07 compute-0 sudo[388186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:07 compute-0 sudo[388186]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:07 compute-0 podman[388233]: 2025-12-05 01:40:07.347611601 +0000 UTC m=+0.096857757 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 01:40:07 compute-0 sudo[388246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:40:07 compute-0 sudo[388246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:40:07 compute-0 sudo[388246]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:07 compute-0 podman[388234]: 2025-12-05 01:40:07.383968954 +0000 UTC m=+0.129376632 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:40:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:07 compute-0 sudo[388372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ildadhnsnvxwnumidquljrhrerhenlkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898806.897829-449-244947734899941/AnsiballZ_copy.py'
Dec 05 01:40:07 compute-0 sudo[388372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:07 compute-0 python3.9[388374]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898806.897829-449-244947734899941/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:07 compute-0 sudo[388372]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:40:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:40:08 compute-0 sudo[388448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmsemixvyybfggawvqntvrqparfqskwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898806.897829-449-244947734899941/AnsiballZ_systemd.py'
Dec 05 01:40:08 compute-0 sudo[388448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:08 compute-0 python3.9[388450]: ansible-systemd Invoked with state=started name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 05 01:40:08 compute-0 sudo[388448]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:09 compute-0 ceph-mon[192914]: pgmap v916: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:09 compute-0 nova_compute[349548]: 2025-12-05 01:40:09.120 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:09 compute-0 nova_compute[349548]: 2025-12-05 01:40:09.120 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:40:09 compute-0 sudo[388617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afufnjokwxzgohkqcyakppfboatsiqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898809.0428991-469-95207546175347/AnsiballZ_systemd.py'
Dec 05 01:40:09 compute-0 sudo[388617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:09 compute-0 podman[388576]: 2025-12-05 01:40:09.698203809 +0000 UTC m=+0.174119221 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, managed_by=edpm_ansible)
Dec 05 01:40:10 compute-0 python3.9[388624]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:10 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.110 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.111 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.111 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.111 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.112 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:40:10.182 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:40:10.284 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:40:10.284 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:40:10.285 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[177712]: 2025-12-05 01:40:10.295 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec 05 01:40:10 compute-0 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:40:10 compute-0 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Consumed 3.691s CPU time.
Dec 05 01:40:10 compute-0 podman[388631]: 2025-12-05 01:40:10.475094682 +0000 UTC m=+0.379367047 container died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 05 01:40:10 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.timer: Deactivated successfully.
Dec 05 01:40:10 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec 05 01:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-userdata-shm.mount: Deactivated successfully.
Dec 05 01:40:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d-merged.mount: Deactivated successfully.
Dec 05 01:40:10 compute-0 podman[388631]: 2025-12-05 01:40:10.578510162 +0000 UTC m=+0.482782517 container cleanup 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec 05 01:40:10 compute-0 podman[388631]: ceilometer_agent_ipmi
Dec 05 01:40:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:40:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503029048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:40:10 compute-0 nova_compute[349548]: 2025-12-05 01:40:10.616 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:40:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2503029048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:40:10 compute-0 podman[388677]: ceilometer_agent_ipmi
Dec 05 01:40:10 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 05 01:40:10 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 05 01:40:10 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 05 01:40:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 05 01:40:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec 05 01:40:10 compute-0 podman[388691]: 2025-12-05 01:40:10.9653832 +0000 UTC m=+0.218373977 container init 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:40:10 compute-0 ceilometer_agent_ipmi[388706]: + sudo -E kolla_set_configs
Dec 05 01:40:11 compute-0 podman[388691]: 2025-12-05 01:40:11.004091429 +0000 UTC m=+0.257082186 container start 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 05 01:40:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:11 compute-0 podman[388691]: ceilometer_agent_ipmi
Dec 05 01:40:11 compute-0 sudo[388712]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 05 01:40:11 compute-0 sudo[388712]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:40:11 compute-0 sudo[388712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:40:11 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 05 01:40:11 compute-0 sudo[388617]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Validating config file
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Copying service configuration files
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.101 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.102 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4661MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: INFO:__main__:Writing out command to execute
Dec 05 01:40:11 compute-0 podman[388713]: 2025-12-05 01:40:11.106771838 +0000 UTC m=+0.085900888 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 01:40:11 compute-0 sudo[388712]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: ++ cat /run_command
Dec 05 01:40:11 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-561a54df6cf0a7cf.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:40:11 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-561a54df6cf0a7cf.service: Failed with result 'exit-code'.
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + ARGS=
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + sudo kolla_copy_cacerts
Dec 05 01:40:11 compute-0 sudo[388734]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 05 01:40:11 compute-0 sudo[388734]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:40:11 compute-0 sudo[388734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:40:11 compute-0 sudo[388734]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + [[ ! -n '' ]]
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + . kolla_extend_start
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + umask 0022
Dec 05 01:40:11 compute-0 ceilometer_agent_ipmi[388706]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.371 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.372 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.463 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.543 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.544 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.568 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.592 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 01:40:11 compute-0 nova_compute[349548]: 2025-12-05 01:40:11.607 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:40:11 compute-0 ceph-mon[192914]: pgmap v917: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:11 compute-0 sudo[388906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thgsjzocxnuutcwlovpdnnjjzenstzco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898811.3648562-477-27798096656841/AnsiballZ_systemd.py'
Dec 05 01:40:11 compute-0 sudo[388906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:40:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862151192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.112 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.112 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.112 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.112 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.113 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.114 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 nova_compute[349548]: 2025-12-05 01:40:12.114 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.115 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.116 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.117 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.118 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.121 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.122 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.125 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.126 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
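The row of asterisks above closes one full config dump, and note that sensitive options in it (coordination.backend_url, publisher.telemetry_secret, the rgw_admin_credentials keys, vmware.host_password) are printed as ****. That masking is standard oslo.config behaviour: options declared with secret=True are redacted by the same log_opt_values method the dump cites. A minimal, self-contained sketch of that mechanism, with illustrative option names rather than ceilometer's own:

    # Sketch: oslo.config redacts secret=True options in log_opt_values dumps
    # like the one above. Option names here are illustrative only.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('backend_url', secret=True, default='redis://10.0.0.1'),  # logged as '****'
        cfg.IntOpt('batch_size', default=50),                                # logged verbatim
    ])
    CONF([])  # parse an empty command line
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)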
Dec 05 01:40:12 compute-0 nova_compute[349548]: 2025-12-05 01:40:12.135 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.152 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.153 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.154 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
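The three INFO lines above are the dynamic-pollster discovery pass coming up empty: /etc/ceilometer/pollsters.d exists but holds no definition files, so the agent falls back to built-in pollsters only. A rough sketch of such a scan, under the assumption that definitions are YAML files in the configured directories (the helper name is hypothetical):

    # Hypothetical re-creation of the empty discovery pass logged above:
    # look for YAML pollster definitions in each configured directory.
    import glob
    import os

    def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            found.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
        return found  # empty here, hence "No dynamic pollsters found"

    print(find_dynamic_pollster_files())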
Dec 05 01:40:12 compute-0 nova_compute[349548]: 2025-12-05 01:40:12.161 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:40:12 compute-0 nova_compute[349548]: 2025-12-05 01:40:12.164 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:40:12 compute-0 nova_compute[349548]: 2025-12-05 01:40:12.165 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
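The inventory dict nova_compute reports a few lines up is what placement schedules against, where usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the exact values logged for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17:

    # Effective schedulable capacity from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1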
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.171 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpt6bapz0y/privsep.sock']
Dec 05 01:40:12 compute-0 sudo[388915]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpt6bapz0y/privsep.sock
Dec 05 01:40:12 compute-0 sudo[388915]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 05 01:40:12 compute-0 sudo[388915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 05 01:40:12 compute-0 python3.9[388908]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 05 01:40:12 compute-0 systemd[1]: Stopping kepler container...
Dec 05 01:40:12 compute-0 kepler[387496]: I1205 01:40:12.404946       1 exporter.go:218] Received shutdown signal
Dec 05 01:40:12 compute-0 kepler[387496]: I1205 01:40:12.406132       1 exporter.go:226] Exiting...
Dec 05 01:40:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:12 compute-0 systemd[1]: libpod-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Deactivated successfully.
Dec 05 01:40:12 compute-0 systemd[1]: libpod-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Consumed 1.153s CPU time.
Dec 05 01:40:12 compute-0 podman[388921]: 2025-12-05 01:40:12.626231838 +0000 UTC m=+0.299366236 container died 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=edpm, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 01:40:12 compute-0 systemd[1]: 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-2043af28fb0a5466.timer: Deactivated successfully.
Dec 05 01:40:12 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.
Dec 05 01:40:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/862151192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-userdata-shm.mount: Deactivated successfully.
Dec 05 01:40:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ae877336c5e7c632d675f2ecf6a7537636baeef66482d30ac059fb9581fd39-merged.mount: Deactivated successfully.
Dec 05 01:40:12 compute-0 podman[388921]: 2025-12-05 01:40:12.702978788 +0000 UTC m=+0.376113166 container cleanup 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec 05 01:40:12 compute-0 podman[388921]: kepler
Dec 05 01:40:12 compute-0 systemd[1]: libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Deactivated successfully.
Dec 05 01:40:12 compute-0 podman[388950]: kepler
Dec 05 01:40:12 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 05 01:40:12 compute-0 systemd[1]: Stopped kepler container.
Dec 05 01:40:12 compute-0 systemd[1]: Starting kepler container...
Dec 05 01:40:12 compute-0 sudo[388915]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.923 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.924 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpt6bapz0y/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.800 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.806 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.811 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 01:40:12 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:12.811 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
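The sudo/rootwrap invocation and the "Spawned new privsep daemon" sequence above is oslo.privsep forking a root helper connected over the /tmp/.../privsep.sock unix socket, which then drops to the capability set printed (CAP_SYS_ADMIN and friends). A minimal sketch of how a service declares such a context; the prefix and capability list mirror the log, but the exact ceilometer definition may differ and the decorated function is made up:

    # Sketch of an oslo.privsep context like the ceilometer.privsep.sys_admin_pctxt
    # named in the log; the entrypoint function below is hypothetical.
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @sys_admin_pctxt.entrypoint
    def read_ipmi_device():
        # Body runs inside the privileged daemon (uid/gid 0/0, capped as logged),
        # not in the unprivileged agent process that called it.
        with open('/dev/ipmi0', 'rb') as f:
            return f.read(0)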
Dec 05 01:40:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:40:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.
Dec 05 01:40:12 compute-0 podman[388964]: 2025-12-05 01:40:12.991233599 +0000 UTC m=+0.155558469 container init 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Dec 05 01:40:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:13 compute-0 kepler[388980]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:40:13 compute-0 podman[388964]: 2025-12-05 01:40:13.034950429 +0000 UTC m=+0.199275279 container start 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.035991       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.036194       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.036225       1 config.go:295] kernel version: 5.14
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.036854       1 power.go:78] Unable to obtain power, use estimate method
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.036923       1 redfish.go:169] failed to get redfish credential file path
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.037538       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.037559       1 power.go:79] using none to obtain power
Dec 05 01:40:13 compute-0 kepler[388980]: E1205 01:40:13.037581       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 05 01:40:13 compute-0 kepler[388980]: E1205 01:40:13.037612       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 05 01:40:13 compute-0 podman[388964]: kepler
Dec 05 01:40:13 compute-0 kepler[388980]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.040821       1 exporter.go:84] Number of CPUs: 8
Dec 05 01:40:13 compute-0 systemd[1]: Started kepler container.
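The config_data label repeated through the podman died/cleanup/init/start events above is edpm_ansible's source of truth for the kepler container. A rough, abridged reconstruction of the equivalent podman command line from that dict (a sketch only; the real edpm role assembles its arguments differently, and some environment entries and volumes are trimmed here):

    # Rough reconstruction of the kepler podman invocation from the
    # config_data label logged above (abridged; ordering is illustrative).
    config = {
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'privileged': 'true',
        'ports': ['8888:8888'],
        'net': 'host',
        'command': '-v=2',
        'environment': {'ENABLE_GPU': 'true', 'EXPOSE_VM_METRICS': 'true'},
        'volumes': ['/lib/modules:/lib/modules:ro', '/sys:/sys', '/proc:/proc'],
    }
    argv = ['podman', 'run', '--name', 'kepler', '--net', config['net']]
    if config['privileged'] == 'true':
        argv.append('--privileged')
    for p in config['ports']:
        argv += ['-p', p]
    for k, v in config['environment'].items():
        argv += ['-e', f'{k}={v}']
    for vol in config['volumes']:
        argv += ['-v', vol]
    argv += [config['image'], config['command']]
    print(' '.join(argv))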
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.060 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.062 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.066 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.067 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.067 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.067 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.068 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.068 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.068 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
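This warning is the sum of the "Skip loading extension" run above it: every pollster in the ipmi namespace failed to instantiate (ipmitool is unsupported on this KVM guest, and the node-manager pollsters raise in __new__), leaving the manager with an empty extension set. Ceilometer loads pollsters through stevedore with a per-extension failure callback; a minimal sketch of that pattern, with an assumed namespace name:

    # Sketch of stevedore loading with an on-failure callback, the pattern
    # behind the "Skip loading extension" lines (namespace name assumed).
    from stevedore import extension

    def _catch_extension_load_error(mgr, entrypoint, exc):
        print(f'Skip loading extension for {entrypoint.name}: {exc}')

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.ipmi',   # assumed entry-point namespace
        invoke_on_load=True,                # instantiation is what raises here
        on_load_failure_callback=_catch_extension_load_error,
    )
    if not mgr.names():
        print("No valid pollsters can be loaded from ['ipmi'] namespaces")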
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.074 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.075 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.076 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.076 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.076 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.077 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.077 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.077 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.077 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.077 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.078 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.078 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.078 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.078 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.079 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.079 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.079 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.080 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.080 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.080 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.080 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.081 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.081 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.081 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.081 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.081 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.082 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.083 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.084 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.084 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.084 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.084 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.084 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.085 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.086 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.087 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.088 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.089 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.090 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.091 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.092 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.093 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.094 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 sudo[388906]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.095 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.096 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.097 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.098 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.099 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.100 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.101 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.102 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.103 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.104 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
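The block above (closed by the line of asterisks) is oslo.config's standard startup dump: because log_options = True, cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values(), which emits one DEBUG line per registered option and substitutes **** for options marked secret (hence transport_url, coordination.backend_url, publisher.telemetry_secret and the rgw/vmware credentials all show as ****). A minimal sketch of that mechanism, assuming only stock oslo.config; the option names below are illustrative:

    # Minimal sketch of the oslo.config behaviour seen above: options
    # registered with secret=True are rendered as **** by log_opt_values().
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.IntOpt('http_timeout', default=600),
        cfg.StrOpt('transport_url', secret=True),   # masked in the dump
    ])
    conf([])  # parse an empty command line

    # Emits one DEBUG line per option, secrets shown as ****.
    conf.log_opt_values(logging.getLogger(__name__), logging.DEBUG)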
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.105 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 05 01:40:13 compute-0 ceilometer_agent_ipmi[388706]: 2025-12-05 01:40:13.108 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
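The ceilometer.agent line shows the parsed contents of the polling configuration (polling.cfg_file = polling.yaml in the dump above): a single source named "pollsters" polling hardware.* meters every 120 seconds. A sketch of the YAML that would parse to exactly the dict logged, assuming PyYAML for the round-trip; the file content is reconstructed from the logged value:

    # Reconstructing the polling.yaml that ceilometer.agent.load_config
    # reported above (content inferred from the logged dict).
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - hardware.*
    """

    cfg = yaml.safe_load(POLLING_YAML)
    assert cfg == {'sources': [{'name': 'pollsters',
                                'interval': 120,
                                'meters': ['hardware.*']}]}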
Dec 05 01:40:13 compute-0 podman[388992]: 2025-12-05 01:40:13.124573981 +0000 UTC m=+0.073932021 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 01:40:13 compute-0 systemd[1]: 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-43a46a95eb81a216.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:40:13 compute-0 systemd[1]: 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54-43a46a95eb81a216.service: Failed with result 'exit-code'.
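The two systemd lines record the transient unit that runs the kepler container's podman healthcheck exiting non-zero, consistent with health_status=starting and health_failing_streak=1 in the podman event above it. A hedged sketch of checking the accumulated health state for that container; the JSON shape follows podman's documented inspect output:

    # Inspect the podman healthcheck state for the kepler container;
    # State.Health carries Status, FailingStreak and a log of recent runs.
    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', 'kepler'],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)[0]['State']['Health']
    print(health['Status'], health['FailingStreak'])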
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.162 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.163 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.165 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.165 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.187 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.187 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.187 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:40:13 compute-0 nova_compute[349548]: 2025-12-05 01:40:13.188 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
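The nova_compute lines above are one pass of the oslo.service periodic task loop: methods such as _check_instance_build_time and _heal_instance_info_cache are registered with the @periodic_task.periodic_task decorator on ComputeManager and dispatched by run_periodic_tasks, which logs the "Running periodic task ..." line for each. A minimal sketch of that pattern, assuming only stock oslo.service and oslo.config; the class and method names here are illustrative, not nova's:

    # Minimal sketch of the oslo.service periodic-task pattern that
    # produces the "Running periodic task ..." DEBUG lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # Refresh cached state here; nova does its network info
            # cache healing in a task registered just like this one.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # runs whichever tasks are due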
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.622687       1 watcher.go:83] Using in cluster k8s config
Dec 05 01:40:13 compute-0 sudo[389167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwzkijvzfzvpfjkxrwzhkbjxlfxwjwou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898813.2780302-485-267815794592672/AnsiballZ_find.py'
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.622742       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 05 01:40:13 compute-0 kepler[388980]: E1205 01:40:13.623136       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 05 01:40:13 compute-0 sudo[389167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.629336       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.629372       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.633031       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.633059       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.645954       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.645984       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.645997       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659338       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659367       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659371       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659375       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659380       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659390       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659436       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659456       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659474       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659487       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659607       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 05 01:40:13 compute-0 kepler[388980]: I1205 01:40:13.659965       1 exporter.go:208] Started Kepler in 624.444193ms
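At this point kepler is fully up: the in-cluster k8s watcher is disabled (expected on a standalone EDPM node, per the KUBERNETES_SERVICE_HOST error above), process power uses the Ratio model, node platform/component power use regressor (SGDRegressorTrainer) models, and the Prometheus exporter listens on 0.0.0.0:8888. A quick scrape of the exporter, as a sketch assuming the conventional /metrics path and the kepler_ metric-name prefix:

    # Hedged sketch: pull kepler's Prometheus metrics from the port the
    # exporter logged above and print the node-level series.
    import urllib.request

    with urllib.request.urlopen('http://127.0.0.1:8888/metrics') as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith('kepler_node_'):
                print(line)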
Dec 05 01:40:13 compute-0 ceph-mon[192914]: pgmap v918: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:13 compute-0 python3.9[389175]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 05 01:40:13 compute-0 sudo[389167]: pam_unix(sudo:session): session closed for user root
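The ansible-ansible.builtin.find invocation above enumerates the per-service healthcheck mount directories under /var/lib/openstack/healthchecks/ (file_type=directory, recurse=False). Locally the same listing reduces to a few lines; a sketch mirroring those module parameters:

    # Sketch of what the ansible.builtin.find call above collects:
    # immediate subdirectories of the healthchecks root, non-recursive.
    from pathlib import Path

    root = Path('/var/lib/openstack/healthchecks/')
    dirs = [p for p in root.iterdir() if p.is_dir()]
    print(sorted(p.name for p in dirs))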
Dec 05 01:40:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:15 compute-0 ceph-mon[192914]: pgmap v919: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:15 compute-0 sudo[389329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgcvqysnecvgpyyjwojtqaegdicpuasr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898814.482244-495-173600317321482/AnsiballZ_podman_container_info.py'
Dec 05 01:40:15 compute-0 sudo[389329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:15 compute-0 python3.9[389331]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 05 01:40:15 compute-0 sudo[389329]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:40:16
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'backups', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
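This ceph-mgr balancer pass ran in upmap mode with max misplaced 0.05 and prepared 0/10 changes, i.e. PG placement across the listed pools already needs no rebalancing. The same state can be queried on demand; a hedged sketch using the stock "ceph balancer status" mgr command (the JSON field layout is podman-side output, not asserted here):

    # Query the mgr balancer state that the log lines above report.
    import subprocess

    status = subprocess.run(
        ['ceph', 'balancer', 'status', '-f', 'json-pretty'],
        check=True, capture_output=True, text=True,
    ).stdout
    print(status)  # includes the active flag and mode (here: upmap)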
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:40:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:40:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:17 compute-0 ceph-mon[192914]: pgmap v920: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:17 compute-0 sudo[389493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqmsynwwtogfkmyogcakdjvwlncncfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898816.4676304-503-125854868538913/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:17 compute-0 sudo[389493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:17 compute-0 python3.9[389495]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:17 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:40:17 compute-0 podman[389496]: 2025-12-05 01:40:17.624623598 +0000 UTC m=+0.145508236 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:40:17 compute-0 podman[389496]: 2025-12-05 01:40:17.663517943 +0000 UTC m=+0.184402551 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:40:17 compute-0 sudo[389493]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:17 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
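[annotation] The exec sequence above — sudo become, AnsiballZ_podman_container_exec, then a short-lived libpod-conmon scope — is Ansible's containers.podman.podman_container_exec module probing the container's runtime user with `id -u`. A minimal sketch of the same probe, assuming podman is on PATH and the ovn_controller container from the log exists:

    import subprocess

    # Hypothetical re-run of the probe logged above: ask the ovn_controller
    # container which UID its default user maps to. The module runs with
    # detach=False and captures stdout, so capture_output mirrors it.
    uid = subprocess.run(
        ["podman", "exec", "ovn_controller", "id", "-u"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(uid)  # expected "0" here, since config_data sets 'user': 'root'

The matching `id -g` exec two seconds later gathers the GID the same way.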
Dec 05 01:40:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:19 compute-0 ceph-mon[192914]: pgmap v921: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:19 compute-0 sudo[389675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxlebkqwhwyxamatjdpuwpluaxqlbmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898818.8112483-511-9941858087671/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:19 compute-0 sudo[389675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:19 compute-0 python3.9[389677]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:19 compute-0 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec 05 01:40:19 compute-0 podman[389679]: 2025-12-05 01:40:19.746159311 +0000 UTC m=+0.144448836 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:40:19 compute-0 podman[389679]: 2025-12-05 01:40:19.783337747 +0000 UTC m=+0.181627302 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 01:40:19 compute-0 sudo[389675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:19 compute-0 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec 05 01:40:20 compute-0 sudo[389860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehinrrjdphbziirygoyccdkqybdktdub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898820.160102-519-11098566515974/AnsiballZ_file.py'
Dec 05 01:40:20 compute-0 sudo[389860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:20 compute-0 python3.9[389862]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:21 compute-0 sudo[389860]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:21 compute-0 ceph-mon[192914]: pgmap v922: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:21 compute-0 podman[389945]: 2025-12-05 01:40:21.745738871 +0000 UTC m=+0.148659035 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 01:40:21 compute-0 podman[389952]: 2025-12-05 01:40:21.818415306 +0000 UTC m=+0.220761764 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
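[annotation] The health_status=healthy events for multipathd and ovn_controller are podman's periodic healthchecks running the configured test command ('/openstack/healthcheck') inside each container. The same check can be triggered by hand; a sketch using podman's healthcheck subcommand via subprocess, with the container name taken from the log:

    import subprocess

    # Exit status 0 means healthy; anything else increments the
    # health_failing_streak counter seen in these journal events.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")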
Dec 05 01:40:21 compute-0 sudo[390055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdfjugynshtlcaakcqrsdqizdqvuelkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898821.3807504-528-227813055326869/AnsiballZ_podman_container_info.py'
Dec 05 01:40:21 compute-0 sudo[390055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:22 compute-0 python3.9[390057]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 05 01:40:22 compute-0 sudo[390055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:23 compute-0 ceph-mon[192914]: pgmap v923: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:23 compute-0 sudo[390247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgywufbljwaqicbhvdgputzdbpuqevzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898822.577179-536-242794526101371/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:23 compute-0 sudo[390247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:23 compute-0 podman[390193]: 2025-12-05 01:40:23.37746165 +0000 UTC m=+0.150727193 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:40:23 compute-0 podman[390192]: 2025-12-05 01:40:23.378152879 +0000 UTC m=+0.158918883 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:40:23 compute-0 python3.9[390259]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:23 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:40:23 compute-0 podman[390265]: 2025-12-05 01:40:23.708870215 +0000 UTC m=+0.153498770 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec 05 01:40:23 compute-0 podman[390265]: 2025-12-05 01:40:23.74884336 +0000 UTC m=+0.193471865 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 01:40:23 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:40:23 compute-0 sudo[390247]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:24 compute-0 sudo[390444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgeuognmbuxlmvgricvuwfnuabvzkvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898824.139244-544-23017446220631/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:24 compute-0 sudo[390444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:24 compute-0 python3.9[390446]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:25 compute-0 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec 05 01:40:25 compute-0 podman[390447]: 2025-12-05 01:40:25.12865603 +0000 UTC m=+0.165994012 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:40:25 compute-0 podman[390447]: 2025-12-05 01:40:25.164427447 +0000 UTC m=+0.201765409 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 01:40:25 compute-0 sudo[390444]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:25 compute-0 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec 05 01:40:26 compute-0 ceph-mon[192914]: pgmap v924: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:26 compute-0 sudo[390625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gijyahllbxcuxczpwbsztndvyhbdflza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898825.5743613-552-170003907928057/AnsiballZ_file.py'
Dec 05 01:40:26 compute-0 sudo[390625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:26 compute-0 python3.9[390627]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:26 compute-0 sudo[390625]: pam_unix(sudo:session): session closed for user root
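[annotation] This repeats the probe-then-fix pattern seen earlier for ovn_controller: the `id -u`/`id -g` execs against ceilometer_agent_compute evidently returned the kolla ceilometer IDs, since the following ansible.builtin.file task applies owner=42405 group=42405 mode=0700 recurse=True to the healthcheck mount. A minimal Python equivalent of that recursive ownership/mode pass, assuming root privileges and the path from the log:

    import os

    path = "/var/lib/openstack/healthchecks/ceilometer_agent_compute"
    uid = gid = 42405  # the values the id -u / id -g probes fed into the task

    # Rough equivalent of file: state=directory, recurse=True, mode=0700.
    for dirpath, dirnames, filenames in os.walk(path):
        os.chown(dirpath, uid, gid)
        os.chmod(dirpath, 0o700)
        for name in filenames:
            f = os.path.join(dirpath, name)
            os.chown(f, uid, gid)
            os.chmod(f, 0o700)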
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:40:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
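[annotation] The pg_autoscaler lines above all fit one formula: pg_target = used_ratio × bias × PG budget, quantized to a power of two afterwards. The budget itself is not logged, but every line is consistent with a value of 300 — plausibly mon_target_pg_per_osd=100 across 3 OSDs on this 60 GiB cluster, which is an inference, not something the log states. A quick consistency check:

    # Reproduce the logged pg targets; the budget of 300 is an assumption.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * 300}")

Each product matches the 'pg target' the manager prints (e.g. 0.0021557249951162337 for '.mgr'); targets far below the current pg_num still quantize to small powers of two (16 for cephfs.cephfs.meta) without forcing an immediate change.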
Dec 05 01:40:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:27 compute-0 sudo[390777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrbdcwebpgffmctdnxkrfmqknynpqsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898826.8732853-561-82684370581678/AnsiballZ_podman_container_info.py'
Dec 05 01:40:27 compute-0 sudo[390777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:27 compute-0 python3.9[390779]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 05 01:40:27 compute-0 sudo[390777]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:28 compute-0 ceph-mon[192914]: pgmap v925: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:28 compute-0 sudo[390939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrpmhqugnjlaipntrnwevhnrpwloveq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898828.284743-569-33811840300270/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:28 compute-0 sudo[390939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:29 compute-0 python3.9[390941]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:29 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec 05 01:40:29 compute-0 podman[390942]: 2025-12-05 01:40:29.256705998 +0000 UTC m=+0.144991231 container exec 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:40:29 compute-0 podman[390942]: 2025-12-05 01:40:29.291419425 +0000 UTC m=+0.179704598 container exec_died 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:40:29 compute-0 systemd[1]: libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:40:29 compute-0 sudo[390939]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:29 compute-0 podman[158197]: time="2025-12-05T01:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:40:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42582 "" "Go-http-client/1.1"
Dec 05 01:40:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8101 "" "Go-http-client/1.1"
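[annotation] These two GETs against /v4.9.3/libpod/... come from podman_exporter scraping the libpod REST API through the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data; the Go-http-client user agent matches). The same endpoint can be queried directly over the unix socket with only the standard library; a sketch assuming the socket path from the log:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])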
Dec 05 01:40:30 compute-0 ceph-mon[192914]: pgmap v926: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:30 compute-0 sudo[391120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvkeuubjufdeeaqlvwvswlvzxbhybpnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898829.6807377-577-99198123539715/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:30 compute-0 sudo[391120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:30 compute-0 python3.9[391122]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:30 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec 05 01:40:30 compute-0 podman[391123]: 2025-12-05 01:40:30.668501018 +0000 UTC m=+0.135620848 container exec 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:40:30 compute-0 podman[391123]: 2025-12-05 01:40:30.703018978 +0000 UTC m=+0.170138858 container exec_died 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:40:30 compute-0 sudo[391120]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:30 compute-0 systemd[1]: libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope: Deactivated successfully.
Dec 05 01:40:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:31 compute-0 openstack_network_exporter[366555]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:40:31 compute-0 openstack_network_exporter[366555]: ERROR   01:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:40:31 compute-0 openstack_network_exporter[366555]: ERROR   01:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:40:31 compute-0 openstack_network_exporter[366555]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:40:31 compute-0 openstack_network_exporter[366555]: ERROR   01:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
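[annotation] These exporter errors are expected on a compute node: openstack-network-exporter locates OVS/OVN daemons through their control sockets (conventionally <daemon>.<pid>.ctl under /var/run/ovn and /var/run/openvswitch), and neither ovn-northd nor a standalone ovsdb-server control socket exists here — only ovn-controller runs on this host. The dpif-netdev/* appctl calls additionally require a userspace (netdev) datapath, which this node does not use. A quick check of which control sockets are actually present, with the conventional paths as an assumption:

    import glob

    # Control sockets are conventionally named <daemon>.<pid>.ctl; both
    # directories below are assumptions about this host's layout.
    for pattern in ("/var/run/ovn/*.ctl", "/var/run/openvswitch/*.ctl"):
        print(pattern, "->", glob.glob(pattern))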
Dec 05 01:40:31 compute-0 sudo[391303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajmytmuldetitymfodocguluowlvafbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898831.0856867-585-188999689524833/AnsiballZ_file.py'
Dec 05 01:40:31 compute-0 sudo[391303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:31 compute-0 python3.9[391305]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:31 compute-0 sudo[391303]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:32 compute-0 ceph-mon[192914]: pgmap v927: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:32 compute-0 sudo[391455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcrlxzpouwrlerqmubxkvqhpfednmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898832.200737-594-100323006672032/AnsiballZ_podman_container_info.py'
Dec 05 01:40:32 compute-0 sudo[391455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:32 compute-0 python3.9[391457]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 05 01:40:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:33 compute-0 sudo[391455]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:33 compute-0 sudo[391619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-damgkplsllvpsbezuprlopdgqyxslqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898833.4198086-602-147459143113898/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:33 compute-0 sudo[391619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:34 compute-0 ceph-mon[192914]: pgmap v928: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:34 compute-0 python3.9[391621]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:34 compute-0 systemd[1]: Started libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope.
Dec 05 01:40:34 compute-0 podman[391622]: 2025-12-05 01:40:34.282369926 +0000 UTC m=+0.122392045 container exec b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:40:34 compute-0 podman[391622]: 2025-12-05 01:40:34.315620781 +0000 UTC m=+0.155642910 container exec_died b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:40:34 compute-0 systemd[1]: libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:40:34 compute-0 sudo[391619]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:35 compute-0 sudo[391799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyetzbcbdlobjbseazvxdfukphufzkye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898834.6542022-610-272347892266891/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:35 compute-0 sudo[391799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:35 compute-0 python3.9[391801]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:35 compute-0 systemd[1]: Started libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope.
Dec 05 01:40:35 compute-0 podman[391802]: 2025-12-05 01:40:35.585753575 +0000 UTC m=+0.128310702 container exec b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:40:35 compute-0 podman[391802]: 2025-12-05 01:40:35.620648587 +0000 UTC m=+0.163205714 container exec_died b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:40:35 compute-0 systemd[1]: libpod-conmon-b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc.scope: Deactivated successfully.
Dec 05 01:40:35 compute-0 sudo[391799]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:36 compute-0 ceph-mon[192914]: pgmap v929: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:37 compute-0 sudo[391981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pldrhytfqkvwimgmvovwwuuluzzosdzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898836.6454048-618-32869886628237/AnsiballZ_file.py'
Dec 05 01:40:37 compute-0 sudo[391981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:37 compute-0 python3.9[391983]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:37 compute-0 sudo[391981]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:37 compute-0 podman[392004]: 2025-12-05 01:40:37.714857381 +0000 UTC m=+0.120080130 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 05 01:40:37 compute-0 podman[392008]: 2025-12-05 01:40:37.723472003 +0000 UTC m=+0.126455469 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:40:38 compute-0 ceph-mon[192914]: pgmap v930: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.312 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.313 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:40:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:40:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:39 compute-0 sudo[392173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chqczcnqqnfvkoqffeuekielrfqiirbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898837.8002286-627-50796454107265/AnsiballZ_podman_container_info.py'
Dec 05 01:40:39 compute-0 sudo[392173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:39 compute-0 python3.9[392175]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 05 01:40:39 compute-0 sudo[392173]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:40 compute-0 ceph-mon[192914]: pgmap v931: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:40 compute-0 sudo[392353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iibrhlnumvmlhzrpqwnosnvkapxpltbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898839.8135326-635-45069911286546/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:40 compute-0 sudo[392353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:40 compute-0 podman[392313]: 2025-12-05 01:40:40.458597862 +0000 UTC m=+0.140063462 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec 05 01:40:40 compute-0 python3.9[392360]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:40 compute-0 systemd[1]: Started libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope.
Dec 05 01:40:40 compute-0 podman[392361]: 2025-12-05 01:40:40.844374919 +0000 UTC m=+0.146011790 container exec fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm)
Dec 05 01:40:40 compute-0 podman[392361]: 2025-12-05 01:40:40.88138553 +0000 UTC m=+0.183022391 container exec_died fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:40:40 compute-0 sudo[392353]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:40 compute-0 systemd[1]: libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:40:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:41 compute-0 podman[392488]: 2025-12-05 01:40:41.670852546 +0000 UTC m=+0.080411814 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 05 01:40:41 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-561a54df6cf0a7cf.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 01:40:41 compute-0 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-561a54df6cf0a7cf.service: Failed with result 'exit-code'.
Dec 05 01:40:41 compute-0 sudo[392554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxnjkkucnnztfjswektrjzotxyttllj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898841.2467215-643-280811932087577/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:41 compute-0 sudo[392554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:42 compute-0 python3.9[392556]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:42 compute-0 systemd[1]: Started libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope.
Dec 05 01:40:42 compute-0 podman[392557]: 2025-12-05 01:40:42.156174513 +0000 UTC m=+0.116569171 container exec fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 05 01:40:42 compute-0 podman[392557]: 2025-12-05 01:40:42.192004582 +0000 UTC m=+0.152399210 container exec_died fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350)
Dec 05 01:40:42 compute-0 ceph-mon[192914]: pgmap v932: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:42 compute-0 systemd[1]: libpod-conmon-fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4.scope: Deactivated successfully.
Dec 05 01:40:42 compute-0 sudo[392554]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:43 compute-0 sudo[392735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekuvmeseayzfohcfwgfjyqiwyqmybdsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898842.5368586-651-110452891465638/AnsiballZ_file.py'
Dec 05 01:40:43 compute-0 sudo[392735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:43 compute-0 python3.9[392737]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:43 compute-0 sudo[392735]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:43 compute-0 podman[392762]: 2025-12-05 01:40:43.724674643 +0000 UTC m=+0.128185268 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Dec 05 01:40:44 compute-0 ceph-mon[192914]: pgmap v933: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:44 compute-0 sudo[392906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwixzivfdzuvivngmtudglvzyfcgmjyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898843.7165768-660-184317855552362/AnsiballZ_podman_container_info.py'
Dec 05 01:40:44 compute-0 sudo[392906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:44 compute-0 python3.9[392908]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 05 01:40:44 compute-0 sudo[392906]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:45 compute-0 sudo[393070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfbbtyinihnnkyavusxjwtrselvoxnbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898845.0635831-668-205420844985418/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:45 compute-0 sudo[393070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:45 compute-0 python3.9[393072]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:46 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:40:46 compute-0 podman[393073]: 2025-12-05 01:40:46.050864274 +0000 UTC m=+0.146478083 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 01:40:46 compute-0 podman[393073]: 2025-12-05 01:40:46.087179176 +0000 UTC m=+0.182792925 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 01:40:46 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:40:46 compute-0 sudo[393070]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:46 compute-0 ceph-mon[192914]: pgmap v934: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:40:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:40:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:47 compute-0 sudo[393252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xofdpdazxigonrnfxntrayjnlrubxheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898846.4897146-676-23263373151487/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:47 compute-0 sudo[393252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:47 compute-0 python3.9[393254]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:47 compute-0 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec 05 01:40:47 compute-0 podman[393255]: 2025-12-05 01:40:47.534466575 +0000 UTC m=+0.166119156 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:40:47 compute-0 podman[393255]: 2025-12-05 01:40:47.569689646 +0000 UTC m=+0.201342207 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 01:40:47 compute-0 sudo[393252]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:47 compute-0 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec 05 01:40:48 compute-0 ceph-mon[192914]: pgmap v935: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:48 compute-0 sudo[393434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zstiyujcuulqhlmeakaklgcuvucseezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898847.9960895-684-200515941804783/AnsiballZ_file.py'
Dec 05 01:40:48 compute-0 sudo[393434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:48 compute-0 python3.9[393436]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:48 compute-0 sudo[393434]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:50 compute-0 ceph-mon[192914]: pgmap v936: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:50 compute-0 sudo[393587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eptsiyednaccuvipuepcwrpdwuqqioyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898849.8548677-693-223567047133540/AnsiballZ_podman_container_info.py'
Dec 05 01:40:50 compute-0 sudo[393587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:50 compute-0 python3.9[393589]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 05 01:40:50 compute-0 sudo[393587]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:52 compute-0 ceph-mon[192914]: pgmap v937: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:52 compute-0 sudo[393779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhoannknsfowgnjwrbrnxmlxlyopjgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898851.1428025-701-50583560389854/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:52 compute-0 sudo[393779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:52 compute-0 podman[393726]: 2025-12-05 01:40:52.322781813 +0000 UTC m=+0.101684952 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:40:52 compute-0 podman[393727]: 2025-12-05 01:40:52.343863217 +0000 UTC m=+0.123831456 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:40:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:52 compute-0 python3.9[393788]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:52 compute-0 systemd[1]: Started libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope.
Dec 05 01:40:52 compute-0 podman[393797]: 2025-12-05 01:40:52.69541048 +0000 UTC m=+0.136732459 container exec 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 01:40:52 compute-0 podman[393797]: 2025-12-05 01:40:52.734267823 +0000 UTC m=+0.175589732 container exec_died 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30)
Dec 05 01:40:52 compute-0 systemd[1]: libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Deactivated successfully.
Dec 05 01:40:52 compute-0 sudo[393779]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:53 compute-0 rsyslogd[188644]: imjournal: 3625 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 05 01:40:53 compute-0 sudo[394005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swqkfbyyhthqfostzlimzlwzcygoaujb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898853.093434-709-186037521144772/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:53 compute-0 sudo[394005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:53 compute-0 podman[393951]: 2025-12-05 01:40:53.617802477 +0000 UTC m=+0.111547420 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:40:53 compute-0 podman[393952]: 2025-12-05 01:40:53.629198398 +0000 UTC m=+0.117771196 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec 05 01:40:53 compute-0 python3.9[394010]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:53 compute-0 systemd[1]: Started libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope.
Dec 05 01:40:53 compute-0 podman[394022]: 2025-12-05 01:40:53.988072147 +0000 UTC m=+0.157107082 container exec 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec 05 01:40:54 compute-0 podman[394022]: 2025-12-05 01:40:54.023498194 +0000 UTC m=+0.192533069 container exec_died 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:40:54 compute-0 systemd[1]: libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Deactivated successfully.
Dec 05 01:40:54 compute-0 sudo[394005]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:54 compute-0 ceph-mon[192914]: pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:55 compute-0 sudo[394199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eremcvklrsomhpazkensdiocwaknkisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898854.4331803-717-214975931062245/AnsiballZ_file.py'
Dec 05 01:40:55 compute-0 sudo[394199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:55 compute-0 python3.9[394201]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:40:55 compute-0 sudo[394199]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.168 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:40:56 compute-0 ceph-mon[192914]: pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:56 compute-0 sudo[394351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblfkjkuyfghbllmscpfiagqawnkumjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898855.6457484-726-7971634898642/AnsiballZ_podman_container_info.py'
Dec 05 01:40:56 compute-0 sudo[394351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:56 compute-0 python3.9[394353]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 05 01:40:56 compute-0 sudo[394351]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:40:57 compute-0 sudo[394516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsaeeoschmmpdvefnzogpihktqnnaqmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898857.114241-734-102120537395086/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:57 compute-0 sudo[394516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:57 compute-0 python3.9[394518]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:58 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec 05 01:40:58 compute-0 podman[394519]: 2025-12-05 01:40:58.098991273 +0000 UTC m=+0.163612015 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:40:58 compute-0 podman[394519]: 2025-12-05 01:40:58.114250152 +0000 UTC m=+0.178870924 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 01:40:58 compute-0 sudo[394516]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:58 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec 05 01:40:58 compute-0 ceph-mon[192914]: pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:40:59 compute-0 sudo[394700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spiwfernedhfftnoiewdvporsccuirgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898858.5211084-742-167008206290867/AnsiballZ_podman_container_exec.py'
Dec 05 01:40:59 compute-0 sudo[394700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:40:59 compute-0 python3.9[394702]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:40:59 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec 05 01:40:59 compute-0 podman[394703]: 2025-12-05 01:40:59.544808389 +0000 UTC m=+0.143146669 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:40:59 compute-0 podman[394703]: 2025-12-05 01:40:59.58071066 +0000 UTC m=+0.179048940 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:40:59 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec 05 01:40:59 compute-0 sudo[394700]: pam_unix(sudo:session): session closed for user root
Dec 05 01:40:59 compute-0 podman[158197]: time="2025-12-05T01:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:40:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Dec 05 01:40:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8088 "" "Go-http-client/1.1"
Dec 05 01:41:00 compute-0 ceph-mon[192914]: pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:00 compute-0 sudo[394882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpxgjbuoupciilylsxvsvprevctyqarx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898860.1190946-750-196409152975554/AnsiballZ_file.py'
Dec 05 01:41:00 compute-0 sudo[394882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:00 compute-0 python3.9[394884]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:00 compute-0 sudo[394882]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:41:01 compute-0 sudo[395034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qswbngmrtslshhjcthvajztdxymidnqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898861.258396-759-59298659407090/AnsiballZ_podman_container_info.py'
Dec 05 01:41:01 compute-0 sudo[395034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:02 compute-0 python3.9[395036]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 05 01:41:02 compute-0 sudo[395034]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:02 compute-0 ceph-mon[192914]: pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:02 compute-0 sudo[395198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpwcimxgxpcbcemfvakwtwbvinwczvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898862.4555254-767-44732141992822/AnsiballZ_podman_container_exec.py'
Dec 05 01:41:02 compute-0 sudo[395198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:03 compute-0 python3.9[395200]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:41:03 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec 05 01:41:03 compute-0 podman[395201]: 2025-12-05 01:41:03.377317 +0000 UTC m=+0.160845497 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:41:03 compute-0 podman[395201]: 2025-12-05 01:41:03.410470303 +0000 UTC m=+0.193998760 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 05 01:41:03 compute-0 sudo[395198]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:03 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec 05 01:41:04 compute-0 sudo[395379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpdtmhehzvrqnwenqnrdezlhalrbuxhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898863.7371633-775-181208306890122/AnsiballZ_podman_container_exec.py'
Dec 05 01:41:04 compute-0 sudo[395379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:04 compute-0 ceph-mon[192914]: pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:04 compute-0 python3.9[395381]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 05 01:41:04 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec 05 01:41:04 compute-0 podman[395382]: 2025-12-05 01:41:04.688169529 +0000 UTC m=+0.141120692 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 01:41:04 compute-0 podman[395382]: 2025-12-05 01:41:04.72584758 +0000 UTC m=+0.178798793 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_managed=true)
Dec 05 01:41:04 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec 05 01:41:04 compute-0 sudo[395379]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:05 compute-0 sudo[395560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpbojolqlhzdehmzpevblqfxvsmvuwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898865.0592952-783-255064689676256/AnsiballZ_file.py'
Dec 05 01:41:05 compute-0 sudo[395560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:05 compute-0 python3.9[395562]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:05 compute-0 sudo[395560]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:06 compute-0 ceph-mon[192914]: pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:06 compute-0 sudo[395712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imturjvvqtovnmbfegknoklyirrorjed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898866.2490408-792-149404641102431/AnsiballZ_file.py'
Dec 05 01:41:06 compute-0 sudo[395712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:07 compute-0 python3.9[395714]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:07 compute-0 sudo[395712]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:07 compute-0 sudo[395760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:07 compute-0 sudo[395760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:07 compute-0 sudo[395760]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:07 compute-0 sudo[395814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:41:07 compute-0 sudo[395814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:07 compute-0 sudo[395814]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:07 compute-0 sudo[395864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:07 compute-0 sudo[395864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:07 compute-0 sudo[395864]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:07 compute-0 podman[395889]: 2025-12-05 01:41:07.907575777 +0000 UTC m=+0.096755894 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:41:07 compute-0 podman[395894]: 2025-12-05 01:41:07.91585422 +0000 UTC m=+0.092683729 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:41:07 compute-0 sudo[395920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:41:07 compute-0 sudo[395920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:07 compute-0 sudo[396002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwxkranxleixwwkitnzpdfnicikzsngx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898867.4196064-800-69052689060097/AnsiballZ_stat.py'
Dec 05 01:41:07 compute-0 sudo[396002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:08 compute-0 python3.9[396004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:08 compute-0 sudo[396002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:08 compute-0 ceph-mon[192914]: pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:08 compute-0 sudo[395920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dfaa656d-3482-4fb2-af8b-65e807081585 does not exist
Dec 05 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 243a0e70-f3aa-4e34-b39d-06927a047ecd does not exist
Dec 05 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a8bb38bf-ddc5-4b32-8485-577ea62b8e69 does not exist
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:41:08 compute-0 sudo[396110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzocdgksgmkdedvjkyxglczhmyitdlws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898867.4196064-800-69052689060097/AnsiballZ_file.py'
Dec 05 01:41:08 compute-0 sudo[396110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:08 compute-0 sudo[396111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:08 compute-0 sudo[396111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:08 compute-0 sudo[396111]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:08 compute-0 python3.9[396116]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:08 compute-0 sudo[396138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:41:08 compute-0 sudo[396138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:08 compute-0 sudo[396110]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:08 compute-0 sudo[396138]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:08 compute-0 sudo[396163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:08 compute-0 sudo[396163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:08 compute-0 sudo[396163]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:09 compute-0 sudo[396212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:41:09 compute-0 sudo[396212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.662457193 +0000 UTC m=+0.075913628 container create 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.626983374 +0000 UTC m=+0.040439889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:09 compute-0 systemd[1]: Started libpod-conmon-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope.
Dec 05 01:41:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.790607159 +0000 UTC m=+0.204063644 container init 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.80876012 +0000 UTC m=+0.222216545 container start 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.814660756 +0000 UTC m=+0.228117251 container attach 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:41:09 compute-0 wizardly_dubinsky[396384]: 167 167
Dec 05 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.819547274 +0000 UTC m=+0.233003729 container died 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:41:09 compute-0 systemd[1]: libpod-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope: Deactivated successfully.
Dec 05 01:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa4bad5266519ae746d0d50a7d39137d3fc9f8714ca4eb387c6d0eaa14fc35ba-merged.mount: Deactivated successfully.
Dec 05 01:41:09 compute-0 sudo[396433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjfrhshdbfhgmkikfvqpiekqalvjdux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898869.401382-813-207010694001587/AnsiballZ_file.py'
Dec 05 01:41:09 compute-0 sudo[396433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:10 compute-0 podman[396332]: 2025-12-05 01:41:10.014367626 +0000 UTC m=+0.427824041 container remove 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:41:10 compute-0 systemd[1]: libpod-conmon-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope: Deactivated successfully.
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.105 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:41:10 compute-0 python3.9[396435]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:10 compute-0 sudo[396433]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.29562596 +0000 UTC m=+0.089581572 container create fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.257190858 +0000 UTC m=+0.051146500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:10 compute-0 systemd[1]: Started libpod-conmon-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope.
Dec 05 01:41:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.450418986 +0000 UTC m=+0.244374658 container init fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.4772084 +0000 UTC m=+0.271164002 container start fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.484396572 +0000 UTC m=+0.278352164 container attach fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:41:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:41:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4024983090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.605 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:41:10 compute-0 ceph-mon[192914]: pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4024983090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:41:10 compute-0 podman[396560]: 2025-12-05 01:41:10.705565726 +0000 UTC m=+0.120285306 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 05 01:41:10 compute-0 sudo[396655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npktqraoncscbcvawkaeukfkkjhwhmrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898870.440232-821-77908449439946/AnsiballZ_stat.py'
Dec 05 01:41:10 compute-0 sudo[396655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.085 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.088 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.088 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.089 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:41:11 compute-0 python3.9[396657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.154 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.155 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.176 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:41:11 compute-0 sudo[396655]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:11 compute-0 sudo[396770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbtcoavwbzfyajibssnpnsenslhhyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898870.440232-821-77908449439946/AnsiballZ_file.py'
Dec 05 01:41:11 compute-0 sudo[396770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:41:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946123790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.706 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
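
As the CMD line shows, the libvirt driver sizes its RBD storage by shelling out to ceph df. The same query can be reproduced outside nova with the exact flags logged above; note the key names under "stats" are an assumption about this Ceph release's JSON layout:

    import json
    import subprocess

    # Same invocation nova logged above; needs the client.openstack
    # keyring and /etc/ceph/ceph.conf to be readable.
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]  # assumed key layout
    print(stats["total_bytes"], stats["total_avail_bytes"])
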
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.721 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:41:11 compute-0 focused_davinci[396503]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:41:11 compute-0 focused_davinci[396503]: --> relative data size: 1.0
Dec 05 01:41:11 compute-0 focused_davinci[396503]: --> All data devices are unavailable
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.743 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
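
The inventory line reports raw totals plus reserved amounts and allocation ratios; placement derives schedulable capacity from them. A worked sketch of that arithmetic, assuming placement's usual formula (total - reserved) * allocation_ratio:

    # Inventory exactly as logged above (min_unit/max_unit/step_size
    # omitted for brevity).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 53.1
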
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.747 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.748 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
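
The acquire/release pair around the update (held 0.659s) comes from oslo.concurrency's named locks. Roughly how such a guard is declared, as an illustration only; nova's actual decorator wraps this differently:

    from oslo_concurrency import lockutils

    # Illustrative sketch: serialize a critical section under the same
    # lock name seen in the log lines above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # recompute inventory while no other thread holds the lock
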
Dec 05 01:41:11 compute-0 systemd[1]: libpod-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Deactivated successfully.
Dec 05 01:41:11 compute-0 podman[396444]: 2025-12-05 01:41:11.77778342 +0000 UTC m=+1.571739052 container died fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:41:11 compute-0 systemd[1]: libpod-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Consumed 1.218s CPU time.
Dec 05 01:41:11 compute-0 python3.9[396773]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
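
The ansible-ansible.legacy.file task above only enforces metadata on the already-copied rules file (state=file, force=False, owner/group unset). Its essential effect, expressed directly:

    import os

    # Equivalent effect of the ansible file task logged above: keep the
    # rendered base rules at mode 0644 (path taken from the log line).
    os.chmod("/var/lib/edpm-config/firewall/edpm-nftables-base.yaml", 0o644)
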
Dec 05 01:41:11 compute-0 sudo[396770]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043-merged.mount: Deactivated successfully.
Dec 05 01:41:11 compute-0 podman[396444]: 2025-12-05 01:41:11.891338666 +0000 UTC m=+1.685294268 container remove fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:41:11 compute-0 systemd[1]: libpod-conmon-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Deactivated successfully.
Dec 05 01:41:11 compute-0 sudo[396212]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:11 compute-0 podman[396783]: 2025-12-05 01:41:11.948576646 +0000 UTC m=+0.119060961 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
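
The health_status event above is podman's periodic healthcheck for ceilometer_agent_ipmi (test command /openstack/healthcheck ipmi, per the config_data). The same check can be triggered on demand:

    import subprocess

    # Run the container's configured healthcheck once; exit status 0
    # corresponds to the "healthy" status logged above.
    subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        check=True)
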
Dec 05 01:41:12 compute-0 sudo[396838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:12 compute-0 sudo[396838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:12 compute-0 sudo[396838]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:12 compute-0 sudo[396881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:41:12 compute-0 sudo[396881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:12 compute-0 sudo[396881]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:12 compute-0 sudo[396937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:12 compute-0 sudo[396937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:12 compute-0 sudo[396937]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:12 compute-0 sudo[396988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:41:12 compute-0 sudo[396988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:12 compute-0 sudo[397065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjfwusiswzduhmtxpkfzmqfceyusrllm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898872.0669343-833-41549021946026/AnsiballZ_stat.py'
Dec 05 01:41:12 compute-0 sudo[397065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:12 compute-0 ceph-mon[192914]: pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2946123790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.746 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.772 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.773 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:12 compute-0 python3.9[397075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:12 compute-0 sudo[397065]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:12 compute-0 podman[397105]: 2025-12-05 01:41:12.995405386 +0000 UTC m=+0.101693993 container create b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:12.957707375 +0000 UTC m=+0.063996052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:13 compute-0 nova_compute[349548]: 2025-12-05 01:41:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:13 compute-0 nova_compute[349548]: 2025-12-05 01:41:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:13 compute-0 systemd[1]: Started libpod-conmon-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope.
Dec 05 01:41:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.152953979 +0000 UTC m=+0.259242606 container init b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.173646101 +0000 UTC m=+0.279934698 container start b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.180426482 +0000 UTC m=+0.286715109 container attach b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:41:13 compute-0 recursing_margulis[397157]: 167 167
Dec 05 01:41:13 compute-0 systemd[1]: libpod-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope: Deactivated successfully.
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.182437399 +0000 UTC m=+0.288725996 container died b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5440e32f38bc00d345c25ed15a65c1bec27aaa1cf44ead532bf87162c45f25a-merged.mount: Deactivated successfully.
Dec 05 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.256859063 +0000 UTC m=+0.363147690 container remove b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:41:13 compute-0 systemd[1]: libpod-conmon-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope: Deactivated successfully.
Dec 05 01:41:13 compute-0 sudo[397211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opaushqvaqzjmsboycojxemysxkykurg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898872.0669343-833-41549021946026/AnsiballZ_file.py'
Dec 05 01:41:13 compute-0 sudo[397211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.541674638 +0000 UTC m=+0.091066033 container create a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:13 compute-0 python3.9[397213]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.upaytczi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:13 compute-0 sudo[397211]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.504805601 +0000 UTC m=+0.054197036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:13 compute-0 systemd[1]: Started libpod-conmon-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope.
Dec 05 01:41:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.709216762 +0000 UTC m=+0.258608187 container init a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.733330401 +0000 UTC m=+0.282721776 container start a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.739030131 +0000 UTC m=+0.288421586 container attach a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:41:14 compute-0 magical_saha[397240]: {
Dec 05 01:41:14 compute-0 magical_saha[397240]:     "0": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:         {
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "devices": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "/dev/loop3"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             ],
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_name": "ceph_lv0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_size": "21470642176",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "name": "ceph_lv0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "tags": {
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_name": "ceph",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.crush_device_class": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.encrypted": "0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_id": "0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.vdo": "0"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             },
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "vg_name": "ceph_vg0"
Dec 05 01:41:14 compute-0 magical_saha[397240]:         }
Dec 05 01:41:14 compute-0 magical_saha[397240]:     ],
Dec 05 01:41:14 compute-0 magical_saha[397240]:     "1": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:         {
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "devices": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "/dev/loop4"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             ],
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_name": "ceph_lv1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_size": "21470642176",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "name": "ceph_lv1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "tags": {
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_name": "ceph",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.crush_device_class": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.encrypted": "0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_id": "1",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.vdo": "0"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             },
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "vg_name": "ceph_vg1"
Dec 05 01:41:14 compute-0 magical_saha[397240]:         }
Dec 05 01:41:14 compute-0 magical_saha[397240]:     ],
Dec 05 01:41:14 compute-0 magical_saha[397240]:     "2": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:         {
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "devices": [
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "/dev/loop5"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             ],
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_name": "ceph_lv2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_size": "21470642176",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "name": "ceph_lv2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "tags": {
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.cluster_name": "ceph",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.crush_device_class": "",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.encrypted": "0",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osd_id": "2",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:                 "ceph.vdo": "0"
Dec 05 01:41:14 compute-0 magical_saha[397240]:             },
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "type": "block",
Dec 05 01:41:14 compute-0 magical_saha[397240]:             "vg_name": "ceph_vg2"
Dec 05 01:41:14 compute-0 magical_saha[397240]:         }
Dec 05 01:41:14 compute-0 magical_saha[397240]:     ]
Dec 05 01:41:14 compute-0 magical_saha[397240]: }
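
The JSON emitted by magical_saha is the ceph-volume lvm list output requested by the cephadm command at 01:41:12; each lv_size of 21470642176 bytes is roughly 20 GiB, and the three LVs together account for the 60 GiB total seen in the pgmap lines. A sketch that re-issues the query and reduces it to an OSD-to-device summary, assuming cephadm is on PATH, run as root, with the fsid from the log:

    import json
    import subprocess

    # Re-issue the query from the sudo line at 01:41:12 (cephadm wraps
    # ceph-volume in a container, as the podman events above show).
    out = subprocess.check_output(
        ["cephadm", "ceph-volume",
         "--fsid", "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
         "--", "lvm", "list", "--format", "json"])

    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")
    # From the listing above:
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 osd_fsid=8c4de221-...
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 osd_fsid=944e6457-...
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5 osd_fsid=adfceb0a-...
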
Dec 05 01:41:14 compute-0 podman[397219]: 2025-12-05 01:41:14.63659437 +0000 UTC m=+1.185985765 container died a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:41:14 compute-0 systemd[1]: libpod-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope: Deactivated successfully.
Dec 05 01:41:14 compute-0 ceph-mon[192914]: pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b-merged.mount: Deactivated successfully.
Dec 05 01:41:14 compute-0 podman[397321]: 2025-12-05 01:41:14.707504015 +0000 UTC m=+0.104196133 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm)
Dec 05 01:41:14 compute-0 podman[397219]: 2025-12-05 01:41:14.72755552 +0000 UTC m=+1.276946895 container remove a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:14 compute-0 systemd[1]: libpod-conmon-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope: Deactivated successfully.
Dec 05 01:41:14 compute-0 sudo[396988]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:14 compute-0 sudo[397439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfivqxgsrxzcrasszpqlqadsywavyaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898874.4377031-845-139867466299218/AnsiballZ_stat.py'
Dec 05 01:41:14 compute-0 sudo[397439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:14 compute-0 sudo[397411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:14 compute-0 sudo[397411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:14 compute-0 sudo[397411]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:14 compute-0 sudo[397454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:41:14 compute-0 sudo[397454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:14 compute-0 sudo[397454]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:15 compute-0 python3.9[397451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:15 compute-0 sudo[397439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:15 compute-0 sudo[397479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:15 compute-0 sudo[397479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:15 compute-0 sudo[397479]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:15 compute-0 sudo[397506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:41:15 compute-0 sudo[397506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:15 compute-0 sudo[397625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojejfeaxtpojvdzykgjzyotowiwvpnwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898874.4377031-845-139867466299218/AnsiballZ_file.py'
Dec 05 01:41:15 compute-0 sudo[397625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:15 compute-0 python3.9[397629]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:15 compute-0 sudo[397625]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.809941259 +0000 UTC m=+0.092610467 container create bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.779557814 +0000 UTC m=+0.062227082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:15 compute-0 systemd[1]: Started libpod-conmon-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope.
Dec 05 01:41:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.938572249 +0000 UTC m=+0.221241527 container init bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.952851671 +0000 UTC m=+0.235520899 container start bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:41:15 compute-0 hopeful_dhawan[397686]: 167 167
Dec 05 01:41:15 compute-0 systemd[1]: libpod-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope: Deactivated successfully.
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.963748718 +0000 UTC m=+0.246418016 container attach bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.964329034 +0000 UTC m=+0.246998262 container died bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 01:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3830910a0eaf9fdd6d6958c45a785bc938f3be23dddbcdc9f202731da59ccfc-merged.mount: Deactivated successfully.
Dec 05 01:41:16 compute-0 podman[397647]: 2025-12-05 01:41:16.033944213 +0000 UTC m=+0.316613411 container remove bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:41:16 compute-0 systemd[1]: libpod-conmon-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope: Deactivated successfully.
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:41:16
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr']
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
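
The balancer run above (mode upmap, max misplaced 0.05) prepared no changes because all 321 PGs are already active+clean and evenly placed. The current balancer state can be queried directly; the exact JSON key names here are an assumption:

    import json
    import subprocess

    # Ask the mgr for the balancer state matching the log lines above;
    # requires a keyring with mgr command access.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]))
    print(status["active"], status["mode"])
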
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.272289871 +0000 UTC m=+0.087754451 container create 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.241033031 +0000 UTC m=+0.056497661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:41:16 compute-0 systemd[1]: Started libpod-conmon-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope.
Dec 05 01:41:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.450803964 +0000 UTC m=+0.266268544 container init 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.475704975 +0000 UTC m=+0.291169565 container start 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.48476686 +0000 UTC m=+0.300231500 container attach 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:41:16 compute-0 ceph-mon[192914]: pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:17 compute-0 sudo[397871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yayzfoqrnokglfrxedqejvkaartbzuci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898876.0965421-858-276918112824528/AnsiballZ_command.py'
Dec 05 01:41:17 compute-0 sudo[397871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:17 compute-0 python3.9[397876]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:41:17 compute-0 bold_ride[397802]: {
Dec 05 01:41:17 compute-0 bold_ride[397802]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_id": 0,
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "type": "bluestore"
Dec 05 01:41:17 compute-0 bold_ride[397802]:     },
Dec 05 01:41:17 compute-0 bold_ride[397802]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_id": 1,
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "type": "bluestore"
Dec 05 01:41:17 compute-0 bold_ride[397802]:     },
Dec 05 01:41:17 compute-0 bold_ride[397802]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_id": 2,
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:41:17 compute-0 bold_ride[397802]:         "type": "bluestore"
Dec 05 01:41:17 compute-0 bold_ride[397802]:     }
Dec 05 01:41:17 compute-0 bold_ride[397802]: }
Dec 05 01:41:17 compute-0 sudo[397871]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:17 compute-0 systemd[1]: libpod-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Deactivated successfully.
Dec 05 01:41:17 compute-0 podman[397755]: 2025-12-05 01:41:17.668939753 +0000 UTC m=+1.484404293 container died 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:41:17 compute-0 systemd[1]: libpod-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Consumed 1.196s CPU time.
Dec 05 01:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a-merged.mount: Deactivated successfully.
Dec 05 01:41:17 compute-0 podman[397755]: 2025-12-05 01:41:17.766868349 +0000 UTC m=+1.582332889 container remove 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:41:17 compute-0 systemd[1]: libpod-conmon-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Deactivated successfully.
Dec 05 01:41:17 compute-0 sudo[397506]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:41:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:41:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 552e0275-a09b-4f59-a750-a2a8dd30cfae does not exist
Dec 05 01:41:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fc8f7be7-f599-45e2-a321-364ef61b2a27 does not exist
Dec 05 01:41:17 compute-0 sudo[397923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:41:17 compute-0 sudo[397923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:17 compute-0 sudo[397923]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:18 compute-0 sudo[397968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:41:18 compute-0 sudo[397968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:41:18 compute-0 sudo[397968]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:18 compute-0 sudo[398098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkkttnbihlkbljkcdutqeodtsmqhzcgg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764898878.0164485-866-10195530799971/AnsiballZ_edpm_nftables_from_files.py'
Dec 05 01:41:18 compute-0 sudo[398098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:18 compute-0 ceph-mon[192914]: pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:41:19 compute-0 python3[398100]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 05 01:41:19 compute-0 sudo[398098]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:19 compute-0 sudo[398251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybmvzvitsqyhuijajfrxxlepugickvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898879.3903403-874-138757538544639/AnsiballZ_stat.py'
Dec 05 01:41:19 compute-0 sudo[398251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:20 compute-0 python3.9[398253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:20 compute-0 sudo[398251]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:20 compute-0 sudo[398329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqwkpciggdzqruwrmyeaxijpsphzmjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898879.3903403-874-138757538544639/AnsiballZ_file.py'
Dec 05 01:41:20 compute-0 sudo[398329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:20 compute-0 ceph-mon[192914]: pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:20 compute-0 python3.9[398331]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:20 compute-0 sudo[398329]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:21 compute-0 sudo[398481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pliydumffhfcwmisjtzzblepcemokdvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898881.172765-886-162381299131721/AnsiballZ_stat.py'
Dec 05 01:41:21 compute-0 sudo[398481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:22 compute-0 python3.9[398483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:22 compute-0 sudo[398481]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:22 compute-0 sudo[398592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclahafvdlhgniinnqphkjavuqqsvehx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898881.172765-886-162381299131721/AnsiballZ_file.py'
Dec 05 01:41:22 compute-0 sudo[398592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:22 compute-0 podman[398533]: 2025-12-05 01:41:22.602825669 +0000 UTC m=+0.146667929 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 01:41:22 compute-0 podman[398534]: 2025-12-05 01:41:22.640077537 +0000 UTC m=+0.182640451 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 05 01:41:22 compute-0 python3.9[398600]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:22 compute-0 sudo[398592]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:22 compute-0 ceph-mon[192914]: pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:23 compute-0 sudo[398756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnhdbjteehgjruescchhlzozrzyauphw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898883.0979543-898-2959266490297/AnsiballZ_stat.py'
Dec 05 01:41:23 compute-0 sudo[398756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:23 compute-0 podman[398758]: 2025-12-05 01:41:23.83947618 +0000 UTC m=+0.122360895 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:41:23 compute-0 podman[398759]: 2025-12-05 01:41:23.874519756 +0000 UTC m=+0.152369089 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 01:41:23 compute-0 python3.9[398760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:23 compute-0 sudo[398756]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:24 compute-0 sudo[398873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjllaifjszeiojmzjwzrgjaswjouwkgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898883.0979543-898-2959266490297/AnsiballZ_file.py'
Dec 05 01:41:24 compute-0 sudo[398873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:24 compute-0 python3.9[398875]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:24 compute-0 sudo[398873]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:24 compute-0 ceph-mon[192914]: pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:25 compute-0 sudo[399025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaufamrgzannsjnnyyozcgaebndquhsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898884.992631-910-29412739424855/AnsiballZ_stat.py'
Dec 05 01:41:25 compute-0 sudo[399025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:25 compute-0 python3.9[399027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:25 compute-0 sudo[399025]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:26 compute-0 sudo[399103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pufflwmxdaqnkwrnfuabqupdmbvngnwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898884.992631-910-29412739424855/AnsiballZ_file.py'
Dec 05 01:41:26 compute-0 sudo[399103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:41:26 compute-0 python3.9[399105]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:26 compute-0 sudo[399103]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:26 compute-0 ceph-mon[192914]: pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:28 compute-0 sudo[399255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdakdzmiijvukytzzkfmsvoxcshxfwcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898886.880779-922-177348600098443/AnsiballZ_stat.py'
Dec 05 01:41:28 compute-0 sudo[399255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:28 compute-0 python3.9[399257]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:28 compute-0 sudo[399255]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:28 compute-0 sudo[399333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adxomvoffyieyullmhdlzdmsqzwzodgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898886.880779-922-177348600098443/AnsiballZ_file.py'
Dec 05 01:41:28 compute-0 sudo[399333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:28 compute-0 ceph-mon[192914]: pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:29 compute-0 python3.9[399335]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:29 compute-0 sudo[399333]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:29 compute-0 podman[158197]: time="2025-12-05T01:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:41:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:41:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8100 "" "Go-http-client/1.1"
Dec 05 01:41:30 compute-0 sudo[399485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqpnkwggxpwpjjtrgvfbdetqwcptytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898890.053569-935-225963433262539/AnsiballZ_command.py'
Dec 05 01:41:30 compute-0 sudo[399485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:30 compute-0 python3.9[399487]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:41:30 compute-0 sudo[399485]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:30 compute-0 ceph-mon[192914]: pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:41:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:41:31 compute-0 sudo[399640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvqzwqqbvxeebqcoxjckahsxshutxsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898891.1624563-943-28477340752925/AnsiballZ_blockinfile.py'
Dec 05 01:41:31 compute-0 sudo[399640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:32 compute-0 python3.9[399642]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:32 compute-0 sudo[399640]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:32 compute-0 ceph-mon[192914]: pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:32 compute-0 sudo[399792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwihhtfklogtllrdfpwfqprvxyhdppbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898892.3848093-952-25014822750168/AnsiballZ_command.py'
Dec 05 01:41:32 compute-0 sudo[399792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:33 compute-0 python3.9[399794]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:41:33 compute-0 sudo[399792]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:34 compute-0 sudo[399945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqcmsnswfwspmfdoruqtyolsbghdgtqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898893.5688033-960-257707803222478/AnsiballZ_stat.py'
Dec 05 01:41:34 compute-0 sudo[399945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:34 compute-0 python3.9[399947]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 05 01:41:34 compute-0 sudo[399945]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:34 compute-0 ceph-mon[192914]: pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:35 compute-0 sudo[400097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zumquaahrbwpyojlzltyogjygkiyknoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898894.6062832-969-145517074261188/AnsiballZ_file.py'
Dec 05 01:41:35 compute-0 sudo[400097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:35 compute-0 python3.9[400099]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:35 compute-0 sudo[400097]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:35 compute-0 sshd-session[378715]: Connection closed by 192.168.122.30 port 37678
Dec 05 01:41:35 compute-0 sshd-session[378660]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:41:35 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec 05 01:41:35 compute-0 systemd[1]: session-59.scope: Consumed 2min 6.789s CPU time.
Dec 05 01:41:35 compute-0 systemd-logind[792]: Session 59 logged out. Waiting for processes to exit.
Dec 05 01:41:35 compute-0 systemd-logind[792]: Removed session 59.
Dec 05 01:41:36 compute-0 ceph-mon[192914]: pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:38 compute-0 podman[400125]: 2025-12-05 01:41:38.730924211 +0000 UTC m=+0.126215652 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:41:38 compute-0 podman[400124]: 2025-12-05 01:41:38.748242868 +0000 UTC m=+0.147845930 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 01:41:38 compute-0 ceph-mon[192914]: pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:40 compute-0 ceph-mon[192914]: pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:41 compute-0 sshd-session[400165]: Accepted publickey for zuul from 192.168.122.30 port 57966 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 01:41:41 compute-0 systemd-logind[792]: New session 60 of user zuul.
Dec 05 01:41:41 compute-0 systemd[1]: Started Session 60 of User zuul.
Dec 05 01:41:41 compute-0 sshd-session[400165]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:41:41 compute-0 podman[400167]: 2025-12-05 01:41:41.253417747 +0000 UTC m=+0.143598852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec 05 01:41:42 compute-0 podman[400313]: 2025-12-05 01:41:42.321134573 +0000 UTC m=+0.111170208 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 05 01:41:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:42 compute-0 python3.9[400357]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:41:43 compute-0 ceph-mon[192914]: pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:44 compute-0 sudo[400514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbcimlzrenguhfjfgkbwikmqcohhyanw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898903.2768795-34-49384404484846/AnsiballZ_systemd.py'
Dec 05 01:41:44 compute-0 sudo[400514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:44 compute-0 python3.9[400516]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 05 01:41:44 compute-0 sudo[400514]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:44 compute-0 podman[400558]: 2025-12-05 01:41:44.8851907 +0000 UTC m=+0.119216406 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 01:41:45 compute-0 ceph-mon[192914]: pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:45 compute-0 sudo[400686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chhjkpzklmyzwegvntbcrsrtxgfqqyeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898904.7746468-42-116967757150666/AnsiballZ_setup.py'
Dec 05 01:41:45 compute-0 sudo[400686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:45 compute-0 python3.9[400688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 05 01:41:46 compute-0 sudo[400686]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:41:46 compute-0 sudo[400770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwsbvrpnziiapyiymbkyrafkaxjyiisl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898904.7746468-42-116967757150666/AnsiballZ_dnf.py'
Dec 05 01:41:46 compute-0 sudo[400770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:46 compute-0 python3.9[400772]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 05 01:41:47 compute-0 ceph-mon[192914]: pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:48 compute-0 sudo[400770]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:49 compute-0 ceph-mon[192914]: pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:49 compute-0 sudo[400923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhuluyoymgcfotmcthlgutxpwwvbsmqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898908.76478-54-95860926383438/AnsiballZ_stat.py'
Dec 05 01:41:49 compute-0 sudo[400923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:49 compute-0 python3.9[400926]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:49 compute-0 sudo[400923]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:50 compute-0 ceph-mon[192914]: pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:50 compute-0 sudo[401002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejgdmjhzqhjztvnrsxtsgouvukxeqxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898908.76478-54-95860926383438/AnsiballZ_file.py'
Dec 05 01:41:50 compute-0 sudo[401002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:50 compute-0 python3.9[401004]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:50 compute-0 sudo[401002]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:51 compute-0 sudo[401154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wobfkxxkbwlasutvtryjxdnbhlpamsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898910.9922092-66-265284174412040/AnsiballZ_file.py'
Dec 05 01:41:51 compute-0 sudo[401154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:51 compute-0 python3.9[401156]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:51 compute-0 sudo[401154]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:52 compute-0 ceph-mon[192914]: pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:52 compute-0 sudo[401306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkxmltmbuutwdatqbedrejgbegabvdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898912.12867-74-276923921009466/AnsiballZ_stat.py'
Dec 05 01:41:52 compute-0 sudo[401306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:52 compute-0 podman[401308]: 2025-12-05 01:41:52.900081269 +0000 UTC m=+0.128831647 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 01:41:52 compute-0 python3.9[401310]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 05 01:41:52 compute-0 podman[401309]: 2025-12-05 01:41:52.975239494 +0000 UTC m=+0.198955110 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:41:53 compute-0 sudo[401306]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:53 compute-0 sudo[401427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxpeosspaaoylyvdwymlatcxbbhpvvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764898912.12867-74-276923921009466/AnsiballZ_file.py'
Dec 05 01:41:53 compute-0 sudo[401427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:41:53 compute-0 python3.9[401429]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 05 01:41:53 compute-0 sudo[401427]: pam_unix(sudo:session): session closed for user root
Dec 05 01:41:54 compute-0 sshd-session[400176]: Connection closed by 192.168.122.30 port 57966
Dec 05 01:41:54 compute-0 sshd-session[400165]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:41:54 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Dec 05 01:41:54 compute-0 systemd[1]: session-60.scope: Consumed 10.370s CPU time.
Dec 05 01:41:54 compute-0 systemd-logind[792]: Session 60 logged out. Waiting for processes to exit.
Dec 05 01:41:54 compute-0 ceph-mon[192914]: pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:54 compute-0 systemd-logind[792]: Removed session 60.
Dec 05 01:41:54 compute-0 podman[401454]: 2025-12-05 01:41:54.305729045 +0000 UTC m=+0.137355947 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:41:54 compute-0 podman[401455]: 2025-12-05 01:41:54.31658839 +0000 UTC m=+0.141243825 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec 05 01:41:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:56 compute-0 ceph-mon[192914]: pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:41:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:41:58 compute-0 ceph-mon[192914]: pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:41:59 compute-0 podman[158197]: time="2025-12-05T01:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:41:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:41:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8097 "" "Go-http-client/1.1"
Dec 05 01:42:00 compute-0 ceph-mon[192914]: pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:42:02 compute-0 ceph-mon[192914]: pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:04 compute-0 ceph-mon[192914]: pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:06 compute-0 ceph-mon[192914]: pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:08 compute-0 ceph-mon[192914]: pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:09 compute-0 podman[401499]: 2025-12-05 01:42:09.706869832 +0000 UTC m=+0.110820020 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:42:09 compute-0 podman[401498]: 2025-12-05 01:42:09.714406434 +0000 UTC m=+0.123758344 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:42:10 compute-0 ceph-mon[192914]: pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:42:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:11 compute-0 podman[401542]: 2025-12-05 01:42:11.734389758 +0000 UTC m=+0.143987773 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.106 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:42:12 compute-0 ceph-mon[192914]: pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:42:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402253177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.623 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:42:12 compute-0 podman[401580]: 2025-12-05 01:42:12.718713068 +0000 UTC m=+0.124739642 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.034 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.036 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.036 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.037 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:42:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.184 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.185 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.250 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:42:13 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/402253177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:42:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:42:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56398953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.770 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.779 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.810 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.811 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.812 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:42:14 compute-0 ceph-mon[192914]: pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/56398953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.809 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.809 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.810 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.810 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.831 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.831 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:15 compute-0 nova_compute[349548]: 2025-12-05 01:42:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:42:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:15 compute-0 podman[401625]: 2025-12-05 01:42:15.715159001 +0000 UTC m=+0.122961031 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:42:16
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'vms']
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:16 compute-0 ceph-mon[192914]: pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:42:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:18 compute-0 sudo[401643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:18 compute-0 sudo[401643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:18 compute-0 sudo[401643]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:18 compute-0 ceph-mon[192914]: pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:18 compute-0 sudo[401668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:42:18 compute-0 sudo[401668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:18 compute-0 sudo[401668]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:18 compute-0 sudo[401693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:18 compute-0 sudo[401693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:18 compute-0 sudo[401693]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:18 compute-0 sudo[401718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:42:18 compute-0 sudo[401718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:19 compute-0 sudo[401718]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7866598c-3932-4f75-ac79-26ffe00f94a6 does not exist
Dec 05 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c137c7ac-667a-4c66-825e-3a7d36b981a6 does not exist
Dec 05 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b534d160-a796-4ea7-a72b-75e5fe0bde3e does not exist
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:42:19 compute-0 sudo[401774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:19 compute-0 sudo[401774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:19 compute-0 sudo[401774]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:19 compute-0 sudo[401799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:42:19 compute-0 sudo[401799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:19 compute-0 sudo[401799]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:20 compute-0 sudo[401824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:20 compute-0 sudo[401824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:20 compute-0 sudo[401824]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:20 compute-0 sudo[401849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:42:20 compute-0 sudo[401849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:20 compute-0 ceph-mon[192914]: pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.761773798 +0000 UTC m=+0.091771673 container create a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.726119065 +0000 UTC m=+0.056116990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:20 compute-0 systemd[1]: Started libpod-conmon-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope.
Dec 05 01:42:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.886603821 +0000 UTC m=+0.216601746 container init a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.904243658 +0000 UTC m=+0.234241533 container start a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.910043591 +0000 UTC m=+0.240041506 container attach a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:42:20 compute-0 nervous_gould[401926]: 167 167
Dec 05 01:42:20 compute-0 systemd[1]: libpod-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope: Deactivated successfully.
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.913977702 +0000 UTC m=+0.243975577 container died a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c5b53bd30d132caa076ce4767fcc429906df76029065b032663a65911f4a5d4-merged.mount: Deactivated successfully.
Dec 05 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.982244933 +0000 UTC m=+0.312242768 container remove a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:42:21 compute-0 systemd[1]: libpod-conmon-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope: Deactivated successfully.
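The bare "167 167" printed by nervous_gould above is cephadm probing the image for the ceph user's uid and gid before it touches any data directories; in the upstream ceph container both are 167. A minimal sketch of an equivalent query, assuming podman is on PATH and reusing the image digest from the log (the exact command cephadm runs internally may differ):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image who owns /var/lib/ceph; prints "167 167" for this image.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(uid_gid)  # ['167', '167']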
Dec 05 01:42:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.270581977 +0000 UTC m=+0.088551123 container create ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.243237818 +0000 UTC m=+0.061207044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:21 compute-0 systemd[1]: Started libpod-conmon-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope.
Dec 05 01:42:21 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.426042392 +0000 UTC m=+0.244011568 container init ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.448235467 +0000 UTC m=+0.266204633 container start ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.455639385 +0000 UTC m=+0.273608561 container attach ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:42:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:42:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:22 compute-0 jovial_edison[401964]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:42:22 compute-0 jovial_edison[401964]: --> relative data size: 1.0
Dec 05 01:42:22 compute-0 jovial_edison[401964]: --> All data devices are unavailable
Dec 05 01:42:22 compute-0 systemd[1]: libpod-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Deactivated successfully.
Dec 05 01:42:22 compute-0 podman[401948]: 2025-12-05 01:42:22.802675641 +0000 UTC m=+1.620644797 container died ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:42:22 compute-0 systemd[1]: libpod-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Consumed 1.299s CPU time.
Dec 05 01:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396-merged.mount: Deactivated successfully.
Dec 05 01:42:22 compute-0 podman[401948]: 2025-12-05 01:42:22.908339135 +0000 UTC m=+1.726308311 container remove ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:42:22 compute-0 systemd[1]: libpod-conmon-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Deactivated successfully.
Dec 05 01:42:22 compute-0 sudo[401849]: pam_unix(sudo:session): session closed for user root
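The create → init → start → attach → died → remove sequence around jovial_edison is how cephadm executes each ceph-volume probe: one disposable container per command, stdout captured, everything torn down afterwards. "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is the expected outcome here, since every candidate LV is already tagged to an existing OSD (the lvm list payload further below shows ceph.osd_id 0 through 2). A rough sketch of such a one-shot probe; the mounts and flags are illustrative, not the exact set cephadm passes:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a one-shot ceph-volume inventory in a throwaway container, roughly
    # what the podman lifecycle above amounts to.
    report = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host",
         "-v", "/dev:/dev", "-v", "/run/udev:/run/udev",
         "--entrypoint", "ceph-volume", IMAGE,
         "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(report)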
Dec 05 01:42:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:23 compute-0 podman[402005]: 2025-12-05 01:42:23.096670485 +0000 UTC m=+0.136306507 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 01:42:23 compute-0 sudo[402012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:23 compute-0 sudo[402012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:23 compute-0 sudo[402012]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:23 compute-0 sudo[402057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:42:23 compute-0 sudo[402057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:23 compute-0 sudo[402057]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:23 compute-0 podman[402049]: 2025-12-05 01:42:23.300788809 +0000 UTC m=+0.178399532 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 01:42:23 compute-0 sudo[402101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:23 compute-0 sudo[402101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:23 compute-0 sudo[402101]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:23 compute-0 sudo[402126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:42:23 compute-0 sudo[402126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.105120014 +0000 UTC m=+0.082050230 container create c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.076835448 +0000 UTC m=+0.053765724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:24 compute-0 systemd[1]: Started libpod-conmon-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope.
Dec 05 01:42:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.271691871 +0000 UTC m=+0.248622147 container init c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.288014271 +0000 UTC m=+0.264944467 container start c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.294783731 +0000 UTC m=+0.271714007 container attach c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:42:24 compute-0 elegant_kapitsa[402204]: 167 167
Dec 05 01:42:24 compute-0 systemd[1]: libpod-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope: Deactivated successfully.
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.301522911 +0000 UTC m=+0.278453157 container died c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d83b777ed388b4fb31dcbd1984fe3fcfd7d5c4a90c7a9ac8f8bf0428364e0c-merged.mount: Deactivated successfully.
Dec 05 01:42:24 compute-0 ceph-mon[192914]: pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.390343511 +0000 UTC m=+0.367273737 container remove c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:42:24 compute-0 systemd[1]: libpod-conmon-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope: Deactivated successfully.
Dec 05 01:42:24 compute-0 podman[402219]: 2025-12-05 01:42:24.502792155 +0000 UTC m=+0.108576667 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec 05 01:42:24 compute-0 podman[402218]: 2025-12-05 01:42:24.514644429 +0000 UTC m=+0.118651241 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.620551299 +0000 UTC m=+0.059023742 container create d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:42:24 compute-0 systemd[1]: Started libpod-conmon-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope.
Dec 05 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.59927406 +0000 UTC m=+0.037746543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.790569683 +0000 UTC m=+0.229042376 container init d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.81142196 +0000 UTC m=+0.249894443 container start d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.818225312 +0000 UTC m=+0.256697795 container attach d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:42:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:25 compute-0 vigilant_edison[402287]: {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     "0": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "devices": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "/dev/loop3"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             ],
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_name": "ceph_lv0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_size": "21470642176",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "name": "ceph_lv0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "tags": {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_name": "ceph",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.crush_device_class": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.encrypted": "0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_id": "0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.vdo": "0"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             },
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "vg_name": "ceph_vg0"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         }
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     ],
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     "1": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "devices": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "/dev/loop4"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             ],
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_name": "ceph_lv1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_size": "21470642176",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "name": "ceph_lv1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "tags": {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_name": "ceph",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.crush_device_class": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.encrypted": "0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_id": "1",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.vdo": "0"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             },
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "vg_name": "ceph_vg1"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         }
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     ],
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     "2": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "devices": [
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "/dev/loop5"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             ],
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_name": "ceph_lv2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_size": "21470642176",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "name": "ceph_lv2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "tags": {
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.cluster_name": "ceph",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.crush_device_class": "",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.encrypted": "0",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osd_id": "2",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:                 "ceph.vdo": "0"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             },
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "type": "block",
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:             "vg_name": "ceph_vg2"
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:         }
Dec 05 01:42:25 compute-0 vigilant_edison[402287]:     ]
Dec 05 01:42:25 compute-0 vigilant_edison[402287]: }
Dec 05 01:42:25 compute-0 systemd[1]: libpod-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope: Deactivated successfully.
Dec 05 01:42:25 compute-0 podman[402271]: 2025-12-05 01:42:25.752545074 +0000 UTC m=+1.191017567 container died d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6-merged.mount: Deactivated successfully.
Dec 05 01:42:25 compute-0 podman[402271]: 2025-12-05 01:42:25.861628844 +0000 UTC m=+1.300101317 container remove d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:42:25 compute-0 systemd[1]: libpod-conmon-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope: Deactivated successfully.
Dec 05 01:42:25 compute-0 sudo[402126]: pam_unix(sudo:session): session closed for user root
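The JSON block emitted by vigilant_edison is the payload the orchestrator actually consumes: a map from OSD id to the logical volumes backing it, each carrying its ceph.* lv_tags. A small sketch that re-issues the same call shown in the sudo line at 01:42:23 (minus the wrapper path) and flattens the result, assuming the JSON shape printed above:

    import json
    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Same invocation the log shows cephadm making.
    listing = json.loads(subprocess.run(
        ["cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)

    # Map each OSD to its logical volume and underlying device(s).
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 8c4de221-...)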
Dec 05 01:42:26 compute-0 sudo[402308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:26 compute-0 sudo[402308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:26 compute-0 sudo[402308]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:26 compute-0 sudo[402333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:42:26 compute-0 sudo[402333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:26 compute-0 sudo[402333]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:26 compute-0 sudo[402358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:26 compute-0 sudo[402358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:26 compute-0 sudo[402358]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:26 compute-0 ceph-mon[192914]: pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:26 compute-0 sudo[402383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:42:26 compute-0 sudo[402383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
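The pg_autoscaler figures above can be reproduced by hand: pg target = (fraction of root capacity used) × bias × the root's PG budget, then quantized to a power of two and clamped by per-pool minimums. The budget itself is an assumption here (3 OSDs × the default mon_target_pg_per_osd of 100 = 300); the quantization thresholds and pg_num_min handling live in the mgr module. A check against two of the logged lines:

    # Reconstruct "pg target" from the pg_autoscaler lines above, assuming a
    # PG budget of 3 OSDs x mon_target_pg_per_osd (default 100) = 300.
    def pg_target(usage_ratio: float, bias: float, budget: int = 300) -> float:
        return usage_ratio * bias * budget

    # Pool 'cephfs.cephfs.meta' (bias 4.0):
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635
    # Pool '.mgr' (bias 1.0):
    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337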
Dec 05 01:42:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:42:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Cumulative writes: 4606 writes, 20K keys, 4606 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                            Cumulative WAL: 4606 writes, 4606 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1300 writes, 5648 keys, 1300 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                            Interval WAL: 1300 writes, 1300 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    115.1      0.19              0.09        11    0.017       0      0       0.0       0.0
                                              L6      1/0    6.60 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    138.9    113.8      0.61              0.30        10    0.061     42K   5270       0.0       0.0
                                             Sum      1/0    6.60 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    105.8    114.1      0.80              0.38        21    0.038     42K   5270       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    101.4    101.3      0.35              0.17         8    0.044     18K   2065       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    138.9    113.8      0.61              0.30        10    0.061     42K   5270       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.9      0.18              0.09        10    0.018       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 1800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.021, interval 0.006
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.8 seconds
                                            Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 6.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.1e-05 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(412,6.07 MB,1.97008%) FilterBlock(22,125.17 KB,0.0396877%) IndexBlock(22,238.05 KB,0.0754765%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
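
The DUMPING STATS block above is internally consistent: the reported rates are size over the 1800 s uptime (or 600 s interval), and an overall write amplification for this mostly idle mon can be ballparked from the WAL, flush, and compaction totals. A quick check, assuming RocksDB's "GB" means 2^30 bytes (per-level W-Amp in the table is accounted differently, so this is only a rough figure):

    uptime_s = 1800.0

    # "Cumulative compaction: 0.09 GB write, 0.05 MB/s write"
    print(0.09 * 1024 / uptime_s)   # ~0.051 MB/s, matching the 0.05 shown

    # Crude overall write amplification: bytes hitting disk (WAL + flushed
    # SSTs + compaction output) per byte of ingest.
    wal, flush, compact, ingest = 0.03, 0.021, 0.09, 0.03
    print((wal + flush + compact) / ingest)   # ~4.7x
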
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.021787482 +0000 UTC m=+0.080413024 container create 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:26.986117808 +0000 UTC m=+0.044743370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:27 compute-0 systemd[1]: Started libpod-conmon-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope.
Dec 05 01:42:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.164976462 +0000 UTC m=+0.223602074 container init 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.182214467 +0000 UTC m=+0.240840009 container start 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.189857972 +0000 UTC m=+0.248483574 container attach 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:42:27 compute-0 vibrant_antonelli[402463]: 167 167
Dec 05 01:42:27 compute-0 systemd[1]: libpod-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope: Deactivated successfully.
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.19724924 +0000 UTC m=+0.255874762 container died 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd598d6fecda98660335e37ebdc7744cbbd808fafbc394a7e15906db4549f0e-merged.mount: Deactivated successfully.
Dec 05 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.284479935 +0000 UTC m=+0.343105487 container remove 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:42:27 compute-0 systemd[1]: libpod-conmon-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope: Deactivated successfully.
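
The six podman events above (create, init, start, attach, died, remove, all inside ~0.3 s) are the one-shot helper containers cephadm runs for host checks; this one printed "167 167", the ceph uid/gid baked into the image. A sketch of the same pattern, with the stat command as an assumption (the journal shows only the container's output, not its entrypoint):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: --rm makes podman emit the same create/start/
    # died/remove event sequence seen in the journal.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # "167 167" on this image
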
Dec 05 01:42:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.579087625 +0000 UTC m=+0.087707169 container create d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.542477035 +0000 UTC m=+0.051096629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:42:27 compute-0 systemd[1]: Started libpod-conmon-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope.
Dec 05 01:42:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.758539345 +0000 UTC m=+0.267158929 container init d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.7871222 +0000 UTC m=+0.295741714 container start d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.792916623 +0000 UTC m=+0.301536137 container attach d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:42:28 compute-0 ceph-mon[192914]: pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]: {
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_id": 0,
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "type": "bluestore"
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     },
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_id": 1,
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "type": "bluestore"
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     },
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_id": 2,
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:         "type": "bluestore"
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]:     }
Dec 05 01:42:28 compute-0 practical_hofstadter[402503]: }
Dec 05 01:42:28 compute-0 systemd[1]: libpod-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Deactivated successfully.
Dec 05 01:42:28 compute-0 podman[402487]: 2025-12-05 01:42:28.895814719 +0000 UTC m=+1.404434263 container died d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:42:28 compute-0 systemd[1]: libpod-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Consumed 1.117s CPU time.
Dec 05 01:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491-merged.mount: Deactivated successfully.
Dec 05 01:42:28 compute-0 podman[402487]: 2025-12-05 01:42:28.982358964 +0000 UTC m=+1.490978518 container remove d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 01:42:28 compute-0 systemd[1]: libpod-conmon-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Deactivated successfully.
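
The JSON printed by practical_hofstadter maps each OSD uuid to its backing LVM device, in the shape of `ceph-volume raw list` output; given the config-key writes that follow, this looks like cephadm's periodic device scan being persisted. Parsing it is straightforward:

    import json

    # The blob above, trimmed to the fields used here (ceph_fsid, identical
    # on all three entries, omitted for brevity).
    blob = json.dumps({
        "8c4de221-4fda-4bb1-b794-fc4329742186":
            {"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0, "type": "bluestore"},
        "944e6457-e96a-45b2-ba7f-23ecd70be9f8":
            {"device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1, "type": "bluestore"},
        "adfceb0a-e5d7-48a8-b6ba-0c42f745777c":
            {"device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2, "type": "bluestore"},
    })

    for uuid, osd in sorted(json.loads(blob).items(),
                            key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore), and so on
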
Dec 05 01:42:29 compute-0 sudo[402383]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:42:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:42:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e612973b-b7ad-4afd-aec0-06d27cca8339 does not exist
Dec 05 01:42:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1e9dc24a-a411-4d39-8025-8d1aa6c4d343 does not exist
Dec 05 01:42:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:29 compute-0 sudo[402547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:42:29 compute-0 sudo[402547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:29 compute-0 sudo[402547]: pam_unix(sudo:session): session closed for user root
Dec 05 01:42:29 compute-0 sudo[402572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:42:29 compute-0 sudo[402572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:42:29 compute-0 sudo[402572]: pam_unix(sudo:session): session closed for user root
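
The sudo pairs above are the orchestrator's standard probe: run /bin/true under sudo to confirm passwordless root for ceph-admin, then issue the real command (here listing /etc/sysctl.d). A local sketch of that check, with the -n (non-interactive) flag as an assumption; cephadm itself drives this over SSH:

    import subprocess

    def has_passwordless_sudo() -> bool:
        # `sudo -n` fails instead of prompting when a password would be needed.
        return subprocess.run(["sudo", "-n", "/bin/true"],
                              capture_output=True).returncode == 0

    if has_passwordless_sudo():
        subprocess.run(["sudo", "/bin/ls", "/etc/sysctl.d"], check=True)
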
Dec 05 01:42:29 compute-0 podman[158197]: time="2025-12-05T01:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:42:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:42:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8105 "" "Go-http-client/1.1"
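
The two HTTP lines show a Go client scraping podman's libpod REST API over its unix socket (including the `last` parameter quirk logged just before). The same query from Python, assuming the rootful socket at /run/podman/podman.sock:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
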
Dec 05 01:42:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:42:31 compute-0 ceph-mon[192914]: pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.105721) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951105772, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1503, "num_deletes": 251, "total_data_size": 2380549, "memory_usage": 2430160, "flush_reason": "Manual Compaction"}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951135271, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2336232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19393, "largest_seqno": 20895, "table_properties": {"data_size": 2329238, "index_size": 4065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14437, "raw_average_key_size": 19, "raw_value_size": 2315213, "raw_average_value_size": 3180, "num_data_blocks": 185, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898793, "oldest_key_time": 1764898793, "file_creation_time": 1764898951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 29624 microseconds, and 12389 cpu microseconds.
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.135350) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2336232 bytes OK
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.135371) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137526) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137541) EVENT_LOG_v1 {"time_micros": 1764898951137537, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2373953, prev total WAL file size 2373953, number of live WAL files 2.
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.138761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2281KB)], [47(6760KB)]
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951138858, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9258905, "oldest_snapshot_seqno": -1}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4279 keys, 7482083 bytes, temperature: kUnknown
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951213777, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7482083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7452460, "index_size": 17801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 105759, "raw_average_key_size": 24, "raw_value_size": 7373873, "raw_average_value_size": 1723, "num_data_blocks": 747, "num_entries": 4279, "num_filter_entries": 4279, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.214238) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7482083 bytes
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.217157) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.2 rd, 99.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.6 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4793, records dropped: 514 output_compression: NoCompression
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.217187) EVENT_LOG_v1 {"time_micros": 1764898951217173, "job": 24, "event": "compaction_finished", "compaction_time_micros": 75153, "compaction_time_cpu_micros": 34683, "output_level": 6, "num_output_files": 1, "total_output_size": 7482083, "num_input_records": 4793, "num_output_records": 4279, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951218104, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951221370, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.138593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
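
The amplification figures in JOB 24's "compacted to" summary are plain ratios over the logged sizes: the JOB 23 flush wrote ~2.2 MB to L0, and the compaction merged it with the 6.6 MB L6 file into a 7.1 MB output.

    # Figures from the JOB 24 summary: in(2.2, 6.6) MB, out 7.1 MB.
    mb_l0_in, mb_l6_in, mb_out = 2.2, 6.6, 7.1

    write_amplify = mb_out / mb_l0_in                     # output per L0 byte
    read_write_amplify = (mb_l0_in + mb_l6_in + mb_out) / mb_l0_in
    print(round(write_amplify, 1), round(read_write_amplify, 1))   # 3.2 7.2
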
Dec 05 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:42:32 compute-0 ceph-mon[192914]: pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:34 compute-0 ceph-mon[192914]: pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:34 compute-0 sshd-session[402597]: Connection reset by authenticating user root 45.140.17.124 port 45778 [preauth]
Dec 05 01:42:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:36 compute-0 ceph-mon[192914]: pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:36 compute-0 sshd-session[402599]: Connection reset by authenticating user root 45.140.17.124 port 45788 [preauth]
Dec 05 01:42:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:38 compute-0 ceph-mon[192914]: pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.312 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.313 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:42:38 compute-0 sshd-session[402601]: Connection reset by authenticating user root 45.140.17.124 port 45798 [preauth]
Dec 05 01:42:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:40 compute-0 ceph-mon[192914]: pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:40 compute-0 podman[402606]: 2025-12-05 01:42:40.719795811 +0000 UTC m=+0.127429757 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 01:42:40 compute-0 podman[402607]: 2025-12-05 01:42:40.725166092 +0000 UTC m=+0.129523066 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:42:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:41 compute-0 sshd-session[402604]: Invalid user user from 45.140.17.124 port 45802
Dec 05 01:42:42 compute-0 sshd-session[402604]: Connection reset by invalid user user 45.140.17.124 port 45802 [preauth]
Dec 05 01:42:42 compute-0 ceph-mon[192914]: pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:42 compute-0 podman[402648]: 2025-12-05 01:42:42.739326613 +0000 UTC m=+0.148098088 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:42:42 compute-0 podman[402669]: 2025-12-05 01:42:42.879755755 +0000 UTC m=+0.109059320 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 01:42:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:44 compute-0 ceph-mon[192914]: pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:44 compute-0 sshd-session[402647]: Connection reset by authenticating user root 45.140.17.124 port 26282 [preauth]
Dec 05 01:42:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:46 compute-0 ceph-mon[192914]: pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:42:46 compute-0 podman[402688]: 2025-12-05 01:42:46.700734002 +0000 UTC m=+0.118174277 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec 05 01:42:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:48 compute-0 ceph-mon[192914]: pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:50 compute-0 ceph-mon[192914]: pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:52 compute-0 ceph-mon[192914]: pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:53 compute-0 podman[402707]: 2025-12-05 01:42:53.746628549 +0000 UTC m=+0.156366381 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 01:42:53 compute-0 podman[402708]: 2025-12-05 01:42:53.809649032 +0000 UTC m=+0.214106216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 01:42:54 compute-0 ceph-mon[192914]: pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:54 compute-0 podman[402749]: 2025-12-05 01:42:54.735234919 +0000 UTC m=+0.136207204 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:42:54 compute-0 podman[402750]: 2025-12-05 01:42:54.737605975 +0000 UTC m=+0.133608550 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:42:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:42:56 compute-0 ceph-mon[192914]: pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:42:58 compute-0 ceph-mon[192914]: pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:42:59 compute-0 podman[158197]: time="2025-12-05T01:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:42:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:42:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec 05 01:43:00 compute-0 ceph-mon[192914]: pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:43:02 compute-0 ceph-mon[192914]: pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec 05 01:43:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876526718' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 01:43:04 compute-0 ceph-mon[192914]: pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1876526718' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 01:43:04 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14377 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 01:43:04 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 01:43:04 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 01:43:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:05 compute-0 ceph-mon[192914]: from='client.14377 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 01:43:06 compute-0 ceph-mon[192914]: pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:08 compute-0 ceph-mon[192914]: pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:10 compute-0 ceph-mon[192914]: pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:11 compute-0 podman[402793]: 2025-12-05 01:43:11.715920396 +0000 UTC m=+0.110582253 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:43:11 compute-0 podman[402792]: 2025-12-05 01:43:11.728084668 +0000 UTC m=+0.123809915 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:43:12 compute-0 nova_compute[349548]: 2025-12-05 01:43:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:12 compute-0 nova_compute[349548]: 2025-12-05 01:43:12.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:43:12 compute-0 ceph-mon[192914]: pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:43:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:43:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070138185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.720 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:43:13 compute-0 podman[402853]: 2025-12-05 01:43:13.724807938 +0000 UTC m=+0.129730412 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 01:43:13 compute-0 podman[402854]: 2025-12-05 01:43:13.748774312 +0000 UTC m=+0.148674755 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.195 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.196 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4584MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.196 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.197 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.281 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.282 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.301 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:43:14 compute-0 ceph-mon[192914]: pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3070138185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:43:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:43:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767186678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.788 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.798 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.813 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.815 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.815 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:43:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3767186678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:43:15 compute-0 nova_compute[349548]: 2025-12-05 01:43:15.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:15 compute-0 nova_compute[349548]: 2025-12-05 01:43:15.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:43:16
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.459 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.460 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:43:16 compute-0 ceph-mon[192914]: pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.461 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.485 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.487 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.488 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.488 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:43:17 compute-0 ceph-mgr[193209]: client.0 ms_handle_reset on v2:192.168.122.100:6800/858078637
Dec 05 01:43:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:17 compute-0 podman[402914]: 2025-12-05 01:43:17.710594622 +0000 UTC m=+0.128545188 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:43:18 compute-0 ceph-mon[192914]: pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:20 compute-0 ceph-mon[192914]: pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:22 compute-0 ceph-mon[192914]: pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:23 compute-0 ceph-mon[192914]: pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:24 compute-0 podman[402935]: 2025-12-05 01:43:24.713363308 +0000 UTC m=+0.121217703 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:43:24 compute-0 podman[402936]: 2025-12-05 01:43:24.76178744 +0000 UTC m=+0.170736185 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 05 01:43:24 compute-0 podman[402979]: 2025-12-05 01:43:24.901875223 +0000 UTC m=+0.096896498 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:43:24 compute-0 podman[402980]: 2025-12-05 01:43:24.953086084 +0000 UTC m=+0.139907978 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:43:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:25 compute-0 ceph-mon[192914]: pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec 05 01:43:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920326235' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 05 01:43:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3920326235' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 05 01:43:26 compute-0 ceph-mon[192914]: from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:43:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:27 compute-0 ceph-mon[192914]: pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:29 compute-0 ceph-mon[192914]: pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:29 compute-0 sudo[403022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:29 compute-0 sudo[403022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:29 compute-0 sudo[403022]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:29 compute-0 sudo[403047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:43:29 compute-0 sudo[403047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:29 compute-0 sudo[403047]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:29 compute-0 podman[158197]: time="2025-12-05T01:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:43:29 compute-0 sudo[403072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:43:29 compute-0 sudo[403072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:29 compute-0 sudo[403072]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
Dec 05 01:43:29 compute-0 sudo[403097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:43:29 compute-0 sudo[403097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:30 compute-0 sudo[403097]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7088d53b-39bd-4c69-9627-eed289b12f36 does not exist
Dec 05 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d4fad16-5f58-4fcf-8dd0-d3295ec433de does not exist
Dec 05 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bf8ed7d2-c731-42e9-a95b-dad008959aa4 does not exist
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:43:30 compute-0 sudo[403154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:30 compute-0 sudo[403154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:30 compute-0 sudo[403154]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:30 compute-0 sudo[403179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:43:30 compute-0 sudo[403179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:30 compute-0 sudo[403179]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:31 compute-0 sudo[403204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:31 compute-0 sudo[403204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:31 compute-0 sudo[403204]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:31 compute-0 sudo[403229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:43:31 compute-0 sudo[403229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:43:31 compute-0 ceph-mon[192914]: pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:31 compute-0 podman[403293]: 2025-12-05 01:43:31.876472015 +0000 UTC m=+0.111078307 container create 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:43:31 compute-0 podman[403293]: 2025-12-05 01:43:31.835553893 +0000 UTC m=+0.070160245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:31 compute-0 systemd[1]: Started libpod-conmon-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope.
Dec 05 01:43:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.014146269 +0000 UTC m=+0.248752621 container init 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec 05 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.038801143 +0000 UTC m=+0.273407435 container start 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.045761029 +0000 UTC m=+0.280367321 container attach 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec 05 01:43:32 compute-0 mystifying_dirac[403309]: 167 167
Dec 05 01:43:32 compute-0 systemd[1]: libpod-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope: Deactivated successfully.
Dec 05 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.052100447 +0000 UTC m=+0.286706759 container died 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e75bbf53bcd35b182351b025f08645b4810373b029537b66fb1032588183f4d7-merged.mount: Deactivated successfully.
Dec 05 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.13248396 +0000 UTC m=+0.367090262 container remove 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:43:32 compute-0 systemd[1]: libpod-conmon-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope: Deactivated successfully.
Dec 05 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.400605985 +0000 UTC m=+0.074871328 container create 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.372377791 +0000 UTC m=+0.046643174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:32 compute-0 systemd[1]: Started libpod-conmon-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope.
Dec 05 01:43:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.543103815 +0000 UTC m=+0.217369238 container init 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.561120292 +0000 UTC m=+0.235385635 container start 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.566710769 +0000 UTC m=+0.240976182 container attach 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:43:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:33 compute-0 ceph-mon[192914]: pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:33 compute-0 keen_stonebraker[403347]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:43:33 compute-0 keen_stonebraker[403347]: --> relative data size: 1.0
Dec 05 01:43:33 compute-0 keen_stonebraker[403347]: --> All data devices are unavailable
Dec 05 01:43:33 compute-0 systemd[1]: libpod-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Deactivated successfully.
Dec 05 01:43:33 compute-0 systemd[1]: libpod-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Consumed 1.277s CPU time.
Dec 05 01:43:33 compute-0 podman[403331]: 2025-12-05 01:43:33.904226218 +0000 UTC m=+1.578491571 container died 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e-merged.mount: Deactivated successfully.
Dec 05 01:43:33 compute-0 podman[403331]: 2025-12-05 01:43:33.995795015 +0000 UTC m=+1.670060348 container remove 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:43:34 compute-0 systemd[1]: libpod-conmon-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Deactivated successfully.
Dec 05 01:43:34 compute-0 sudo[403229]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:34 compute-0 sudo[403389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:34 compute-0 sudo[403389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:34 compute-0 sudo[403389]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:34 compute-0 sudo[403414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:43:34 compute-0 sudo[403414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:34 compute-0 sudo[403414]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:34 compute-0 sudo[403439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:34 compute-0 sudo[403439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:34 compute-0 sudo[403439]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:34 compute-0 sudo[403464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:43:34 compute-0 sudo[403464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.200315901 +0000 UTC m=+0.095283562 container create 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:43:35 compute-0 ceph-mon[192914]: pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.164560015 +0000 UTC m=+0.059527716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:35 compute-0 systemd[1]: Started libpod-conmon-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope.
Dec 05 01:43:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.336685289 +0000 UTC m=+0.231652990 container init 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.354000936 +0000 UTC m=+0.248968577 container start 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.36193773 +0000 UTC m=+0.256905361 container attach 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:43:35 compute-0 stoic_carver[403545]: 167 167
Dec 05 01:43:35 compute-0 systemd[1]: libpod-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope: Deactivated successfully.
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.365554952 +0000 UTC m=+0.260522633 container died 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0be9a4c806b0a40b06cdfb7c93640ed1c146b223cf566000f3c716d69d3288-merged.mount: Deactivated successfully.
Dec 05 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.438298039 +0000 UTC m=+0.333265670 container remove 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:43:35 compute-0 systemd[1]: libpod-conmon-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope: Deactivated successfully.
Dec 05 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.727270851 +0000 UTC m=+0.091984070 container create 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.693312205 +0000 UTC m=+0.058025474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:35 compute-0 systemd[1]: Started libpod-conmon-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope.
Dec 05 01:43:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.897240474 +0000 UTC m=+0.261953723 container init 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.931366944 +0000 UTC m=+0.296080163 container start 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.937817716 +0000 UTC m=+0.302530985 container attach 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:43:36 compute-0 reverent_wiles[403587]: {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     "0": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "devices": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "/dev/loop3"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             ],
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_name": "ceph_lv0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_size": "21470642176",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "name": "ceph_lv0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "tags": {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_name": "ceph",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.crush_device_class": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.encrypted": "0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_id": "0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.vdo": "0"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             },
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "vg_name": "ceph_vg0"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         }
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     ],
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     "1": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "devices": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "/dev/loop4"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             ],
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_name": "ceph_lv1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_size": "21470642176",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "name": "ceph_lv1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "tags": {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_name": "ceph",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.crush_device_class": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.encrypted": "0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_id": "1",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.vdo": "0"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             },
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "vg_name": "ceph_vg1"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         }
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     ],
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     "2": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "devices": [
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "/dev/loop5"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             ],
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_name": "ceph_lv2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_size": "21470642176",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "name": "ceph_lv2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "tags": {
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.cluster_name": "ceph",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.crush_device_class": "",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.encrypted": "0",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osd_id": "2",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:                 "ceph.vdo": "0"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             },
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "type": "block",
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:             "vg_name": "ceph_vg2"
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:         }
Dec 05 01:43:36 compute-0 reverent_wiles[403587]:     ]
Dec 05 01:43:36 compute-0 reverent_wiles[403587]: }
Dec 05 01:43:36 compute-0 systemd[1]: libpod-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope: Deactivated successfully.
Dec 05 01:43:36 compute-0 podman[403570]: 2025-12-05 01:43:36.786118828 +0000 UTC m=+1.150832077 container died 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d-merged.mount: Deactivated successfully.
Dec 05 01:43:36 compute-0 podman[403570]: 2025-12-05 01:43:36.873710872 +0000 UTC m=+1.238424051 container remove 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:43:36 compute-0 systemd[1]: libpod-conmon-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope: Deactivated successfully.
Dec 05 01:43:36 compute-0 sudo[403464]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:37 compute-0 sudo[403610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:37 compute-0 sudo[403610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:37 compute-0 sudo[403610]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:37 compute-0 sudo[403635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:43:37 compute-0 sudo[403635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:37 compute-0 sudo[403635]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:37 compute-0 ceph-mon[192914]: pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:37 compute-0 sudo[403660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:37 compute-0 sudo[403660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:37 compute-0 sudo[403660]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:37 compute-0 sudo[403685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:43:37 compute-0 sudo[403685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.0516151 +0000 UTC m=+0.080564108 container create 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.019052824 +0000 UTC m=+0.048001842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:38 compute-0 systemd[1]: Started libpod-conmon-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope.
Dec 05 01:43:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.211830199 +0000 UTC m=+0.240779207 container init 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.229025843 +0000 UTC m=+0.257974851 container start 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.236094832 +0000 UTC m=+0.265043890 container attach 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:43:38 compute-0 adoring_grothendieck[403764]: 167 167
Dec 05 01:43:38 compute-0 systemd[1]: libpod-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope: Deactivated successfully.
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.241556175 +0000 UTC m=+0.270505183 container died 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-067adc8194c95d3bb24241c8649b3e74e7654c2f609745fbefe60e681b4aa8f8-merged.mount: Deactivated successfully.
Dec 05 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.306650757 +0000 UTC m=+0.335599725 container remove 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:43:38 compute-0 systemd[1]: libpod-conmon-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope: Deactivated successfully.
Dec 05 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.602794141 +0000 UTC m=+0.097945507 container create 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.57078384 +0000 UTC m=+0.065935256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:43:38 compute-0 systemd[1]: Started libpod-conmon-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope.
Dec 05 01:43:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.775762629 +0000 UTC m=+0.270914055 container init 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.801516374 +0000 UTC m=+0.296667740 container start 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.808963973 +0000 UTC m=+0.304115399 container attach 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:43:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:39 compute-0 ceph-mon[192914]: pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:39 compute-0 laughing_kirch[403803]: {
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_id": 0,
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "type": "bluestore"
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     },
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_id": 1,
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "type": "bluestore"
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     },
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_id": 2,
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:         "type": "bluestore"
Dec 05 01:43:39 compute-0 laughing_kirch[403803]:     }
Dec 05 01:43:39 compute-0 laughing_kirch[403803]: }
Dec 05 01:43:40 compute-0 systemd[1]: libpod-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Deactivated successfully.
Dec 05 01:43:40 compute-0 systemd[1]: libpod-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Consumed 1.228s CPU time.
Dec 05 01:43:40 compute-0 podman[403836]: 2025-12-05 01:43:40.124281768 +0000 UTC m=+0.065102513 container died 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133-merged.mount: Deactivated successfully.
Dec 05 01:43:40 compute-0 podman[403836]: 2025-12-05 01:43:40.259653607 +0000 UTC m=+0.200474312 container remove 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:43:40 compute-0 systemd[1]: libpod-conmon-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Deactivated successfully.
Dec 05 01:43:40 compute-0 sudo[403685]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:43:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:43:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:40 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 448b738f-8d3e-4aab-b1d1-bdc02a44c502 does not exist
Dec 05 01:43:40 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 40a4e974-f7f8-43ca-8fbf-8483553eb1f3 does not exist
Dec 05 01:43:40 compute-0 sudo[403850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:43:40 compute-0 sudo[403850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:40 compute-0 sudo[403850]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:40 compute-0 sudo[403875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:43:40 compute-0 sudo[403875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:43:40 compute-0 sudo[403875]: pam_unix(sudo:session): session closed for user root
Dec 05 01:43:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:43:41 compute-0 ceph-mon[192914]: pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:42 compute-0 podman[403900]: 2025-12-05 01:43:42.724692836 +0000 UTC m=+0.128440295 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:43:42 compute-0 podman[403901]: 2025-12-05 01:43:42.749499214 +0000 UTC m=+0.150153116 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:43:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:43 compute-0 ceph-mon[192914]: pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:44 compute-0 podman[403941]: 2025-12-05 01:43:44.730005588 +0000 UTC m=+0.129580038 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:43:44 compute-0 podman[403940]: 2025-12-05 01:43:44.752498691 +0000 UTC m=+0.159386877 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 05 01:43:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:45 compute-0 ceph-mon[192914]: pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:43:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:43:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:43:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:43:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:43:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:47 compute-0 ceph-mon[192914]: pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:48 compute-0 podman[403976]: 2025-12-05 01:43:48.757719103 +0000 UTC m=+0.157755330 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Dec 05 01:43:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:49 compute-0 ceph-mon[192914]: pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:51 compute-0 ceph-mon[192914]: pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:53 compute-0 ceph-mon[192914]: pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:55 compute-0 ceph-mon[192914]: pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:55 compute-0 podman[403998]: 2025-12-05 01:43:55.718700063 +0000 UTC m=+0.107563708 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:43:55 compute-0 podman[403997]: 2025-12-05 01:43:55.756953 +0000 UTC m=+0.156393153 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:43:55 compute-0 podman[404005]: 2025-12-05 01:43:55.783776804 +0000 UTC m=+0.154958971 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, config_id=edpm, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec 05 01:43:55 compute-0 podman[403999]: 2025-12-05 01:43:55.784213497 +0000 UTC m=+0.161013422 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:43:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:57 compute-0 ceph-mon[192914]: pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:43:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:59 compute-0 ceph-mon[192914]: pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:43:59 compute-0 podman[158197]: time="2025-12-05T01:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:43:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:43:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8105 "" "Go-http-client/1.1"
Dec 05 01:44:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:01 compute-0 ceph-mon[192914]: pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:44:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:03 compute-0 ceph-mon[192914]: pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:05 compute-0 ceph-mon[192914]: pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:07 compute-0 ceph-mon[192914]: pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:09 compute-0 ceph-mon[192914]: pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:10 compute-0 sshd-session[404080]: Connection reset by authenticating user root 91.202.233.33 port 49514 [preauth]
Dec 05 01:44:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:11 compute-0 ceph-mon[192914]: pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:12 compute-0 nova_compute[349548]: 2025-12-05 01:44:12.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:12 compute-0 nova_compute[349548]: 2025-12-05 01:44:12.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:44:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:12 compute-0 sshd-session[404082]: Connection reset by authenticating user root 91.202.233.33 port 22304 [preauth]
Dec 05 01:44:13 compute-0 nova_compute[349548]: 2025-12-05 01:44:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:13 compute-0 ceph-mon[192914]: pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:13 compute-0 podman[404086]: 2025-12-05 01:44:13.715848088 +0000 UTC m=+0.125877209 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:44:13 compute-0 podman[404087]: 2025-12-05 01:44:13.739459681 +0000 UTC m=+0.143263877 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:44:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:44:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262124617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.636 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:44:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1262124617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.165 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.167 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.168 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.169 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:44:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.276 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.277 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.305 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:44:15 compute-0 ceph-mon[192914]: pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:15 compute-0 podman[404168]: 2025-12-05 01:44:15.731791248 +0000 UTC m=+0.138482663 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 05 01:44:15 compute-0 podman[404169]: 2025-12-05 01:44:15.758510419 +0000 UTC m=+0.158518376 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec 05 01:44:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:44:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676241468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.877 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.891 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.916 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.919 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.919 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:44:16 compute-0 sshd-session[404084]: Connection reset by authenticating user root 91.202.233.33 port 22332 [preauth]
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:44:16
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms']
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:44:16 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2676241468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.916 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.916 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.917 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.917 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.946 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.947 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.948 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:17 compute-0 nova_compute[349548]: 2025-12-05 01:44:17.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:44:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:17 compute-0 ceph-mon[192914]: pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:18 compute-0 sshd-session[404207]: Connection reset by authenticating user root 91.202.233.33 port 22338 [preauth]
Dec 05 01:44:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:19 compute-0 ceph-mon[192914]: pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:19 compute-0 podman[404212]: 2025-12-05 01:44:19.769772078 +0000 UTC m=+0.119791347 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, distribution-scope=public)
Dec 05 01:44:20 compute-0 sshd-session[404230]: Accepted publickey for zuul from 38.102.83.179 port 45762 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 01:44:20 compute-0 systemd-logind[792]: New session 61 of user zuul.
Dec 05 01:44:20 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec 05 01:44:20 compute-0 sshd-session[404230]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 01:44:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:22 compute-0 sshd-session[404209]: Connection reset by authenticating user root 91.202.233.33 port 22354 [preauth]
Dec 05 01:44:22 compute-0 python3[404407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:44:22 compute-0 ceph-mon[192914]: pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:44:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5879 writes, 24K keys, 5879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5879 writes, 995 syncs, 5.91 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                            Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 01:44:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:24 compute-0 ceph-mon[192914]: pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:24 compute-0 sudo[404638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awubiqaxjgivrtmmfrjzuhafkpteydxx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764899063.7693734-38464-186272741370683/AnsiballZ_command.py'
Dec 05 01:44:24 compute-0 sudo[404638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:44:24 compute-0 python3[404640]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:44:24 compute-0 sudo[404638]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:25 compute-0 sudo[404791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfskotmhqhgtyaggvsioljmjhdqynci ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764899065.3304985-38475-226854964137732/AnsiballZ_command.py'
Dec 05 01:44:25 compute-0 sudo[404791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:44:26 compute-0 podman[404794]: 2025-12-05 01:44:26.010707206 +0000 UTC m=+0.100841085 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:44:26 compute-0 podman[404793]: 2025-12-05 01:44:26.02650011 +0000 UTC m=+0.115017724 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:44:26 compute-0 podman[404796]: 2025-12-05 01:44:26.058089708 +0000 UTC m=+0.127446373 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec 05 01:44:26 compute-0 podman[404795]: 2025-12-05 01:44:26.080614321 +0000 UTC m=+0.158580868 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
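
The three health_status events above are podman's healthcheck timers firing: per the logged config_data, each container bind-mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck inside the container. A minimal sketch of reading the same health state out of band, assuming podman on PATH and the container names from this log (the State.Health field is the podman 4.x inspect layout; some older releases used State.Healthcheck):

import json, subprocess

def health(name: str) -> str:
    # podman inspect emits a JSON array with one object per queried container;
    # State.Health carries Status and FailingStreak for checked containers.
    out = subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    return (state.get("Health") or {}).get("Status", "no healthcheck")

for name in ("multipathd", "openstack_network_exporter", "ovn_controller"):
    print(name, health(name))
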
Dec 05 01:44:26 compute-0 python3[404808]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
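
The command task above is a two-step shell pipeline: date computes a timestamp 30 minutes back, then journalctl pulls nova_compute messages since that point. A sketch of the same collection step in Python, assuming the nova_compute syslog identifier as logged:

import subprocess
from datetime import datetime, timedelta

# Same window as the logged command: now minus 30 minutes,
# formatted the way journalctl's -S/--since option expects.
tstamp = (datetime.now() - timedelta(minutes=30)).strftime("%Y-%m-%d %H:%M:%S")
logs = subprocess.run(
    ["journalctl", "-t", "nova_compute", "--no-pager", "-S", tstamp],
    capture_output=True, text=True, check=True,
).stdout
print(logs)
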
Dec 05 01:44:26 compute-0 ceph-mon[192914]: pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
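
The pg_autoscaler lines above are self-consistent: the shared effective capacity 64411926528 bytes is the cluster's 60 GiB, and each printed pg target equals usage_ratio x bias x 300. The factor 300 is not printed anywhere; it is an inference matching the default of 100 target PGs per OSD across the 3 OSDs this 60 GiB cluster implies, so treat it as an assumption. A quick check against the pools with nonzero usage (the final quantization to a power of two, plus per-pool minimums such as the floor of 1 for '.mgr', is not reproduced here):

# Sketch: reproduce the pg_target column from the pg_autoscaler lines above.
# PG_BUDGET = 300 is an assumption (mon_target_pg_per_osd=100 x 3 OSDs),
# inferred from the logged numbers rather than printed by ceph itself.
PG_BUDGET = 300

pools = {
    # name: (usage_ratio, bias, pg target as logged)
    ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ".rgw.root":          (2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
}
for name, (ratio, bias, logged) in pools.items():
    target = ratio * bias * PG_BUDGET
    assert abs(target - logged) < 1e-12, (name, target, logged)
    print(f"{name}: pg target {target:.12g} (matches log)")
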
Dec 05 01:44:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:27 compute-0 sudo[404791]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:28 compute-0 ceph-mon[192914]: pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:29 compute-0 python3[405028]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
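
The stat task above asks for a sha1 checksum of /etc/rsyslog.d/10-telemetry.conf with follow=False. A minimal local equivalent of the fields the module reports, assuming the file exists on this host:

import hashlib, os, stat

PATH = "/etc/rsyslog.d/10-telemetry.conf"

st = os.stat(PATH, follow_symlinks=False)  # follow=False in the logged call
with open(PATH, "rb") as fh:
    sha1 = hashlib.sha1(fh.read()).hexdigest()
print({"mode": stat.filemode(st.st_mode), "size": st.st_size, "checksum": sha1})
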
Dec 05 01:44:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:44:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.2 total, 600.0 interval
                                            Cumulative writes: 7187 writes, 29K keys, 7187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7187 writes, 1327 syncs, 5.42 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
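
The DB Stats dump is internally consistent: the writes-per-sync column is simply WAL writes divided by syncs for each window (1800 s cumulative, 600 s interval). Checking the dumped numbers:

# Verify the derived columns in the rocksdb DB Stats dump above.
cumulative = {"writes": 7187, "syncs": 1327}   # from "Cumulative WAL"
interval   = {"writes": 180,  "syncs": 90}     # from "Interval WAL"

for label, d in (("cumulative", cumulative), ("interval", interval)):
    per_sync = d["writes"] / d["syncs"]
    print(f"{label}: {per_sync:.2f} writes per sync")
# -> cumulative: 5.42, interval: 2.00, matching the log.
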
Dec 05 01:44:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:29 compute-0 podman[158197]: time="2025-12-05T01:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:44:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:44:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8106 "" "Go-http-client/1.1"
Dec 05 01:44:30 compute-0 sudo[405179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojmtssaptdzrypgpcconjhvrlokssqz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764899069.527561-38519-8122186862849/AnsiballZ_setup.py'
Dec 05 01:44:30 compute-0 sudo[405179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:44:30 compute-0 ceph-mon[192914]: pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:30 compute-0 python3[405181]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 05 01:44:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
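
These exporter errors are expected on a compute node: openstack_network_exporter probes ovn-northd and ovsdb-server through their control sockets, but this host runs only ovn-controller, so no such socket files exist. A sketch of the same existence check; the directories match the volumes in the exporter's logged config_data, and the .ctl naming is an assumption from OVS/OVN convention:

import glob

# ovn-northd would create something like /run/ovn/ovn-northd.<pid>.ctl;
# on a compute node running only ovn-controller, these globs come up empty.
for pattern in ("/run/ovn/ovn-northd*.ctl",
                "/run/openvswitch/ovsdb-server*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control socket files found")
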
Dec 05 01:44:31 compute-0 sudo[405179]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:32 compute-0 ceph-mon[192914]: pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:33 compute-0 sudo[405414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrowyreuejwtgfjaqmvegvgmirojhubu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764899072.4662018-38548-16598857451479/AnsiballZ_command.py'
Dec 05 01:44:33 compute-0 sudo[405414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:44:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:33 compute-0 python3[405416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:44:33 compute-0 sudo[405414]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:34 compute-0 sudo[405580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbeynhuakolprplpystuptbjszdaysxt ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764899073.7548213-38565-61844606820233/AnsiballZ_command.py'
Dec 05 01:44:34 compute-0 sudo[405580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 01:44:34 compute-0 ceph-mon[192914]: pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:34 compute-0 python3[405582]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 01:44:34 compute-0 sudo[405580]: pam_unix(sudo:session): session closed for user root
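
The two checks above pipe podman ps through grep; podman's own --filter option does the name match without a shell pipeline. A sketch using the container names from the logged commands:

import subprocess

for name in ("ceilometer_agent_compute", "node_exporter"):
    out = subprocess.run(
        ["podman", "ps", "-a", "--filter", f"name={name}",
         "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out or f"{name}: not found")
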
Dec 05 01:44:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:44:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 1800.1 total, 600.0 interval
                                            Cumulative writes: 5917 writes, 24K keys, 5917 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 5917 writes, 1021 syncs, 5.80 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                            Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 01:44:36 compute-0 ceph-mon[192914]: pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.313 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.314 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
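[editor's note] The DEBUG lines above are ceilometer's polling manager draining one polling task: each meter name in brackets is a pollster that just ran. A minimal sketch of that dispatch pattern follows; the class names (Pollster, PollingManager) are illustrative only, not ceilometer's real API.

# Sketch of a polling-manager loop in the spirit of the
# ceilometer.polling.manager entries above. Illustrative names,
# not ceilometer's actual classes.
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s DEBUG %(name)s %(message)s")
LOG = logging.getLogger("polling.manager")

class Pollster:
    def __init__(self, name):
        self.name = name

    def get_samples(self):
        # A real pollster would query libvirt / the hypervisor here.
        return []

class PollingManager:
    def __init__(self, pollsters):
        self.pollsters = pollsters

    def execute_polling_task_processing(self):
        for pollster in self.pollsters:
            samples = pollster.get_samples()
            # A real agent would publish the samples here.
            LOG.debug("Finished processing pollster [%s].", pollster.name)

manager = PollingManager([Pollster(n) for n in (
    "network.incoming.packets", "memory.usage", "cpu")])
manager.execute_polling_task_processing()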
Dec 05 01:44:38 compute-0 ceph-mon[192914]: pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:40 compute-0 ceph-mon[192914]: pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:40 compute-0 sudo[405622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:40 compute-0 sudo[405622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:40 compute-0 sudo[405622]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:40 compute-0 sudo[405647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:44:40 compute-0 sudo[405647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:40 compute-0 sudo[405647]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:41 compute-0 sudo[405672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:41 compute-0 sudo[405672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:41 compute-0 sudo[405672]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:41 compute-0 sudo[405697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:44:41 compute-0 sudo[405697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:41 compute-0 sudo[405697]: pam_unix(sudo:session): session closed for user root
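[editor's note] The repeating sudo triplets (/bin/true, which python3, then the staged cephadm script) are the cephadm mgr module's probe-then-execute pattern over SSH as ceph-admin. A hedged local sketch of the same sequence, with paths copied from the log and the helper itself purely illustrative:

# Probe-then-execute pattern visible in the sudo entries above:
# confirm sudo works, locate python3, then run the staged cephadm
# script with a timeout. Each call produces the pam_unix session
# open/close pair seen in the journal.
import subprocess

def run_as_root(argv, timeout=895):
    return subprocess.run(["sudo"] + argv, timeout=timeout,
                          capture_output=True, text=True, check=True)

run_as_root(["/bin/true"])                      # can we escalate at all?
py = run_as_root(["/bin/which", "python3"]).stdout.strip()
run_as_root([py,
             "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
             "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
             "--timeout", "895", "gather-facts"])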
Dec 05 01:44:42 compute-0 sudo[405752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:42 compute-0 sudo[405752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:42 compute-0 sudo[405752]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:42 compute-0 sudo[405777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:44:42 compute-0 sudo[405777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:42 compute-0 sudo[405777]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:42 compute-0 sudo[405802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:42 compute-0 sudo[405802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:42 compute-0 sudo[405802]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:42 compute-0 ceph-mon[192914]: pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:42 compute-0 sudo[405827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 05 01:44:42 compute-0 sudo[405827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:42 compute-0 sudo[405827]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 17616af6-a58a-49a5-9032-cec58491f179 does not exist
Dec 05 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b6f979b-329d-40c6-991d-42ff470aaf31 does not exist
Dec 05 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4ada5392-ece7-42b2-811a-41983204fae1 does not exist
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
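[editor's note] Each handle_command / audit pair above is a monitor command dispatched by the mgr (entity='mgr.compute-0.afshmv'). The same commands map one-to-one onto ceph CLI calls; a sketch assuming the ceph CLI and a client.admin keyring are present on the host:

# CLI equivalents of the mon_command entries above.
import json, subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

minimal_conf = ceph("config", "generate-minimal-conf")
bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")
destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))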
Dec 05 01:44:42 compute-0 sudo[405869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:42 compute-0 sudo[405869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:42 compute-0 sudo[405869]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:43 compute-0 sudo[405894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:44:43 compute-0 sudo[405894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:43 compute-0 sudo[405894]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:43 compute-0 sudo[405919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:43 compute-0 sudo[405919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:43 compute-0 sudo[405919]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:43 compute-0 sudo[405944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:44:43 compute-0 sudo[405944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
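[editor's note] The sudo command at 01:44:43 is cephadm wrapping "ceph-volume lvm batch" over three pre-created logical volumes, inside the pinned ceph container image, with the drive-group name exported via CEPH_VOLUME_OSDSPEC_AFFINITY and the config fed over stdin ("--config-json -"). A sketch assembling that argv, with every value copied from the log line:

# Reassembling the lvm batch invocation logged above.
fsid = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
script = ("/var/lib/ceph/" + fsid + "/cephadm."
          "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
       "/dev/ceph_vg2/ceph_lv2"]

argv = ["/bin/python3", script,
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", image, "--timeout", "895",
        "ceph-volume", "--fsid", fsid, "--config-json", "-",
        "--", "lvm", "batch", "--no-auto", *lvs, "--yes", "--no-systemd"]
print(" ".join(argv))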
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:44:43 compute-0 ceph-mon[192914]: pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.863304472 +0000 UTC m=+0.067371524 container create b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:44:43 compute-0 systemd[1]: Started libpod-conmon-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope.
Dec 05 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.832427054 +0000 UTC m=+0.036494116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.993359687 +0000 UTC m=+0.197426749 container init b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.011262381 +0000 UTC m=+0.215329413 container start b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.017102295 +0000 UTC m=+0.221169337 container attach b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:44:44 compute-0 exciting_taussig[406033]: 167 167
Dec 05 01:44:44 compute-0 systemd[1]: libpod-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope: Deactivated successfully.
Dec 05 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.019349898 +0000 UTC m=+0.223416930 container died b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c43527ab1ac3b07b2ba8f365c6f1121d7de1069f6858019d8fe9c2a5bc4f1c-merged.mount: Deactivated successfully.
Dec 05 01:44:44 compute-0 podman[406022]: 2025-12-05 01:44:44.055669669 +0000 UTC m=+0.125403506 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 01:44:44 compute-0 podman[406025]: 2025-12-05 01:44:44.060196766 +0000 UTC m=+0.121049253 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
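[editor's note] The health_status=healthy events above are podman executing each container's configured healthcheck (the 'healthcheck' key inside config_data, e.g. the mounted /openstack/healthcheck script). The same check can be driven by hand; this sketch assumes podman is installed and the container names from the log exist:

# Manually trigger the healthchecks that produced the events above.
import subprocess

for name in ("ovn_metadata_agent", "podman_exporter"):
    r = subprocess.run(["podman", "healthcheck", "run", name])
    print(name, "healthy" if r.returncode == 0 else "unhealthy")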
Dec 05 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.083173302 +0000 UTC m=+0.287240334 container remove b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:44:44 compute-0 systemd[1]: libpod-conmon-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope: Deactivated successfully.
Dec 05 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.357633606 +0000 UTC m=+0.093832509 container create 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.319683619 +0000 UTC m=+0.055882582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:44 compute-0 systemd[1]: Started libpod-conmon-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope.
Dec 05 01:44:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.554761986 +0000 UTC m=+0.290960859 container init 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.587350012 +0000 UTC m=+0.323548885 container start 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.593125475 +0000 UTC m=+0.329324358 container attach 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:44:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:44:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:44:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:44:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
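[editor's note] The two client.openstack dispatches above ("df" and "osd pool get-quota" on pool "volumes") are the capacity probe an OpenStack storage client performs against the cluster. Equivalent CLI calls, assuming a keyring for client.openstack is available at the default location:

# CLI form of the client.openstack capacity probe logged above.
import json, subprocess

def ceph_json(*args):
    out = subprocess.run(
        ["ceph", "--name", "client.openstack", *args, "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)

df = ceph_json("df")
quota = ceph_json("osd", "pool", "get-quota", "volumes")
print(df["stats"]["total_avail_bytes"], quota)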
Dec 05 01:44:45 compute-0 crazy_lumiere[406100]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:44:45 compute-0 crazy_lumiere[406100]: --> relative data size: 1.0
Dec 05 01:44:45 compute-0 crazy_lumiere[406100]: --> All data devices are unavailable
Dec 05 01:44:45 compute-0 systemd[1]: libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Deactivated successfully.
Dec 05 01:44:45 compute-0 systemd[1]: libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Consumed 1.224s CPU time.
Dec 05 01:44:45 compute-0 conmon[406100]: conmon 56db989389da1f1f0a8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope/container/memory.events
Dec 05 01:44:45 compute-0 podman[406085]: 2025-12-05 01:44:45.873568942 +0000 UTC m=+1.609767835 container died 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b-merged.mount: Deactivated successfully.
Dec 05 01:44:45 compute-0 podman[406085]: 2025-12-05 01:44:45.961355739 +0000 UTC m=+1.697554592 container remove 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:44:45 compute-0 systemd[1]: libpod-conmon-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Deactivated successfully.
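[editor's note] Each short-lived cephadm helper (exciting_taussig, crazy_lumiere above) shows the full one-shot podman lifecycle in the journal: image pull, container create, init, start, attach, died, remove, bracketed by systemd libpod-conmon scopes. A throwaway container reproduces the same event sequence; the sketch assumes podman and pulls the small base image already referenced in the log labels:

# One-shot "podman run --rm" produces the same create/init/start/
# attach/died/remove events that the journal captured above.
import subprocess

subprocess.run(
    ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "id", "-u"],
    check=True)
# "podman events --since 1m" would list those lifecycle records.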
Dec 05 01:44:46 compute-0 sudo[405944]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:46 compute-0 podman[406133]: 2025-12-05 01:44:46.003449242 +0000 UTC m=+0.093463448 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Dec 05 01:44:46 compute-0 podman[406131]: 2025-12-05 01:44:46.019443922 +0000 UTC m=+0.102078960 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 05 01:44:46 compute-0 sudo[406178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:46 compute-0 sudo[406178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:46 compute-0 sudo[406178]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:46 compute-0 sudo[406203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:44:46 compute-0 sudo[406203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:46 compute-0 sudo[406203]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:46 compute-0 ceph-mon[192914]: pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:44:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:44:46 compute-0 sudo[406228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:46 compute-0 sudo[406228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:46 compute-0 sudo[406228]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:46 compute-0 sudo[406253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:44:46 compute-0 sudo[406253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.113292666 +0000 UTC m=+0.092519322 container create 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.078699553 +0000 UTC m=+0.057926259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:47 compute-0 systemd[1]: Started libpod-conmon-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope.
Dec 05 01:44:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.261617895 +0000 UTC m=+0.240844591 container init 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.279648211 +0000 UTC m=+0.258874867 container start 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.286275448 +0000 UTC m=+0.265502224 container attach 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:44:47 compute-0 agitated_poitras[406332]: 167 167
Dec 05 01:44:47 compute-0 systemd[1]: libpod-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope: Deactivated successfully.
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.29383807 +0000 UTC m=+0.273064736 container died 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-320511e234d1b44ea5df8b281e8260f0dd94e94c7290fe6aec6497123d2a28d0-merged.mount: Deactivated successfully.
Dec 05 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.358193719 +0000 UTC m=+0.337420355 container remove 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:44:47 compute-0 systemd[1]: libpod-conmon-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope: Deactivated successfully.
Dec 05 01:44:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.594431109 +0000 UTC m=+0.081494012 container create d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.559949439 +0000 UTC m=+0.047012372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:47 compute-0 systemd[1]: Started libpod-conmon-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope.
Dec 05 01:44:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.751007619 +0000 UTC m=+0.238070582 container init d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.769456148 +0000 UTC m=+0.256519051 container start d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.776355932 +0000 UTC m=+0.263418835 container attach d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:44:48 compute-0 ceph-mon[192914]: pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]: {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     "0": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "devices": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "/dev/loop3"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             ],
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_name": "ceph_lv0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_size": "21470642176",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "name": "ceph_lv0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "tags": {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_name": "ceph",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.crush_device_class": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.encrypted": "0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_id": "0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.vdo": "0"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             },
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "vg_name": "ceph_vg0"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         }
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     ],
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     "1": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "devices": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "/dev/loop4"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             ],
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_name": "ceph_lv1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_size": "21470642176",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "name": "ceph_lv1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "tags": {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_name": "ceph",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.crush_device_class": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.encrypted": "0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_id": "1",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.vdo": "0"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             },
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "vg_name": "ceph_vg1"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         }
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     ],
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     "2": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "devices": [
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "/dev/loop5"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             ],
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_name": "ceph_lv2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_size": "21470642176",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "name": "ceph_lv2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "tags": {
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.cluster_name": "ceph",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.crush_device_class": "",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.encrypted": "0",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osd_id": "2",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:                 "ceph.vdo": "0"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             },
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "type": "block",
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:             "vg_name": "ceph_vg2"
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:         }
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]:     ]
Dec 05 01:44:48 compute-0 heuristic_gauss[406371]: }
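
The JSON block that heuristic_gauss just finished printing is the tail of a ceph-volume lvm list --format json report: it is keyed by OSD id, and each entry describes the backing logical volume plus its ceph.* tags. A minimal parsing sketch, assuming the report has been captured to a file (lvm_list.json is a hypothetical name):

import json

# Parse a captured `ceph-volume lvm list --format json` report
# (the file name lvm_list.json is an assumption for illustration).
with open("lvm_list.json") as f:
    report = json.load(f)

# The report is keyed by OSD id; each value is a list of LVs.
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(osd_id, lv.get("lv_path"), tags.get("ceph.osd_fsid"),
              "encrypted" if tags.get("ceph.encrypted") == "1" else "plain")
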
Dec 05 01:44:48 compute-0 systemd[1]: libpod-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope: Deactivated successfully.
Dec 05 01:44:48 compute-0 conmon[406371]: conmon d715c8d72d6514f0205f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope/container/memory.events
Dec 05 01:44:48 compute-0 podman[406356]: 2025-12-05 01:44:48.578465365 +0000 UTC m=+1.065528268 container died d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96-merged.mount: Deactivated successfully.
Dec 05 01:44:48 compute-0 podman[406356]: 2025-12-05 01:44:48.6707971 +0000 UTC m=+1.157859973 container remove d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:44:48 compute-0 systemd[1]: libpod-conmon-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope: Deactivated successfully.
Dec 05 01:44:48 compute-0 sudo[406253]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:48 compute-0 sudo[406390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:48 compute-0 sudo[406390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:48 compute-0 sudo[406390]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:48 compute-0 sudo[406415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:44:49 compute-0 sudo[406415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:49 compute-0 sudo[406415]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:49 compute-0 sudo[406440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:49 compute-0 sudo[406440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:49 compute-0 sudo[406440]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:49 compute-0 sudo[406465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:44:49 compute-0 sudo[406465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
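
The sudo COMMAND line above shows the cephadm re-invocation pattern: the manager copies the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<digest> and runs ceph-volume inside the pinned container image. A sketch reproducing that call with subprocess, using the path and image digest verbatim from the log (this only works on a host where that copy and image actually exist):

import json
import subprocess

FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Mirror the invocation visible in the sudo log line above.
cmd = [
    "sudo", "/bin/python3",
    f"/var/lib/ceph/{FSID}/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
    "--image", IMAGE, "--timeout", "895",
    "ceph-volume", "--fsid", FSID, "--",
    "raw", "list", "--format", "json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
print(json.dumps(json.loads(out), indent=4))
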
Dec 05 01:44:49 compute-0 podman[406528]: 2025-12-05 01:44:49.92462024 +0000 UTC m=+0.102313916 container create 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:44:49 compute-0 podman[406528]: 2025-12-05 01:44:49.889495663 +0000 UTC m=+0.067189399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:49 compute-0 systemd[1]: Started libpod-conmon-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope.
Dec 05 01:44:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.063054321 +0000 UTC m=+0.240748077 container init 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.075179192 +0000 UTC m=+0.252872878 container start 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.082015974 +0000 UTC m=+0.259709660 container attach 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 05 01:44:50 compute-0 boring_saha[406544]: 167 167
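
The "167 167" printed by boring_saha is consistent with cephadm probing the image for the uid/gid of the in-image ceph user (167:167 on CentOS-based Ceph images), commonly by stat'ing /var/lib/ceph inside a throwaway container; the exact probe may differ by cephadm version. A hypothetical reproduction:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Print the numeric owner uid/gid of /var/lib/ceph inside the image.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # expected here: "167 167"
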
Dec 05 01:44:50 compute-0 systemd[1]: libpod-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope: Deactivated successfully.
Dec 05 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.088602189 +0000 UTC m=+0.266295875 container died 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:44:50 compute-0 podman[406541]: 2025-12-05 01:44:50.120120765 +0000 UTC m=+0.120639812 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9)
Dec 05 01:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-45601bda4962d8981c7cb2aefb96d011fc47490b2945fc7f57251a787dfe8b4f-merged.mount: Deactivated successfully.
Dec 05 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.159164702 +0000 UTC m=+0.336858358 container remove 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:44:50 compute-0 systemd[1]: libpod-conmon-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope: Deactivated successfully.
Dec 05 01:44:50 compute-0 ceph-mon[192914]: pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.426013282 +0000 UTC m=+0.083398075 container create e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.393961382 +0000 UTC m=+0.051346215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:44:50 compute-0 systemd[1]: Started libpod-conmon-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope.
Dec 05 01:44:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.598243573 +0000 UTC m=+0.255628406 container init e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.622202196 +0000 UTC m=+0.279586979 container start e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.628801102 +0000 UTC m=+0.286185925 container attach e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:44:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]: {
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_id": 0,
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "type": "bluestore"
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     },
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_id": 1,
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "type": "bluestore"
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     },
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_id": 2,
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:         "type": "bluestore"
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]:     }
Dec 05 01:44:51 compute-0 nostalgic_golick[406602]: }
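
This second JSON report, printed by nostalgic_golick, is the output of the ceph-volume raw list --format json call dispatched at 01:44:49: a dict keyed by OSD fsid, each entry naming the device-mapper path, OSD id, and store type. A parsing sketch under the same assumption as before (raw_list.json is a hypothetical capture):

import json

# Parse a captured `ceph-volume raw list --format json` report.
with open("raw_list.json") as f:
    osds = json.load(f)

for osd_uuid, meta in osds.items():
    assert meta["osd_uuid"] == osd_uuid  # key repeats inside the entry
    print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")

Each backing LV is 21470642176 bytes, i.e. just under 20 GiB, so the three bluestore OSDs listed here account for the 60 GiB total that the surrounding pgmap lines report.
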
Dec 05 01:44:51 compute-0 systemd[1]: libpod-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Deactivated successfully.
Dec 05 01:44:51 compute-0 systemd[1]: libpod-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Consumed 1.138s CPU time.
Dec 05 01:44:51 compute-0 podman[406635]: 2025-12-05 01:44:51.808981762 +0000 UTC m=+0.036338142 container died e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213-merged.mount: Deactivated successfully.
Dec 05 01:44:51 compute-0 podman[406635]: 2025-12-05 01:44:51.93450668 +0000 UTC m=+0.161862960 container remove e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:44:51 compute-0 systemd[1]: libpod-conmon-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Deactivated successfully.
Dec 05 01:44:52 compute-0 sudo[406465]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:44:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:44:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:52 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ec7a2e1-0333-48d5-a24b-4930c0bc42d6 does not exist
Dec 05 01:44:52 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e9271a9c-4bd9-4179-9763-a15aa1da35dd does not exist
Dec 05 01:44:52 compute-0 sudo[406650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:44:52 compute-0 sudo[406650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:52 compute-0 sudo[406650]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:52 compute-0 ceph-mon[192914]: pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:44:52 compute-0 sudo[406675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:44:52 compute-0 sudo[406675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:44:52 compute-0 sudo[406675]: pam_unix(sudo:session): session closed for user root
Dec 05 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
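
The mon's _set_new_cache_sizes line reports raw byte counts; converting them makes the periodic line easier to read. A one-off conversion of the values above:

# Convert the byte counts from the _set_new_cache_sizes line into MiB.
for name, val in {
    "cache_size": 1020054731,
    "inc_alloc": 348127232,
    "full_alloc": 348127232,
    "kv_alloc": 322961408,
}.items():
    print(f"{name}: {val / 2**20:.0f} MiB")
# cache_size ~973 MiB (~0.95 GiB); inc/full_alloc 332 MiB; kv_alloc 308 MiB
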
Dec 05 01:44:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:54 compute-0 ceph-mon[192914]: pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.172 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.172 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:44:56 compute-0 ceph-mon[192914]: pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:56 compute-0 podman[406701]: 2025-12-05 01:44:56.708368443 +0000 UTC m=+0.102291266 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:44:56 compute-0 podman[406700]: 2025-12-05 01:44:56.741304119 +0000 UTC m=+0.135838849 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 05 01:44:56 compute-0 podman[406703]: 2025-12-05 01:44:56.742373219 +0000 UTC m=+0.127582057 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7)
Dec 05 01:44:56 compute-0 podman[406702]: 2025-12-05 01:44:56.757571146 +0000 UTC m=+0.146738045 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:44:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:44:58 compute-0 ceph-mon[192914]: pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:44:59 compute-0 podman[158197]: time="2025-12-05T01:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:44:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:44:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8122 "" "Go-http-client/1.1"
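
The two GET lines are podman's REST access log: podman_exporter (configured further down with CONTAINER_HOST=unix:///run/podman/podman.sock) polls the libpod API for container lists and stats. A stdlib-only sketch of the same containers/json query over the unix socket, with the socket path taken from that config_data and therefore an assumption for other hosts:

import http.client
import json
import socket

# Minimal libpod-over-unix-socket client.
class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, sock_path):
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers:", [c["Names"] for c in containers[:3]])
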
Dec 05 01:45:00 compute-0 ceph-mon[192914]: pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
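
These openstack_network_exporter errors mean it could not locate the control sockets that ovs-appctl-style calls need: each daemon exposes a <name>.<pid>.ctl socket in its run directory, and this exporter mounts /var/run/openvswitch and /var/lib/openvswitch/ovn (as /run/ovn) per its config_data above. A quick check for those sockets, with the directory list as an assumption about the deployment:

import glob
import os

# Conventional control-socket locations; adjust to the deployment's mounts.
for rundir in ("/var/run/openvswitch", "/run/ovn", "/var/lib/openvswitch/ovn"):
    ctls = glob.glob(os.path.join(rundir, "*.ctl"))
    print(rundir, "->", ctls or "no control sockets found")
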
Dec 05 01:45:02 compute-0 ceph-mon[192914]: pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:04 compute-0 ceph-mon[192914]: pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:06 compute-0 ceph-mon[192914]: pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:08 compute-0 nova_compute[349548]: 2025-12-05 01:45:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:08 compute-0 nova_compute[349548]: 2025-12-05 01:45:08.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 01:45:08 compute-0 ceph-mon[192914]: pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:10 compute-0 ceph-mon[192914]: pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.098 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 01:45:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:12 compute-0 ceph-mon[192914]: pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:13 compute-0 nova_compute[349548]: 2025-12-05 01:45:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:14 compute-0 nova_compute[349548]: 2025-12-05 01:45:14.085 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:14 compute-0 nova_compute[349548]: 2025-12-05 01:45:14.086 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:45:14 compute-0 ceph-mon[192914]: pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:14 compute-0 podman[406782]: 2025-12-05 01:45:14.723464835 +0000 UTC m=+0.125594401 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:45:14 compute-0 podman[406783]: 2025-12-05 01:45:14.742352686 +0000 UTC m=+0.142002672 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.085 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.125 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.126 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.127 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:45:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:45:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264434584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.660 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
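
nova_compute's resource audit shells out to ceph df, as the CMD "... returned: 0 in 0.533s" line confirms. A sketch of the same probe, parsing the stats block of the JSON; it assumes the /etc/ceph/ceph.conf and client.openstack keyring referenced in the logged command are present:

import json
import subprocess

# Same capacity probe nova_compute runs above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)["stats"]
print("total:", stats["total_bytes"],
      "avail:", stats["total_avail_bytes"],
      "used:", stats["total_used_raw_bytes"])
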
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:45:16
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.257 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.258 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
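
The pci_devices field embedded in that resource-view line is itself JSON. Grouping it by vendor (assuming it was captured to pci_devices.json, a hypothetical file) separates the emulated Intel chipset functions from the virtio devices:

import json

# Group the logged pci_devices list by PCI vendor id.
with open("pci_devices.json") as f:
    devs = json.load(f)

by_vendor = {}
for d in devs:
    by_vendor.setdefault(d["vendor_id"], []).append(d["address"])
for vendor, addrs in sorted(by_vendor.items()):
    print(vendor, len(addrs), sorted(addrs))

For the list above this yields six 1af4 (virtio) devices and five 8086 functions (the emulated 440FX/PIIX chipset).
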
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.259 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.259 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
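[annotation] The acquire/release pair around this resource-tracker update (released at 01:45:17 after being held 1.123s) is oslo.concurrency's standard lock logging. A minimal sketch of the pattern using the real lockutils.synchronized decorator; the method body is a placeholder, not Nova's code:

    # The decorator serializes callers on the named lock and emits the
    # "Acquiring lock ... / Lock ... acquired ... waited Ns / released ...
    # held Ns" lines seen in this journal.
    from oslo_concurrency import lockutils

    class ResourceTracker(object):
        @lockutils.synchronized('compute_resources')
        def _update_available_resource(self, context):
            pass  # runs with the "compute_resources" lock held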
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:16 compute-0 ceph-mon[192914]: pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:16 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2264434584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.614 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.615 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:45:16 compute-0 podman[406843]: 2025-12-05 01:45:16.693931527 +0000 UTC m=+0.102536083 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.712 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 01:45:16 compute-0 podman[406844]: 2025-12-05 01:45:16.721736209 +0000 UTC m=+0.125363305 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.821 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.821 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
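[annotation] The inventory dict above is what Placement turns into schedulable capacity via capacity = (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Effective capacities implied by the inventory in the lines above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1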
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.839 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.860 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.874 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:45:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:45:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565133677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.353 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
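[annotation] The subprocess call above (0.478s round trip) is how this RBD-backed node sizes its DISK_GB inventory: shell out to ceph df and read the JSON. A hedged sketch of the same call; the stats field names follow the usual ceph df JSON schema but can vary by Ceph release:

    import json
    import subprocess

    def ceph_df(conf='/etc/ceph/ceph.conf', client='openstack'):
        # Same command line as the journal entry above.
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf])
        stats = json.loads(out)['stats']
        return stats['total_bytes'], stats['total_avail_bytes']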
Dec 05 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.365 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.379 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:45:17 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1565133677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:45:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
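[annotation] _set_new_cache_sizes logs raw byte counts; converted, the mon's cache autotuner is splitting a ~973 MiB budget into 332 MiB incremental/full osdmap allocations and a 308 MiB RocksDB cache. The conversion:

    for name, b in [('cache_size', 1020054731), ('inc_alloc', 348127232),
                    ('full_alloc', 348127232), ('kv_alloc', 322961408)]:
        print(name, round(b / 2**20, 1), 'MiB')
    # cache_size 972.8 MiB, inc/full_alloc 332.0 MiB, kv_alloc 308.0 MiB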
Dec 05 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.361 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.362 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.381 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.381 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.382 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
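[annotation] The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking its registry. A minimal sketch of the real API that produces them; the task body and spacing are placeholders, not Nova's:

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() logs "Running periodic task <name>" for each
        # registered method whose spacing interval has elapsed.
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass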
Dec 05 01:45:18 compute-0 ceph-mon[192914]: pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:20 compute-0 ceph-mon[192914]: pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:20 compute-0 podman[406905]: 2025-12-05 01:45:20.744707817 +0000 UTC m=+0.150069858 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 05 01:45:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:22 compute-0 ceph-mon[192914]: pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:24 compute-0 ceph-mon[192914]: pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
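[annotation] Each pg_autoscaler line above is the same formula: pg_target = usage_ratio * bias * budget, where a budget of 300 PGs is consistent with mon_target_pg_per_osd=100 on 3 OSDs (the OSD count is an assumption; it is not shown in this excerpt). The result is then quantized to a power of two, subject to per-pool minimums, which is why tiny targets still land on 1, 16, or 32. Reproducing two of the logged lines:

    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd=100 * 3 OSDs
    # '.mgr': matches "pg target 0.0021557249951162337 quantized to 1"
    print(7.185749983720779e-06 * 1.0 * PG_BUDGET)
    # 'cephfs.cephfs.meta': matches "pg target 0.0006104707950771635
    # quantized to 16" (metadata pools carry bias 4.0)
    print(5.087256625643029e-07 * 4.0 * PG_BUDGET)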
Dec 05 01:45:26 compute-0 ceph-mon[192914]: pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:27 compute-0 ceph-mon[192914]: pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:27 compute-0 podman[406925]: 2025-12-05 01:45:27.714431237 +0000 UTC m=+0.122886375 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:45:27 compute-0 podman[406924]: 2025-12-05 01:45:27.722860904 +0000 UTC m=+0.127892246 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:45:27 compute-0 podman[406927]: 2025-12-05 01:45:27.724570992 +0000 UTC m=+0.108439109 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:45:27 compute-0 podman[406926]: 2025-12-05 01:45:27.773054473 +0000 UTC m=+0.162422135 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:45:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:29 compute-0 podman[158197]: time="2025-12-05T01:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:45:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:45:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
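[annotation] The two GET lines are the podman system service answering REST calls over its unix socket; the client is the podman exporter (its CONTAINER_HOST=unix:///run/podman/podman.sock config appears further down). A hedged sketch of issuing the same list call, with a small HTTP-over-unix-socket shim since the stdlib has none; paths are taken verbatim from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')  # host is unused over AF_UNIX
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')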
Dec 05 01:45:30 compute-0 ceph-mon[192914]: pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
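[annotation] These exporter errors are socket discovery failing: ovs/ovn daemons expose a control socket named <daemon>.<pid>.ctl under their run directory, and a compute node runs no ovn-northd at all, hence "no control socket files found". A sketch of the lookup; the run directory is an assumption (OVN daemons typically use /var/run/ovn rather than /var/run/openvswitch):

    import glob

    def find_ctl(daemon, rundir='/var/run/openvswitch'):
        # e.g. /var/run/openvswitch/ovs-vswitchd.1234.ctl
        matches = glob.glob(f'{rundir}/{daemon}.*.ctl')
        if not matches:
            raise FileNotFoundError(
                f'no control socket files found for {daemon}')
        return matches[0]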
Dec 05 01:45:32 compute-0 ceph-mon[192914]: pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:34 compute-0 ceph-mon[192914]: pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:34 compute-0 sshd-session[404233]: Received disconnect from 38.102.83.179 port 45762:11: disconnected by user
Dec 05 01:45:34 compute-0 sshd-session[404233]: Disconnected from user zuul 38.102.83.179 port 45762
Dec 05 01:45:34 compute-0 sshd-session[404230]: pam_unix(sshd:session): session closed for user zuul
Dec 05 01:45:34 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec 05 01:45:34 compute-0 systemd[1]: session-61.scope: Consumed 12.578s CPU time.
Dec 05 01:45:34 compute-0 systemd-logind[792]: Session 61 logged out. Waiting for processes to exit.
Dec 05 01:45:34 compute-0 systemd-logind[792]: Removed session 61.
Dec 05 01:45:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:36 compute-0 ceph-mon[192914]: pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:37 compute-0 nova_compute[349548]: 2025-12-05 01:45:37.077 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:45:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:38 compute-0 ceph-mon[192914]: pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:40 compute-0 ceph-mon[192914]: pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:42 compute-0 ceph-mon[192914]: pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:44 compute-0 ceph-mon[192914]: pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:45:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:45:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:45:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:45:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:45:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
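[annotation] The audited df and "osd pool get-quota" dispatches are JSON mon commands from client.openstack (periodic pool-stats collection by the OpenStack services is the usual source). A hedged sketch of issuing the same two commands with the python-rados mon_command() binding:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    for cmd in ({'prefix': 'df', 'format': 'json'},
                {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                 'format': 'json'}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd['prefix'], '->', ret)
    cluster.shutdown()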
Dec 05 01:45:45 compute-0 podman[407007]: 2025-12-05 01:45:45.703984807 +0000 UTC m=+0.109424336 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 01:45:45 compute-0 podman[407008]: 2025-12-05 01:45:45.744974169 +0000 UTC m=+0.147543587 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:45:46 compute-0 ceph-mon[192914]: pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:47 compute-0 podman[407046]: 2025-12-05 01:45:47.724442174 +0000 UTC m=+0.134428789 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:45:47 compute-0 podman[407047]: 2025-12-05 01:45:47.732598553 +0000 UTC m=+0.133030960 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:45:48 compute-0 ceph-mon[192914]: pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:50 compute-0 ceph-mon[192914]: pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:51 compute-0 podman[407082]: 2025-12-05 01:45:51.697311087 +0000 UTC m=+0.103339716 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec 05 01:45:52 compute-0 sudo[407100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:52 compute-0 sudo[407100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:52 compute-0 sudo[407100]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:52 compute-0 ceph-mon[192914]: pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:52 compute-0 sudo[407125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:45:52 compute-0 sudo[407125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:52 compute-0 sudo[407125]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:52 compute-0 sudo[407150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:52 compute-0 sudo[407150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:52 compute-0 sudo[407150]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:52 compute-0 sudo[407175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 01:45:52 compute-0 sudo[407175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:53 compute-0 sudo[407175]: pam_unix(sudo:session): session closed for user root
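[annotation] The sudo sequence above (/bin/true, which python3, /bin/true, then the copied cephadm binary with check-host) is cephadm's connection probe followed by its host validation; the same pattern repeats below for gather-facts and ceph-volume. A sketch that mirrors the order only; cephadm's real implementation differs:

    import subprocess

    def sudo(cmd):
        return subprocess.run(['sudo'] + cmd, check=True,
                              capture_output=True, text=True).stdout.strip()

    sudo(['/bin/true'])                        # can we escalate at all?
    python3 = sudo(['/bin/which', 'python3'])  # locate the interpreter
    sudo(['/bin/true'])
    # Path pattern from the journal; <fsid>/<digest> stand in for the
    # literal values logged above.
    sudo([python3, '/var/lib/ceph/<fsid>/cephadm.<digest>',
          '--timeout', '895', 'check-host'])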
Dec 05 01:45:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:45:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:45:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:53 compute-0 sudo[407222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:53 compute-0 sudo[407222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:53 compute-0 sudo[407222]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:53 compute-0 sudo[407247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:45:53 compute-0 sudo[407247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:53 compute-0 sudo[407247]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:53 compute-0 sudo[407272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:53 compute-0 sudo[407272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:53 compute-0 sudo[407272]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:53 compute-0 sudo[407297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:45:53 compute-0 sudo[407297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:54 compute-0 ceph-mon[192914]: pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:54 compute-0 sudo[407297]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:54 compute-0 sudo[407353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:54 compute-0 sudo[407353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:54 compute-0 sudo[407353]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:54 compute-0 sudo[407378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:45:54 compute-0 sudo[407378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:54 compute-0 sudo[407378]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:54 compute-0 sudo[407403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:54 compute-0 sudo[407403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:54 compute-0 sudo[407403]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:55 compute-0 sudo[407428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- inventory --format=json-pretty --filter-for-batch
Dec 05 01:45:55 compute-0 sudo[407428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
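[annotation] This ceph-volume run asks for a JSON device inventory filtered for OSD batching; cephadm uses the result to decide which disks are eligible for new OSDs. A hedged sketch of consuming that output (field names per ceph-volume's usual inventory schema, which can shift between releases):

    import json
    import subprocess

    out = subprocess.check_output(
        ['cephadm', 'ceph-volume', '--', 'inventory', '--format=json'])
    for dev in json.loads(out):
        # 'available' is False for devices with rejected_reasons such as
        # "Insufficient space" or "Has a FileSystem".
        print(dev['path'], dev['available'], dev.get('rejected_reasons', []))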
Dec 05 01:45:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.754003484 +0000 UTC m=+0.115244830 container create 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.714764661 +0000 UTC m=+0.076006057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:45:55 compute-0 systemd[1]: Started libpod-conmon-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope.
Dec 05 01:45:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.916657905 +0000 UTC m=+0.277899301 container init 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.933996373 +0000 UTC m=+0.295237719 container start 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.941605196 +0000 UTC m=+0.302846562 container attach 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:45:55 compute-0 tender_burnell[407509]: 167 167
Dec 05 01:45:55 compute-0 systemd[1]: libpod-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope: Deactivated successfully.
Dec 05 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.945715552 +0000 UTC m=+0.306956938 container died 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d607773a2de060657b6c11bce39c69d15895930ea2e7f6662bf7909e5d353243-merged.mount: Deactivated successfully.
Dec 05 01:45:56 compute-0 podman[407492]: 2025-12-05 01:45:56.032608944 +0000 UTC m=+0.393850290 container remove 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:45:56 compute-0 systemd[1]: libpod-conmon-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope: Deactivated successfully.
Dec 05 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.173 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.176 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.3313471 +0000 UTC m=+0.097022197 container create cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:45:56 compute-0 ceph-mon[192914]: pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.28827316 +0000 UTC m=+0.053948317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:45:56 compute-0 systemd[1]: Started libpod-conmon-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope.
Dec 05 01:45:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.493020803 +0000 UTC m=+0.258695950 container init cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.511945035 +0000 UTC m=+0.277620112 container start cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.517533442 +0000 UTC m=+0.283208579 container attach cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:45:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:45:58 compute-0 ceph-mon[192914]: pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:58 compute-0 podman[408877]: 2025-12-05 01:45:58.695668512 +0000 UTC m=+0.094538518 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:45:58 compute-0 podman[408857]: 2025-12-05 01:45:58.731480608 +0000 UTC m=+0.133350039 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:45:58 compute-0 podman[408888]: 2025-12-05 01:45:58.748324632 +0000 UTC m=+0.137236759 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:45:58 compute-0 podman[408883]: 2025-12-05 01:45:58.760749191 +0000 UTC m=+0.157457877 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]: [
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:     {
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "available": false,
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "ceph_device": false,
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "lsm_data": {},
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "lvs": [],
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "path": "/dev/sr0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "rejected_reasons": [
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "Has a FileSystem",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "Insufficient space (<5GB)"
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         ],
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         "sys_api": {
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "actuators": null,
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "device_nodes": "sr0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "devname": "sr0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "human_readable_size": "482.00 KB",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "id_bus": "ata",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "model": "QEMU DVD-ROM",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "nr_requests": "2",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "parent": "/dev/sr0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "partitions": {},
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "path": "/dev/sr0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "removable": "1",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "rev": "2.5+",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "ro": "0",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "rotational": "1",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "sas_address": "",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "sas_device_handle": "",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "scheduler_mode": "mq-deadline",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "sectors": 0,
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "sectorsize": "2048",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "size": 493568.0,
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "support_discard": "2048",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "type": "disk",
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:             "vendor": "QEMU"
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:         }
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]:     }
Dec 05 01:45:58 compute-0 hardcore_fermi[407548]: ]
Dec 05 01:45:58 compute-0 systemd[1]: libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Deactivated successfully.
Dec 05 01:45:58 compute-0 systemd[1]: libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Consumed 2.534s CPU time.
Dec 05 01:45:58 compute-0 conmon[407548]: conmon cfd3d628a48a7b31438c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope/container/memory.events
Dec 05 01:45:58 compute-0 podman[407532]: 2025-12-05 01:45:58.966164344 +0000 UTC m=+2.731839411 container died cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9-merged.mount: Deactivated successfully.
Dec 05 01:45:59 compute-0 podman[407532]: 2025-12-05 01:45:59.049828756 +0000 UTC m=+2.815503813 container remove cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 01:45:59 compute-0 systemd[1]: libpod-conmon-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Deactivated successfully.
Dec 05 01:45:59 compute-0 sudo[407428]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 89d00e85-6e9c-40d6-83da-a3f4c49764c9 does not exist
Dec 05 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 17b7d2b3-59ba-40cb-99aa-29cde6a8cf09 does not exist
Dec 05 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d87436a0-8eca-4a80-840b-fee71c34b557 does not exist
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:45:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:45:59 compute-0 sudo[409709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:59 compute-0 sudo[409709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:59 compute-0 sudo[409709]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:59 compute-0 sudo[409734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:45:59 compute-0 sudo[409734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:59 compute-0 sudo[409734]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:59 compute-0 sudo[409759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:45:59 compute-0 sudo[409759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:59 compute-0 sudo[409759]: pam_unix(sudo:session): session closed for user root
Dec 05 01:45:59 compute-0 sudo[409784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:45:59 compute-0 sudo[409784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:45:59 compute-0 podman[158197]: time="2025-12-05T01:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:45:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:45:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:46:00 compute-0 ceph-mon[192914]: pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.28071271 +0000 UTC m=+0.089591739 container create 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.248057462 +0000 UTC m=+0.056936541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:00 compute-0 systemd[1]: Started libpod-conmon-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope.
Dec 05 01:46:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.421311922 +0000 UTC m=+0.230191001 container init 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.438700291 +0000 UTC m=+0.247579300 container start 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.445215914 +0000 UTC m=+0.254094993 container attach 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:46:00 compute-0 hardcore_shockley[409862]: 167 167
Dec 05 01:46:00 compute-0 systemd[1]: libpod-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope: Deactivated successfully.
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.451587673 +0000 UTC m=+0.260466682 container died 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d10a567a0c9a154bc63a0a1e95e36dc705fcc9e844c060339d5c29b0410f500d-merged.mount: Deactivated successfully.
Dec 05 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.524311167 +0000 UTC m=+0.333190206 container remove 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:46:00 compute-0 systemd[1]: libpod-conmon-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope: Deactivated successfully.
Dec 05 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.774494968 +0000 UTC m=+0.082356645 container create b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.74963006 +0000 UTC m=+0.057491767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:00 compute-0 systemd[1]: Started libpod-conmon-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope.
Dec 05 01:46:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.966397152 +0000 UTC m=+0.274258859 container init b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.994035099 +0000 UTC m=+0.301896806 container start b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:46:01 compute-0 podman[409887]: 2025-12-05 01:46:01.000954593 +0000 UTC m=+0.308816330 container attach b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:46:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:46:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:46:02 compute-0 laughing_cannon[409903]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:46:02 compute-0 laughing_cannon[409903]: --> relative data size: 1.0
Dec 05 01:46:02 compute-0 laughing_cannon[409903]: --> All data devices are unavailable
Dec 05 01:46:02 compute-0 systemd[1]: libpod-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Deactivated successfully.
Dec 05 01:46:02 compute-0 podman[409887]: 2025-12-05 01:46:02.303066761 +0000 UTC m=+1.610928518 container died b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:46:02 compute-0 systemd[1]: libpod-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Consumed 1.247s CPU time.
Dec 05 01:46:02 compute-0 ceph-mon[192914]: pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c-merged.mount: Deactivated successfully.
Dec 05 01:46:02 compute-0 podman[409887]: 2025-12-05 01:46:02.397641909 +0000 UTC m=+1.705503576 container remove b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:46:02 compute-0 systemd[1]: libpod-conmon-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Deactivated successfully.
Dec 05 01:46:02 compute-0 sudo[409784]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:02 compute-0 sudo[409944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:46:02 compute-0 sudo[409944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:02 compute-0 sudo[409944]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:02 compute-0 sudo[409969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:46:02 compute-0 sudo[409969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:02 compute-0 sudo[409969]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:02 compute-0 sudo[409994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:46:02 compute-0 sudo[409994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:02 compute-0 sudo[409994]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:03 compute-0 sudo[410019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:46:03 compute-0 sudo[410019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.641332493 +0000 UTC m=+0.088233940 container create 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.604746375 +0000 UTC m=+0.051647852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:03 compute-0 systemd[1]: Started libpod-conmon-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope.
Dec 05 01:46:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.794671633 +0000 UTC m=+0.241573160 container init 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.804365036 +0000 UTC m=+0.251266483 container start 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.810397505 +0000 UTC m=+0.257298962 container attach 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:46:03 compute-0 competent_grothendieck[410098]: 167 167
Dec 05 01:46:03 compute-0 systemd[1]: libpod-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope: Deactivated successfully.
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.819072229 +0000 UTC m=+0.265973746 container died 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e6b425de61a773851b564ad0d7f731386c35cac627a439cce9c16f1bce11a9-merged.mount: Deactivated successfully.
Dec 05 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.896547707 +0000 UTC m=+0.343449154 container remove 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:46:03 compute-0 systemd[1]: libpod-conmon-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope: Deactivated successfully.
Dec 05 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.173015107 +0000 UTC m=+0.083266771 container create 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.144633369 +0000 UTC m=+0.054885113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:04 compute-0 systemd[1]: Started libpod-conmon-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope.
Dec 05 01:46:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.304623086 +0000 UTC m=+0.214874740 container init 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.325790711 +0000 UTC m=+0.236042375 container start 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.331631525 +0000 UTC m=+0.241883179 container attach 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:46:04 compute-0 ceph-mon[192914]: pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:05 compute-0 serene_williamson[410136]: {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     "0": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "devices": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "/dev/loop3"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             ],
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_name": "ceph_lv0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_size": "21470642176",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "name": "ceph_lv0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "tags": {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_name": "ceph",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.crush_device_class": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.encrypted": "0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_id": "0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.vdo": "0"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             },
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "vg_name": "ceph_vg0"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         }
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     ],
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     "1": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "devices": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "/dev/loop4"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             ],
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_name": "ceph_lv1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_size": "21470642176",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "name": "ceph_lv1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "tags": {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_name": "ceph",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.crush_device_class": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.encrypted": "0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_id": "1",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.vdo": "0"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             },
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "vg_name": "ceph_vg1"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         }
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     ],
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     "2": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "devices": [
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "/dev/loop5"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             ],
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_name": "ceph_lv2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_size": "21470642176",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "name": "ceph_lv2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "tags": {
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.cluster_name": "ceph",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.crush_device_class": "",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.encrypted": "0",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osd_id": "2",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:                 "ceph.vdo": "0"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             },
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "type": "block",
Dec 05 01:46:05 compute-0 serene_williamson[410136]:             "vg_name": "ceph_vg2"
Dec 05 01:46:05 compute-0 serene_williamson[410136]:         }
Dec 05 01:46:05 compute-0 serene_williamson[410136]:     ]
Dec 05 01:46:05 compute-0 serene_williamson[410136]: }
Dec 05 01:46:05 compute-0 systemd[1]: libpod-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope: Deactivated successfully.
Dec 05 01:46:05 compute-0 podman[410122]: 2025-12-05 01:46:05.226523977 +0000 UTC m=+1.136775641 container died 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:46:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 01:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5-merged.mount: Deactivated successfully.
Dec 05 01:46:05 compute-0 podman[410122]: 2025-12-05 01:46:05.297676197 +0000 UTC m=+1.207927851 container remove 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:46:05 compute-0 systemd[1]: libpod-conmon-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope: Deactivated successfully.
Dec 05 01:46:05 compute-0 sudo[410019]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:05 compute-0 sudo[410157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:46:05 compute-0 sudo[410157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:05 compute-0 sudo[410157]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:05 compute-0 sudo[410182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:46:05 compute-0 sudo[410182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:05 compute-0 sudo[410182]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:05 compute-0 sudo[410207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:46:05 compute-0 sudo[410207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:05 compute-0 sudo[410207]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:05 compute-0 sudo[410232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:46:05 compute-0 sudo[410232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:06 compute-0 ceph-mon[192914]: pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.350667923 +0000 UTC m=+0.069749932 container create a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:46:06 compute-0 systemd[1]: Started libpod-conmon-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope.
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.32497273 +0000 UTC m=+0.044054769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.470417178 +0000 UTC m=+0.189499267 container init a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.485257345 +0000 UTC m=+0.204339374 container start a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.491511771 +0000 UTC m=+0.210593770 container attach a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:46:06 compute-0 magical_thompson[410311]: 167 167
Dec 05 01:46:06 compute-0 systemd[1]: libpod-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope: Deactivated successfully.
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.497207691 +0000 UTC m=+0.216289720 container died a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-41523574de32a182627a3be617350c1fbbcfe1641b3fcffed3b840530ba73220-merged.mount: Deactivated successfully.
Dec 05 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.581279384 +0000 UTC m=+0.300361423 container remove a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:46:06 compute-0 systemd[1]: libpod-conmon-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope: Deactivated successfully.
Dec 05 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.854533634 +0000 UTC m=+0.075692808 container create ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:46:06 compute-0 systemd[1]: Started libpod-conmon-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope.
Dec 05 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.830095247 +0000 UTC m=+0.051254471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:46:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.981179364 +0000 UTC m=+0.202338588 container init ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:46:07 compute-0 podman[410334]: 2025-12-05 01:46:07.010744395 +0000 UTC m=+0.231903609 container start ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:46:07 compute-0 podman[410334]: 2025-12-05 01:46:07.018870123 +0000 UTC m=+0.240029327 container attach ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:46:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:46:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:08 compute-0 nice_lamport[410350]: {
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_id": 0,
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "type": "bluestore"
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     },
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_id": 1,
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "type": "bluestore"
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     },
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_id": 2,
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:46:08 compute-0 nice_lamport[410350]:         "type": "bluestore"
Dec 05 01:46:08 compute-0 nice_lamport[410350]:     }
Dec 05 01:46:08 compute-0 nice_lamport[410350]: }
Dec 05 01:46:08 compute-0 systemd[1]: libpod-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Deactivated successfully.
Dec 05 01:46:08 compute-0 podman[410334]: 2025-12-05 01:46:08.151199888 +0000 UTC m=+1.372359072 container died ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:46:08 compute-0 systemd[1]: libpod-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Consumed 1.140s CPU time.
Dec 05 01:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652-merged.mount: Deactivated successfully.
Dec 05 01:46:08 compute-0 podman[410334]: 2025-12-05 01:46:08.238304436 +0000 UTC m=+1.459463610 container remove ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 01:46:08 compute-0 systemd[1]: libpod-conmon-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Deactivated successfully.
Dec 05 01:46:08 compute-0 sudo[410232]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:46:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:46:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b9f0602-cf97-42e6-bc5a-75f8b593e147 does not exist
Dec 05 01:46:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 55b19990-59fc-42fe-92f6-22ef2319c8dd does not exist
Dec 05 01:46:08 compute-0 ceph-mon[192914]: pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:46:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:46:08 compute-0 sudo[410393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:46:08 compute-0 sudo[410393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:08 compute-0 sudo[410393]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:08 compute-0 sudo[410418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:46:08 compute-0 sudo[410418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:46:08 compute-0 sudo[410418]: pam_unix(sudo:session): session closed for user root
Dec 05 01:46:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:46:10 compute-0 ceph-mon[192914]: pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:46:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:12 compute-0 ceph-mon[192914]: pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:14 compute-0 ceph-mon[192914]: pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:15 compute-0 nova_compute[349548]: 2025-12-05 01:46:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:46:16
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:16 compute-0 ceph-mon[192914]: pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.449005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176449073, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3470778, "memory_usage": 3532560, "flush_reason": "Manual Compaction"}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176477687, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3383215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20896, "largest_seqno": 22942, "table_properties": {"data_size": 3373943, "index_size": 5830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18421, "raw_average_key_size": 19, "raw_value_size": 3355552, "raw_average_value_size": 3627, "num_data_blocks": 264, "num_entries": 925, "num_filter_entries": 925, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898952, "oldest_key_time": 1764898952, "file_creation_time": 1764899176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 28787 microseconds, and 15546 cpu microseconds.
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.477798) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3383215 bytes OK
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.477833) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480389) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480414) EVENT_LOG_v1 {"time_micros": 1764899176480405, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3462222, prev total WAL file size 3462222, number of live WAL files 2.
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.482774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3303KB)], [50(7306KB)]
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176482938, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10865298, "oldest_snapshot_seqno": -1}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4690 keys, 9147253 bytes, temperature: kUnknown
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176570418, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9147253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9113411, "index_size": 20996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 114753, "raw_average_key_size": 24, "raw_value_size": 9026081, "raw_average_value_size": 1924, "num_data_blocks": 885, "num_entries": 4690, "num_filter_entries": 4690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.571645) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9147253 bytes
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.649684) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.8 rd, 103.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5204, records dropped: 514 output_compression: NoCompression
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.649722) EVENT_LOG_v1 {"time_micros": 1764899176649708, "job": 26, "event": "compaction_finished", "compaction_time_micros": 88479, "compaction_time_cpu_micros": 21003, "output_level": 6, "num_output_files": 1, "total_output_size": 9147253, "num_input_records": 5204, "num_output_records": 4690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176654614, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176657498, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.482422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:46:16 compute-0 podman[410444]: 2025-12-05 01:46:16.731437874 +0000 UTC m=+0.128935835 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:46:16 compute-0 podman[410443]: 2025-12-05 01:46:16.748863223 +0000 UTC m=+0.147323591 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.093 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.134 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:46:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec 05 01:46:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:46:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289739040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.588 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.132 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.216 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.217 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.254 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:46:18 compute-0 ceph-mon[192914]: pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec 05 01:46:18 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4289739040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:46:18 compute-0 podman[410525]: 2025-12-05 01:46:18.722505934 +0000 UTC m=+0.134740068 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:46:18 compute-0 podman[410526]: 2025-12-05 01:46:18.743028271 +0000 UTC m=+0.154515664 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 05 01:46:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:46:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195552152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.785 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.794 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.816 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.819 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.820 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:46:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 01:46:19 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1195552152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:46:19 compute-0 nova_compute[349548]: 2025-12-05 01:46:19.793 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:19 compute-0 nova_compute[349548]: 2025-12-05 01:46:19.794 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:46:20 compute-0 ceph-mon[192914]: pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 01:46:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 01:46:22 compute-0 ceph-mon[192914]: pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 05 01:46:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:22 compute-0 podman[410566]: 2025-12-05 01:46:22.731228064 +0000 UTC m=+0.138652718 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:46:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:24 compute-0 ceph-mon[192914]: pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:26 compute-0 ceph-mon[192914]: pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:46:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:28 compute-0 ceph-mon[192914]: pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:29 compute-0 podman[410586]: 2025-12-05 01:46:29.689796963 +0000 UTC m=+0.090705430 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:46:29 compute-0 podman[410585]: 2025-12-05 01:46:29.700972667 +0000 UTC m=+0.111801313 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:46:29 compute-0 podman[410588]: 2025-12-05 01:46:29.728757258 +0000 UTC m=+0.127041021 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350)
Dec 05 01:46:29 compute-0 podman[410587]: 2025-12-05 01:46:29.744006937 +0000 UTC m=+0.157292262 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 05 01:46:29 compute-0 podman[158197]: time="2025-12-05T01:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:46:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:46:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8118 "" "Go-http-client/1.1"
Dec 05 01:46:30 compute-0 ceph-mon[192914]: pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:46:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:32 compute-0 ceph-mon[192914]: pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:33 compute-0 ceph-mon[192914]: pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 05 01:46:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 05 01:46:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 05 01:46:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 05 01:46:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 05 01:46:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 05 01:46:36 compute-0 ceph-mon[192914]: osdmap e120: 3 total, 3 up, 3 in
Dec 05 01:46:36 compute-0 ceph-mon[192914]: pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 05 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 05 01:46:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 05 01:46:37 compute-0 ceph-mon[192914]: osdmap e121: 3 total, 3 up, 3 in
Dec 05 01:46:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec 05 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:38 compute-0 ceph-mon[192914]: osdmap e122: 3 total, 3 up, 3 in
Dec 05 01:46:38 compute-0 ceph-mon[192914]: pgmap v1113: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.314 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
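The ceilometer DEBUG lines above all trace a single poll cycle: for each pollster the agent runs the shared local_instances discovery, and since the discovery cache stays empty (apparently no instances on this hypervisor), every pollster is skipped with "no resources found this cycle". A minimal sketch of that discover-then-skip pattern, using hypothetical stand-in classes rather than the real ceilometer API:

    # Sketch only: Pollster and run_cycle are stand-ins, not ceilometer classes.
    from concurrent.futures import ThreadPoolExecutor

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # The real pollsters query libvirt; this stub just labels resources.
            return [f"{self.name}:{r}" for r in resources]

    def run_cycle(pollsters, discover):
        # One discovery result is cached and shared by every pollster in the cycle.
        discovery_cache = {"local_instances": discover()}
        with ThreadPoolExecutor() as executor:
            for p in pollsters:
                resources = discovery_cache["local_instances"]
                if not resources:
                    print(f"Skip pollster {p.name}, no resources found this cycle")
                    continue
                executor.submit(p.get_samples, resources)

    # No instances on the host -> every pollster is skipped, as in the log.
    run_cycle([Pollster("cpu"), Pollster("memory.usage")], discover=lambda: [])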
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:46:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec 05 01:46:40 compute-0 ceph-mon[192914]: pgmap v1114: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec 05 01:46:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.5 MiB/s wr, 11 op/s
Dec 05 01:46:42 compute-0 ceph-mon[192914]: pgmap v1115: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.5 MiB/s wr, 11 op/s
Dec 05 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 05 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 05 01:46:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 05 01:46:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec 05 01:46:43 compute-0 ceph-mon[192914]: osdmap e123: 3 total, 3 up, 3 in
Dec 05 01:46:44 compute-0 ceph-mon[192914]: pgmap v1117: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
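The ceph-mgr/ceph-mon pgmap lines repeat one fixed summary format (map version, pg states, data/used/avail). A small parser for that summary; the regex below is written against the exact lines in this log, not an official schema, so treat the pattern as an assumption:

    import re

    # Matched to the pgmap lines in this log; Ceph's summary reads
    # "<avail> / <total> avail".
    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v1114: 321 pgs: 321 active+clean; 16 MiB data, "
            "164 MiB used, 60 GiB / 60 GiB avail")
    print(PGMAP_RE.search(line).groupdict())
    # {'version': '1114', 'pgs': '321', 'data': '16 MiB', 'used': '164 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}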
Dec 05 01:46:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:46:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:46:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:46:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:46:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Dec 05 01:46:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:46:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
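The audit lines show entity 'client.openstack' at 192.168.122.10 (likely the Cinder RBD driver reporting backend capacity) polling the cluster with "df" and "osd pool get-quota" on the volumes pool. The same capacity query from the CLI via subprocess; the JSON key names below match current Ceph releases but may differ across versions:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])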
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.466 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.468 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.469 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
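These three lines are the metadata agent acknowledging an OVN southbound config bump: SB_Global.nb_cfg moved from 1 to 2, so the agent writes the new value into its Chassis_Private row under the 'neutron:ovn-metadata-sb-cfg' external_id, which is how ovn-northd sees the agent is alive and caught up. A plain-dict sketch of just that acknowledgement logic (the real agent does it through an ovsdbapp transaction, as the DbSetCommand line shows):

    # Stand-in rows as dicts; only the ack logic is illustrated.
    def ack_sb_cfg(chassis_private, sb_global):
        new_cfg = sb_global["nb_cfg"]
        chassis_private["external_ids"]["neutron:ovn-metadata-sb-cfg"] = str(new_cfg)
        return chassis_private

    row = {"external_ids": {}}
    print(ack_sb_cfg(row, {"nb_cfg": 2}))
    # {'external_ids': {'neutron:ovn-metadata-sb-cfg': '2'}}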
Dec 05 01:46:46 compute-0 ceph-mon[192914]: pgmap v1118: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Dec 05 01:46:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec 05 01:46:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:47 compute-0 podman[410670]: 2025-12-05 01:46:47.740202133 +0000 UTC m=+0.136728564 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:46:47 compute-0 podman[410669]: 2025-12-05 01:46:47.756709047 +0000 UTC m=+0.156881911 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:46:48 compute-0 ceph-mon[192914]: pgmap v1119: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec 05 01:46:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec 05 01:46:49 compute-0 podman[410713]: 2025-12-05 01:46:49.721597442 +0000 UTC m=+0.122529585 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec 05 01:46:49 compute-0 podman[410712]: 2025-12-05 01:46:49.744825775 +0000 UTC m=+0.150213443 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
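Each health_status=healthy record above is podman firing the container's configured healthcheck ('test': '/openstack/healthcheck ...') on its timer. The same check can be triggered by hand; 'podman healthcheck run' exits 0 only while the container's test command passes:

    import subprocess

    def is_healthy(container: str) -> bool:
        # 'podman healthcheck run NAME' returns non-zero when the check fails.
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
        )
        return result.returncode == 0

    print(is_healthy("ceilometer_agent_compute"))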
Dec 05 01:46:50 compute-0 ceph-mon[192914]: pgmap v1120: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec 05 01:46:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 307 B/s wr, 3 op/s
Dec 05 01:46:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:52 compute-0 ceph-mon[192914]: pgmap v1121: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 307 B/s wr, 3 op/s
Dec 05 01:46:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:53 compute-0 podman[410752]: 2025-12-05 01:46:53.722632345 +0000 UTC m=+0.129770398 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, managed_by=edpm_ansible)
Dec 05 01:46:54 compute-0 ceph-mon[192914]: pgmap v1122: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.173 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
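The acquire/release pair above comes from oslo.concurrency: ProcessMonitor wraps its child-process scan in a named lock so only one scan runs at a time. The equivalent decorator usage, with a stub body since the real method verifies and respawns external child daemons:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Stub: the real ProcessMonitor respawns dead haproxy children here.
        return "checked"

    check_child_processes()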
Dec 05 01:46:56 compute-0 ceph-mon[192914]: pgmap v1123: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:56 compute-0 sshd-session[410771]: Connection closed by 5.101.64.6 port 60023
Dec 05 01:46:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:46:58 compute-0 ceph-mon[192914]: pgmap v1124: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:59 compute-0 podman[158197]: time="2025-12-05T01:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:46:59 compute-0 ceph-mon[192914]: pgmap v1125: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:46:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:46:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
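The two GET lines are the prometheus-podman-exporter scraping the libpod REST API through the podman socket. The same container listing by hand, shelling out to curl with its --unix-socket option (the dummy host "d" is the usual placeholder for socket-based requests; the v4.9.3 path prefix is taken from the log):

    import json
    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, check=True, text=True,
    ).stdout
    print(len(json.loads(out)), "containers")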
Dec 05 01:47:00 compute-0 podman[410772]: 2025-12-05 01:47:00.735235793 +0000 UTC m=+0.132544866 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:00 compute-0 podman[410773]: 2025-12-05 01:47:00.740409419 +0000 UTC m=+0.133763701 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:47:00 compute-0 podman[410775]: 2025-12-05 01:47:00.742737114 +0000 UTC m=+0.118146452 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec 05 01:47:00 compute-0 podman[410774]: 2025-12-05 01:47:00.784523709 +0000 UTC m=+0.170339789 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
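These ERROR lines recur on every scrape because ovn-northd and the OVN database servers run on controller nodes, not on this compute node, so the exporter finds no control sockets to issue appctl calls against. A guard in the same spirit, assuming the common /var/run/ovn runtime directory (the path is an assumption and varies by packaging):

    import glob

    # ovn-northd creates ovn-northd.<pid>.ctl in its runtime dir when running.
    if not glob.glob("/var/run/ovn/ovn-northd.*.ctl"):
        print("no control socket files found for ovn-northd")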
Dec 05 01:47:02 compute-0 ceph-mon[192914]: pgmap v1126: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:04 compute-0 ceph-mon[192914]: pgmap v1127: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:06 compute-0 ceph-mon[192914]: pgmap v1128: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:08 compute-0 ceph-mon[192914]: pgmap v1129: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:08 compute-0 sudo[410850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:08 compute-0 sudo[410850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:08 compute-0 sudo[410850]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:08 compute-0 sudo[410875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:47:08 compute-0 sudo[410875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:08 compute-0 sudo[410875]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:09 compute-0 sudo[410900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:09 compute-0 sudo[410900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:09 compute-0 sudo[410900]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:09 compute-0 sudo[410925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:47:09 compute-0 sudo[410925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:09 compute-0 sudo[410925]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ad6dd14d-5b37-4462-a897-263f65d64d47 does not exist
Dec 05 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0751906c-ebaf-4820-bf03-44ce664e0a60 does not exist
Dec 05 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c47572b3-fe68-410d-aeed-27ce94544431 does not exist
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:47:10 compute-0 sudo[410980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:10 compute-0 sudo[410980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:10 compute-0 sudo[410980]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:10 compute-0 sudo[411005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:47:10 compute-0 sudo[411005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:10 compute-0 sudo[411005]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:10 compute-0 sudo[411030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:10 compute-0 sudo[411030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:10 compute-0 sudo[411030]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:10 compute-0 ceph-mon[192914]: pgmap v1130: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:47:10 compute-0 sudo[411055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:47:10 compute-0 sudo[411055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.096441296 +0000 UTC m=+0.094865987 container create f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.060069554 +0000 UTC m=+0.058494305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:11 compute-0 systemd[1]: Started libpod-conmon-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope.
Dec 05 01:47:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.239344063 +0000 UTC m=+0.237768754 container init f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.255374693 +0000 UTC m=+0.253799384 container start f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.262642837 +0000 UTC m=+0.261067588 container attach f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:11 compute-0 youthful_neumann[411137]: 167 167
Dec 05 01:47:11 compute-0 systemd[1]: libpod-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope: Deactivated successfully.
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.268576694 +0000 UTC m=+0.267001405 container died f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:47:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-758371781c17c3b7e6a0aa7dd7efa541ce08c4b5eb8694a3ccd3cb20fe444aa1-merged.mount: Deactivated successfully.
Dec 05 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.35808191 +0000 UTC m=+0.356506611 container remove f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:47:11 compute-0 systemd[1]: libpod-conmon-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope: Deactivated successfully.
Dec 05 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.648607134 +0000 UTC m=+0.091196363 container create 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.61357058 +0000 UTC m=+0.056159889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:11 compute-0 systemd[1]: Started libpod-conmon-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope.
Dec 05 01:47:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.850071457 +0000 UTC m=+0.292660766 container init 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.873747202 +0000 UTC m=+0.316336471 container start 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.880746049 +0000 UTC m=+0.323335358 container attach 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:47:12 compute-0 ceph-mon[192914]: pgmap v1131: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:13 compute-0 cool_ptolemy[411178]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:47:13 compute-0 cool_ptolemy[411178]: --> relative data size: 1.0
Dec 05 01:47:13 compute-0 cool_ptolemy[411178]: --> All data devices are unavailable
Dec 05 01:47:13 compute-0 systemd[1]: libpod-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Deactivated successfully.
Dec 05 01:47:13 compute-0 systemd[1]: libpod-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Consumed 1.277s CPU time.
Dec 05 01:47:13 compute-0 podman[411162]: 2025-12-05 01:47:13.200696628 +0000 UTC m=+1.643285897 container died 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06-merged.mount: Deactivated successfully.
Dec 05 01:47:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:13 compute-0 podman[411162]: 2025-12-05 01:47:13.312725277 +0000 UTC m=+1.755314536 container remove 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:47:13 compute-0 systemd[1]: libpod-conmon-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Deactivated successfully.
Dec 05 01:47:13 compute-0 sudo[411055]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:13 compute-0 sudo[411218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:13 compute-0 sudo[411218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:13 compute-0 sudo[411218]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:13 compute-0 sudo[411243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:47:13 compute-0 sudo[411243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:13 compute-0 sudo[411243]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:13 compute-0 sudo[411268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:13 compute-0 sudo[411268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:13 compute-0 sudo[411268]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:13 compute-0 sudo[411293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:47:13 compute-0 sudo[411293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.52041304 +0000 UTC m=+0.068576558 container create 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:47:14 compute-0 systemd[1]: Started libpod-conmon-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope.
Dec 05 01:47:14 compute-0 ceph-mon[192914]: pgmap v1132: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.499320107 +0000 UTC m=+0.047483655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.664830089 +0000 UTC m=+0.212993677 container init 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.683464743 +0000 UTC m=+0.231628291 container start 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:47:14 compute-0 distracted_engelbart[411373]: 167 167
Dec 05 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.689746939 +0000 UTC m=+0.237910487 container attach 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:47:14 compute-0 systemd[1]: libpod-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope: Deactivated successfully.
Dec 05 01:47:14 compute-0 podman[411378]: 2025-12-05 01:47:14.769762258 +0000 UTC m=+0.057435245 container died 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 01:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecef7733cd8f40583d7892957fa3ce53ece3ec169a85157184062b73811d0f1b-merged.mount: Deactivated successfully.
Dec 05 01:47:14 compute-0 podman[411378]: 2025-12-05 01:47:14.842297397 +0000 UTC m=+0.129970414 container remove 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 05 01:47:14 compute-0 systemd[1]: libpod-conmon-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope: Deactivated successfully.
Dec 05 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.152259548 +0000 UTC m=+0.091635046 container create cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.118323885 +0000 UTC m=+0.057699443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:15 compute-0 systemd[1]: Started libpod-conmon-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope.
Dec 05 01:47:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.338569194 +0000 UTC m=+0.277944772 container init cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.350395657 +0000 UTC m=+0.289771165 container start cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.358033981 +0000 UTC m=+0.297409739 container attach cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:47:16 compute-0 nova_compute[349548]: 2025-12-05 01:47:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:16 compute-0 eager_brown[411415]: {
Dec 05 01:47:16 compute-0 eager_brown[411415]:     "0": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:         {
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "devices": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "/dev/loop3"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             ],
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_name": "ceph_lv0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_size": "21470642176",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "name": "ceph_lv0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "tags": {
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_name": "ceph",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.crush_device_class": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.encrypted": "0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_id": "0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.vdo": "0"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             },
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "vg_name": "ceph_vg0"
Dec 05 01:47:16 compute-0 eager_brown[411415]:         }
Dec 05 01:47:16 compute-0 eager_brown[411415]:     ],
Dec 05 01:47:16 compute-0 eager_brown[411415]:     "1": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:         {
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "devices": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "/dev/loop4"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             ],
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_name": "ceph_lv1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_size": "21470642176",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "name": "ceph_lv1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "tags": {
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_name": "ceph",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.crush_device_class": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.encrypted": "0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_id": "1",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.vdo": "0"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             },
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "vg_name": "ceph_vg1"
Dec 05 01:47:16 compute-0 eager_brown[411415]:         }
Dec 05 01:47:16 compute-0 eager_brown[411415]:     ],
Dec 05 01:47:16 compute-0 eager_brown[411415]:     "2": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:         {
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "devices": [
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "/dev/loop5"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             ],
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_name": "ceph_lv2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_size": "21470642176",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "name": "ceph_lv2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "tags": {
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.cluster_name": "ceph",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.crush_device_class": "",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.encrypted": "0",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osd_id": "2",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:                 "ceph.vdo": "0"
Dec 05 01:47:16 compute-0 eager_brown[411415]:             },
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "type": "block",
Dec 05 01:47:16 compute-0 eager_brown[411415]:             "vg_name": "ceph_vg2"
Dec 05 01:47:16 compute-0 eager_brown[411415]:         }
Dec 05 01:47:16 compute-0 eager_brown[411415]:     ]
Dec 05 01:47:16 compute-0 eager_brown[411415]: }
Dec 05 01:47:16 compute-0 systemd[1]: libpod-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope: Deactivated successfully.
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:47:16
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.meta']
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:16 compute-0 podman[411424]: 2025-12-05 01:47:16.321093419 +0000 UTC m=+0.049278216 container died cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959-merged.mount: Deactivated successfully.
Dec 05 01:47:16 compute-0 podman[411424]: 2025-12-05 01:47:16.437649925 +0000 UTC m=+0.165834682 container remove cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:47:16 compute-0 systemd[1]: libpod-conmon-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope: Deactivated successfully.
Dec 05 01:47:16 compute-0 sudo[411293]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:16 compute-0 ceph-mon[192914]: pgmap v1133: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:16 compute-0 sudo[411438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:16 compute-0 sudo[411438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:16 compute-0 sudo[411438]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:47:16 compute-0 sudo[411463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:47:16 compute-0 sudo[411463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:16 compute-0 sudo[411463]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:16 compute-0 sudo[411488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:16 compute-0 sudo[411488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:16 compute-0 sudo[411488]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:17 compute-0 sudo[411513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:47:17 compute-0 sudo[411513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.087 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.090 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.120 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.122 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.123 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:47:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089642857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.653 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
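The audit above shells out to the `ceph` CLI (about 0.5 s per call) rather than using a librados binding. A minimal sketch of the same call, assuming the `ceph` binary and the `client.openstack` keyring are reachable; the `stats` keys follow the usual `ceph df --format=json` schema, and error handling is simplified:

```python
# Sketch of the `ceph df` call the resource tracker runs above; the
# command line mirrors the log. Assumes the standard `ceph df` JSON
# schema ("stats" totals plus a "pools" list); no retries or backoff.
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        capture_output=True, text=True, check=True, timeout=30,
    )
    return json.loads(out.stdout)

report = ceph_df()
print(report["stats"]["total_bytes"], report["stats"]["total_avail_bytes"])
```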
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.671219586 +0000 UTC m=+0.088641032 container create f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:47:17 compute-0 systemd[1]: Started libpod-conmon-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope.
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.639562176 +0000 UTC m=+0.056983622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.804271356 +0000 UTC m=+0.221692812 container init f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.81580318 +0000 UTC m=+0.233224616 container start f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.821796958 +0000 UTC m=+0.239218384 container attach f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:47:17 compute-0 compassionate_hermann[411616]: 167 167
Dec 05 01:47:17 compute-0 systemd[1]: libpod-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope: Deactivated successfully.
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.826306135 +0000 UTC m=+0.243727581 container died f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d2db5cb3fbaa14d7c83a7c36671b03f6d538226353cbb25353112129626a53-merged.mount: Deactivated successfully.
Dec 05 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.890275913 +0000 UTC m=+0.307697329 container remove f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:47:17 compute-0 systemd[1]: libpod-conmon-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope: Deactivated successfully.
Dec 05 01:47:17 compute-0 podman[411620]: 2025-12-05 01:47:17.947873682 +0000 UTC m=+0.127664539 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:47:17 compute-0 podman[411619]: 2025-12-05 01:47:17.977670709 +0000 UTC m=+0.153613208 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.085301405 +0000 UTC m=+0.067566680 container create 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.137 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
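The resource view above embeds the host's PCI device list as a JSON array. A small sketch of tallying those entries by vendor:product pair, with `pci_devices` excerpted from the log line (two of the eleven devices shown):

```python
# Count PCI devices per vendor:product id, as reported in the
# hypervisor resource view above. Excerpted sample data; the real
# list also carries dev_id, numa_node, label and dev_type fields.
import collections
import json

pci_devices = json.loads(
    '[{"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},'
    ' {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"}]'
)
counts = collections.Counter(
    f'{d["vendor_id"]}:{d["product_id"]}' for d in pci_devices
)
print(counts)  # Counter({'8086:7113': 1, '1af4:1000': 1})
```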
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.053634655 +0000 UTC m=+0.035899930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:47:18 compute-0 systemd[1]: Started libpod-conmon-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope.
Dec 05 01:47:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.228981473 +0000 UTC m=+0.211246788 container init 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.236 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.236 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.260576901 +0000 UTC m=+0.242842146 container start 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.266645351 +0000 UTC m=+0.248910626 container attach 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.278 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:18 compute-0 ceph-mon[192914]: pgmap v1134: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:18 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1089642857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:47:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996216443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.775 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.788 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.808 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
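Placement turns this inventory into schedulable capacity as (total - reserved) × allocation_ratio, which is why an 8-vCPU, 7680 MB guest VM can host far more than its physical resources suggest. A sketch of that arithmetic over the exact inventory logged above:

```python
# Capacity math for the inventory reported above, assuming Placement's
# documented formula: capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1
```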
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.810 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.811 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
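The acquire/release pairs around `compute_resources` (held 0.673 s here) come from oslo.concurrency's lock wrapper. A sketch of the same pattern; the lock name matches the log, the function body is illustrative only, not nova's actual code:

```python
# The `inner` frames in the lockutils log lines above correspond to
# this decorator pattern from oslo.concurrency.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # resource-tracker work runs while the semaphore is held;
    # the wrapper logs the acquire, wait time and hold time.
    pass
```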
Dec 05 01:47:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]: {
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_id": 0,
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "type": "bluestore"
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     },
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_id": 1,
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "type": "bluestore"
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     },
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_id": 2,
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:         "type": "bluestore"
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]:     }
Dec 05 01:47:19 compute-0 quizzical_nobel[411692]: }
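The container's stdout above is a single JSON document spread across journal lines, keyed by OSD UUID. A sketch of consuming it once reassembled; `raw_list_json` is an excerpt (one OSD of the three shown above):

```python
# Map OSD id -> backing device from `ceph-volume raw list --format json`
# output, as emitted by the quizzical_nobel container above.
import json

raw_list_json = '''
{
    "8c4de221-4fda-4bb1-b794-fc4329742186": {
        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
        "type": "bluestore"
    }
}
'''

devices = json.loads(raw_list_json)
by_osd = {v["osd_id"]: v["device"] for v in devices.values()}
print(by_osd)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
```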
Dec 05 01:47:19 compute-0 systemd[1]: libpod-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Deactivated successfully.
Dec 05 01:47:19 compute-0 systemd[1]: libpod-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Consumed 1.207s CPU time.
Dec 05 01:47:19 compute-0 podman[411747]: 2025-12-05 01:47:19.540835033 +0000 UTC m=+0.051319393 container died 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:47:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f-merged.mount: Deactivated successfully.
Dec 05 01:47:19 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1996216443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:19 compute-0 podman[411747]: 2025-12-05 01:47:19.654536789 +0000 UTC m=+0.165021129 container remove 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 01:47:19 compute-0 systemd[1]: libpod-conmon-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Deactivated successfully.
Dec 05 01:47:19 compute-0 sudo[411513]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:47:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:47:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 12ce805b-c793-4957-8617-17fd90948acd does not exist
Dec 05 01:47:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev be294e56-88cb-41ac-bac2-7ad0ac4ad1a7 does not exist
Dec 05 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.791 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.791 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.792 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.792 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:19 compute-0 sudo[411762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:47:19 compute-0 sudo[411762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:19 compute-0 sudo[411762]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:20 compute-0 sudo[411799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:47:20 compute-0 podman[411787]: 2025-12-05 01:47:20.047697779 +0000 UTC m=+0.117908335 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 05 01:47:20 compute-0 sudo[411799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:47:20 compute-0 sudo[411799]: pam_unix(sudo:session): session closed for user root
Dec 05 01:47:20 compute-0 nova_compute[349548]: 2025-12-05 01:47:20.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:47:20 compute-0 podman[411786]: 2025-12-05 01:47:20.077094265 +0000 UTC m=+0.150866741 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:47:20 compute-0 ceph-mon[192914]: pgmap v1135: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:47:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:22 compute-0 ceph-mon[192914]: pgmap v1136: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:24 compute-0 ceph-mon[192914]: pgmap v1137: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:24 compute-0 podman[411846]: 2025-12-05 01:47:24.718860376 +0000 UTC m=+0.122456833 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:47:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
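The "pg target" figures above follow directly from the logged usage ratios. A sketch reproducing them, assuming 3 OSDs (per the raw list earlier) and the default mon_target_pg_per_osd of 100; the pg_num_min/max clamps are omitted:

```python
# Raw capacity-based target the autoscaler logs above:
# target = usage_ratio * bias * mon_target_pg_per_osd * num_osds.
# Ceph then rounds to a power of two and, by default, only resizes a
# pool when the result is off by roughly a factor of three, which is
# why most pools here stay at their current 32 PGs.
def pg_target(usage_ratio, bias, num_osds=3, target_per_osd=100):
    return usage_ratio * bias * target_per_osd * num_osds

print(pg_target(0.00025334537995702286, 1.0))  # ~0.07600 ('images')
print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061 ('cephfs.cephfs.meta')
```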
Dec 05 01:47:26 compute-0 ceph-mon[192914]: pgmap v1138: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:28 compute-0 ceph-mon[192914]: pgmap v1139: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:29 compute-0 ceph-mon[192914]: pgmap v1140: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:29 compute-0 podman[158197]: time="2025-12-05T01:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:47:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:47:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
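These GET requests are the podman_exporter polling the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). A sketch of the same query from Python using only the standard library; the endpoint path is copied from the log, and the `Names`/`State` keys are the usual libpod list-containers fields:

```python
# Query the libpod API over its unix socket, mirroring the GET
# /v4.9.3/libpod/containers/json?all=true request logged above.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])
```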
Dec 05 01:47:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:47:31 compute-0 podman[411863]: 2025-12-05 01:47:31.705729869 +0000 UTC m=+0.122487984 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:47:31 compute-0 podman[411864]: 2025-12-05 01:47:31.716144341 +0000 UTC m=+0.121371832 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:47:31 compute-0 podman[411871]: 2025-12-05 01:47:31.737271225 +0000 UTC m=+0.115020094 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 01:47:31 compute-0 podman[411869]: 2025-12-05 01:47:31.758750599 +0000 UTC m=+0.149775731 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:47:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:31.805 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:47:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:31.806 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:47:32 compute-0 ceph-mon[192914]: pgmap v1141: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.589319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252589357, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 506, "total_data_size": 1209092, "memory_usage": 1235512, "flush_reason": "Manual Compaction"}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252602254, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 907396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22943, "largest_seqno": 24101, "table_properties": {"data_size": 902752, "index_size": 1720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13993, "raw_average_key_size": 18, "raw_value_size": 890981, "raw_average_value_size": 1200, "num_data_blocks": 77, "num_entries": 742, "num_filter_entries": 742, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899177, "oldest_key_time": 1764899177, "file_creation_time": 1764899252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13035 microseconds, and 7365 cpu microseconds.
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.602350) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 907396 bytes OK
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.602376) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604790) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604813) EVENT_LOG_v1 {"time_micros": 1764899252604806, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604836) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1202657, prev total WAL file size 1202657, number of live WAL files 2.
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.606053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(886KB)], [53(8932KB)]
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252606088, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10054649, "oldest_snapshot_seqno": -1}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4425 keys, 6961519 bytes, temperature: kUnknown
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252670552, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 6961519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6932304, "index_size": 17073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 110889, "raw_average_key_size": 25, "raw_value_size": 6852390, "raw_average_value_size": 1548, "num_data_blocks": 711, "num_entries": 4425, "num_filter_entries": 4425, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.671107) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 6961519 bytes
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.674868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(18.8) write-amplify(7.7) OK, records in: 5432, records dropped: 1007 output_compression: NoCompression
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.675002) EVENT_LOG_v1 {"time_micros": 1764899252674983, "job": 28, "event": "compaction_finished", "compaction_time_micros": 64573, "compaction_time_cpu_micros": 37042, "output_level": 6, "num_output_files": 1, "total_output_size": 6961519, "num_input_records": 5432, "num_output_records": 4425, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
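[annotation] The amplification figures in the summary line above follow directly from this event payload: read-write-amplify = (input_data_size + total_output_size) / L0 input = (10054649 + 6961519) / 907396 ≈ 18.8, and write-amplify = total_output_size / L0 input = 6961519 / 907396 ≈ 7.7, matching the reported 18.8 and 7.7.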
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252675543, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252679180, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.605748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
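[annotation] Every EVENT_LOG_v1 record ceph-mon emits is a complete JSON object on a single line, which makes the flush/compaction history above easy to mine. A small self-contained sketch (the file name is a placeholder):

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path):
        # Yield each EVENT_LOG_v1 payload embedded in a journal or mon log,
        # including those prefixed with "(Original Log Time ...)".
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    # e.g. total SST bytes written in this window:
    # sum(e["file_size"] for e in rocksdb_events("mon.log")
    #     if e["event"] == "table_file_creation")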
Dec 05 01:47:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:34 compute-0 ceph-mon[192914]: pgmap v1142: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:36 compute-0 ceph-mon[192914]: pgmap v1143: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:37 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:37.808 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:47:38 compute-0 ceph-mon[192914]: pgmap v1144: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:40 compute-0 ceph-mon[192914]: pgmap v1145: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:42 compute-0 ceph-mon[192914]: pgmap v1146: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:44 compute-0 ceph-mon[192914]: pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:47:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:47:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:47:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:47:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:47:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
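[annotation] These audit entries are the server side of clients polling pool capacity; nova does the same later in this capture by shelling out to the CLI rather than using librados. A sketch reproducing the exact query, with the client id and conf path copied from the nova command below:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same mon_command the audit log shows being dispatched as
        # {"prefix":"df", "format":"json"}.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        return json.loads(out)

    # Top-level keys include cluster-wide "stats" and per-pool "pools";
    # exact field names vary by Ceph release, so treat them as assumptions.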
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:47:46 compute-0 ceph-mon[192914]: pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:48 compute-0 ceph-mon[192914]: pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:48 compute-0 podman[411950]: 2025-12-05 01:47:48.721818581 +0000 UTC m=+0.131970041 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:47:48 compute-0 podman[411951]: 2025-12-05 01:47:48.73282383 +0000 UTC m=+0.135351495 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:47:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:49 compute-0 ceph-mon[192914]: pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:50 compute-0 podman[411989]: 2025-12-05 01:47:50.717255554 +0000 UTC m=+0.127766522 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec 05 01:47:50 compute-0 podman[411990]: 2025-12-05 01:47:50.762946938 +0000 UTC m=+0.164516835 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:47:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:52 compute-0 ceph-mon[192914]: pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.397 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.398 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.431 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.561 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.562 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
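[annotation] The Acquiring/acquired/released triplets around "compute_resources" come from oslo.concurrency's lockutils wrapper (note the lockutils.py source references), not from nova-specific code. A minimal sketch of the same primitive; the function name is illustrative, not nova's actual call site:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        # Body runs with the named in-process lock held; lockutils emits the
        # DEBUG "Acquiring"/"acquired"/"released" lines seen above, including
        # the waited/held timings.
        pass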
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.575 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.576 349552 INFO nova.compute.claims [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Claim successful on node compute-0.ctlplane.example.com
Dec 05 01:47:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.702 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:47:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279624068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.202 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.218 349552 DEBUG nova.compute.provider_tree [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.237 349552 DEBUG nova.scheduler.client.report [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
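[annotation] The placement capacity implied by that inventory is worth spelling out: VCPU 8 × allocation_ratio 4.0 = 32 schedulable vCPUs, MEMORY_MB (7680 − 512 reserved) × 1.0 = 7168 MB, and DISK_GB 59 × 0.9 = 53.1 GB, consistent with the claim above succeeding immediately.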
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.269 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.270 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 01:47:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.324 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.325 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.356 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.398 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 01:47:53 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1279624068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.527 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.529 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.530 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating image(s)
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.587 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.652 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.713 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.723 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.725 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.022 349552 DEBUG nova.virt.libvirt.imagebackend [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.229 349552 WARNING oslo_policy.policy [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 05 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.230 349552 WARNING oslo_policy.policy [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 05 01:47:54 compute-0 ceph-mon[192914]: pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:47:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.371 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Successfully created port: 68143c81-65a4-4ed0-8902-dbe0c8d89224 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.664 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:55 compute-0 podman[412103]: 2025-12-05 01:47:55.732534892 +0000 UTC m=+0.135760827 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.765 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.767 349552 DEBUG nova.virt.images [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] aa58c1e9-bdcc-4e60-9cee-eaeee0741251 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.769 349552 DEBUG nova.privsep.utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.770 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.971 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.983 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.066 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.069 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.116 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.124 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
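[annotation] The image-cache fill the last several nova lines trace condenses to three commands, all quoted verbatim in this log. A sketch chaining them with the standard library (paths and ids copied from the lines above; nova additionally wraps qemu-img in oslo_concurrency.prlimit address-space/CPU caps, omitted here):

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42"

    def qemu_img_info(path):
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"])
        return json.loads(out)

    # The Glance image arrived as qcow2; RBD-backed nova wants raw.
    if qemu_img_info(BASE + ".part")["format"] == "qcow2":
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             BASE + ".part", BASE + ".converted"])
    # nova renames *.converted to the bare base path before importing (omitted).

    # Push the cached raw image into the vms pool as the instance root disk.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE,
         "b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])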
Dec 05 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 05 01:47:56 compute-0 ceph-mon[192914]: pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 05 01:47:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 05 01:47:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.567 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Successfully updated port: 68143c81-65a4-4ed0-8902-dbe0c8d89224 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.753 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.090 349552 DEBUG nova.compute.manager [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.090 349552 DEBUG nova.compute.manager [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing instance network info cache due to event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.091 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:47:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 204 B/s wr, 8 op/s
Dec 05 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 05 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 05 01:47:57 compute-0 ceph-mon[192914]: osdmap e124: 3 total, 3 up, 3 in
Dec 05 01:47:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 05 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.802 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.960 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.229 349552 DEBUG nova.objects.instance [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.300 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.359 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.368 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.369 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.370 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.416 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.417 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.459 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:58 compute-0 ceph-mon[192914]: pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 204 B/s wr, 8 op/s
Dec 05 01:47:58 compute-0 ceph-mon[192914]: osdmap e125: 3 total, 3 up, 3 in
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.461 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.501 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.513 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.805 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.838 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.840 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance network_info: |[{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.841 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.842 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:47:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 10 op/s
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.528 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.719 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.719 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Ensure instance console log exists: /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.720 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.720 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.721 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.724 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start _get_guest_xml network_info=[{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.731 349552 WARNING nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.738 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.738 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.745 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.745 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
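[Note] The v1 probe fails and the v2 probe succeeds because this host runs the unified cgroup hierarchy only. On cgroups v2 the enabled controllers are listed in a single file at the hierarchy root; a minimal equivalent check (the helper name is ours, the path is the standard kernel location):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # cgroup v2 lists every enabled controller, space-separated,
        # in one file at the root of the hierarchy.
        controllers = Path(root, 'cgroup.controllers').read_text().split()
        return 'cpu' in controllers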
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
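[Note] With no hw:cpu_* extra specs or image properties, the preferred topology collapses to 0:0:0 and the limits to 65536 each, so the only candidate whose sockets*cores*threads product covers 1 vCPU is 1:1:1. An illustrative enumeration (not Nova's implementation) showing why exactly one topology survives:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) combinations that exactly
        # provide `vcpus`; for vcpus=1 this yields only (1, 1, 1).
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]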
Dec 05 01:47:59 compute-0 podman[158197]: time="2025-12-05T01:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.753 349552 DEBUG nova.privsep.utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
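[Note] Direct I/O support on the instances directory is verified empirically: open a scratch file with O_DIRECT and attempt one aligned write. A simplified sketch of that probe (file name and alignment are illustrative):

    import mmap
    import os

    def supports_direct_io(dirpath, align=4096):
        testfile = os.path.join(dirpath, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            # O_DIRECT requires an aligned buffer; mmap returns page-aligned memory.
            buf = mmap.mmap(-1, align)
            os.write(fd, buf)
            return True
        except OSError:
            return False
        finally:
            if fd is not None:
                os.close(fd)
            if os.path.exists(testfile):
                os.unlink(testfile)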
Dec 05 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.753 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:47:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 01:47:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec 05 01:48:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:48:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742020724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.264 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.266 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:00 compute-0 ceph-mon[192914]: pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 10 op/s
Dec 05 01:48:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3742020724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:48:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644237013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.741 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.791 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.803 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:48:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870861907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.321 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
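[Note] Each of these ceph mon dump round trips feeds the monitor address list that ends up in the <host name=... port=...> elements of the disk XML below. A sketch of the lookup (field names follow the mon dump JSON output, which can vary slightly across Ceph releases):

    import json
    import subprocess

    def get_mon_hosts():
        out = subprocess.run(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
            capture_output=True, text=True, check=True).stdout
        # Each entry carries e.g. "public_addr": "192.168.122.100:6789/0".
        return [m['public_addr'].split('/')[0]
                for m in json.loads(out)['mons']]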
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.323 349552 DEBUG nova.virt.libvirt.vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:47:53Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.323 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.324 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:48:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 25 MiB data, 167 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 438 KiB/s wr, 43 op/s
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.328 349552 DEBUG nova.objects.instance [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.350 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] End _get_guest_xml xml=<domain type="kvm">
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <uuid>b69a0e24-1bc4-46a5-92d7-367c1efd53df</uuid>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <name>instance-00000001</name>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <memory>524288</memory>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <metadata>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:name>test_0</nova:name>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 01:47:59</nova:creationTime>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:flavor name="m1.small">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:memory>512</nova:memory>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:ephemeral>1</nova:ephemeral>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <nova:port uuid="68143c81-65a4-4ed0-8902-dbe0c8d89224">
Dec 05 01:48:01 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="192.168.0.48" ipVersion="4"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </metadata>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <system>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="serial">b69a0e24-1bc4-46a5-92d7-367c1efd53df</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="uuid">b69a0e24-1bc4-46a5-92d7-367c1efd53df</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </system>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <os>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </os>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <features>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <apic/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </features>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </clock>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </source>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </source>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <target dev="vdb" bus="virtio"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </source>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:48:01 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:0c:12:24"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <target dev="tap68143c81-65"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/console.log" append="off"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </serial>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <video>
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </video>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 01:48:01 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 01:48:01 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 01:48:01 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:48:01 compute-0 nova_compute[349548]: </domain>
Dec 05 01:48:01 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
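[Note] The rendered XML wires all three RBD-backed disks to the same Ceph secret and pins the q35 machine type requested via image_hw_machine_type. Once rendered, the driver hands it to libvirt; in python-libvirt terms that is roughly the following (the file name is a stand-in for the XML dumped above, and starting paused until networking is wired is typical but assumed here):

    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('instance-00000001.xml') as f:  # the domain XML dumped above
        xml = f.read()
    dom = conn.defineXML(xml)                 # persist the domain config
    # Start paused so the VIF can be plugged before vCPUs run.
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)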
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.350 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Preparing to wait for external event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
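[Note] Before plugging the port, the manager registers a waiter for network-vif-plugged so the Neutron/OVN callback cannot race the guest launch. The shape of that prepare-then-wait pattern, reduced to standard-library primitives (names are ours, not Nova's):

    import threading

    _events = {}

    def prepare_event(instance_uuid, name):
        ev = threading.Event()
        _events[(instance_uuid, name)] = ev
        return ev

    def deliver_event(instance_uuid, name):
        # Called when Neutron reports the VIF is wired up.
        ev = _events.pop((instance_uuid, name), None)
        if ev is not None:
            ev.set()

    ev = prepare_event('b69a0e24-1bc4-46a5-92d7-367c1efd53df',
                       'network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224')
    # ... plug the VIF, define and launch the domain ...
    ev.wait(timeout=300)  # the spawn is failed if the event never arrives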
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.352 349552 DEBUG nova.virt.libvirt.vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:47:53Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.352 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.353 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.353 349552 DEBUG os_vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.388 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.404 349552 INFO oslo.privsep.daemon [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp4qlrv0yp/privsep.sock']
Dec 05 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.465 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated VIF entry in instance network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.466 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:48:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3644237013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/870861907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.487 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.218 349552 INFO oslo.privsep.daemon [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Spawned new privsep daemon via rootwrap
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.089 412465 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.096 412465 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.100 412465 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.100 412465 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412465
Dec 05 01:48:02 compute-0 ceph-mon[192914]: pgmap v1158: 321 pgs: 321 active+clean; 25 MiB data, 167 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 438 KiB/s wr, 43 op/s
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.591 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.592 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68143c81-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.593 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68143c81-65, col_values=(('external_ids', {'iface-id': '68143c81-65a4-4ed0-8902-dbe0c8d89224', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:12:24', 'vm-uuid': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:48:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.597 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:02 compute-0 NetworkManager[49092]: <info>  [1764899282.5975] manager: (tap68143c81-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.610 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.611 349552 INFO os_vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65')
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.678 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:0c:12:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.682 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Using config drive
Dec 05 01:48:02 compute-0 podman[412469]: 2025-12-05 01:48:02.683036112 +0000 UTC m=+0.101681919 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:48:02 compute-0 podman[412472]: 2025-12-05 01:48:02.701195502 +0000 UTC m=+0.098084808 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec 05 01:48:02 compute-0 podman[412470]: 2025-12-05 01:48:02.716265526 +0000 UTC m=+0.122639228 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.731 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:48:02 compute-0 podman[412471]: 2025-12-05 01:48:02.746444324 +0000 UTC m=+0.145632984 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:48:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.373 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating config drive at /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.382 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vnvoxp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.530 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vnvoxp" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.587 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.599 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.927 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.929 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deleting local config drive /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config because it was imported into RBD.
Dec 05 01:48:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 01:48:04 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 01:48:04 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 05 01:48:04 compute-0 kernel: tap68143c81-65: entered promiscuous mode
Dec 05 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.1463] manager: (tap68143c81-65): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec 05 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00027|binding|INFO|Claiming lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 for this chassis.
Dec 05 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00028|binding|INFO|68143c81-65a4-4ed0-8902-dbe0c8d89224: Claiming fa:16:3e:0c:12:24 192.168.0.48
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.151 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.170 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.185 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:12:24 192.168.0.48'], port_security=['fa:16:3e:0c:12:24 192.168.0.48'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.48/24', 'neutron:device_id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=68143c81-65a4-4ed0-8902-dbe0c8d89224) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.187 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 68143c81-65a4-4ed0-8902-dbe0c8d89224 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis
Dec 05 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.189 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.192 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpft85vvy5/privsep.sock']
Dec 05 01:48:04 compute-0 systemd-udevd[412648]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.2522] device (tap68143c81-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.2634] device (tap68143c81-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 01:48:04 compute-0 systemd-machined[138700]: New machine qemu-1-instance-00000001.
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.280 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:04 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 05 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00029|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 ovn-installed in OVS
Dec 05 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00030|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 up in Southbound
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.290 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:04 compute-0 ceph-mon[192914]: pgmap v1159: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Dec 05 01:48:04 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 01:48:04 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.910 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899284.9092321, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.911 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Started (Lifecycle Event)
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.967 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.975 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899284.9093807, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.975 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Paused (Lifecycle Event)
Dec 05 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.996 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.001 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.002 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpft85vvy5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.005 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.859 412744 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.865 412744 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.869 412744 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.870 412744 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412744
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.008 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6435c739-6104-49d0-ad72-c5e8e65ee199]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.029 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 01:48:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.360 349552 DEBUG nova.compute.manager [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.361 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.362 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.363 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.364 349552 DEBUG nova.compute.manager [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Processing event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.365 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.372 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.373 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899285.3715525, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.373 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Resumed (Lifecycle Event)
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.392 349552 INFO nova.virt.libvirt.driver [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance spawned successfully.
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.393 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.405 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.420 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.431 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.432 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.438 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.440 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.446 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.448 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.453 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.527 349552 INFO nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 12.00 seconds to spawn the instance on the hypervisor.
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.528 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.633 349552 INFO nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 13.12 seconds to build instance.
Dec 05 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.678 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.699 412744 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.700 412744 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.701 412744 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:06 compute-0 nova_compute[349548]: 2025-12-05 01:48:06.308 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.364 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c817f9d-0be6-411c-983a-21a0ee91a1ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.366 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49f7d2f1-f1 in ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.369 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49f7d2f1-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.369 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ca746c4f-833d-4d5f-898b-191c6811646b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.374 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01e1c7-7be7-48e0-bdb3-3935f187443f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.420 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[982b3858-14b6-42f2-bb91-882ad80b3ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.462 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d346d983-f575-4cbf-9874-04779dc8c4c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.465 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpq2x94x5c/privsep.sock']
Dec 05 01:48:06 compute-0 ceph-mon[192914]: pgmap v1160: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.246 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.248 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpq2x94x5c/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.127 412758 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.134 412758 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.138 412758 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.138 412758 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412758
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.254 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[30d33942-75d9-486f-9671-55ba3c07e2ef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.495 349552 DEBUG nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.495 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.497 349552 WARNING nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received unexpected event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with vm_state active and task_state None.
Dec 05 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 05 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 05 01:48:07 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.352 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4edc2d-cbfa-4168-a4b6-34ae2baa4e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.386 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3e44a0ba-0f1f-4077-b2f6-8bd65ba65715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.3889] manager: (tap49f7d2f1-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.428 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[bfafd821-313e-4785-913d-28f5994ede25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.432 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ff92f567-6d06-419a-954d-9e8956e94c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 systemd-udevd[412771]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.4803] device (tap49f7d2f1-f0): carrier: link connected
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.488 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[30a991ab-53e9-4445-867c-2ce9a782e927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.518 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0307eb-7b02-4e2e-808f-bd8e22392c71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412788, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.543 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b6e3c0-09e5-4427-b688-9e85f9387c01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:8a33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537514, 'tstamp': 537514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412789, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.568 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97cb49d7-4daf-431b-8a94-4cd34fc93031]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412790, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
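
The two privsep replies above are pyroute2 netlink messages (an RTM_NEWADDR, then an RTM_NEWLINK for tap49f7d2f1-f1) that the privileged helper serialized back to the agent from inside the ovnmeta- namespace. A minimal sketch of reading the same state directly with pyroute2, assuming that package is installed and reusing the namespace, interface name, and ifindex (2) from the log:

    # Sketch only: query link and address state inside the OVN metadata
    # namespace, mirroring the privsep replies above. Requires pyroute2
    # and root (the namespace belongs to the metadata agent).
    from pyroute2 import NetNS

    NS = 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183'  # from the log
    IFNAME = 'tap49f7d2f1-f1'                            # from the log

    with NetNS(NS) as ns:
        for link in ns.get_links():          # RTM_NEWLINK messages
            if link.get_attr('IFLA_IFNAME') == IFNAME:
                print(IFNAME, link['state'],
                      link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():           # RTM_NEWADDR messages
            if addr['index'] == 2:           # ifindex from the reply
                print(addr.get_attr('IFA_ADDRESS'),
                      '/', addr['prefixlen'])
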
Dec 05 01:48:08 compute-0 ceph-mon[192914]: pgmap v1161: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec 05 01:48:08 compute-0 ceph-mon[192914]: osdmap e126: 3 total, 3 up, 3 in
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.614 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b00aefed-bc7e-404f-a94a-2bf14a8c92be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.710 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c6050a-bb8e-4d57-a8ef-440d26b15487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.713 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.714 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.714 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.718 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:08 compute-0 kernel: tap49f7d2f1-f0: entered promiscuous mode
Dec 05 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.7246] manager: (tap49f7d2f1-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec 05 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.726 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.731 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
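
The three ovsdbapp transactions logged above (DelPortCommand, AddPortCommand, DbSetCommand) correspond to the del_port/add_port/db_set helpers of the Open_vSwitch schema API: remove the tap port from br-ex if present, plug it into br-int, then stamp the Interface with the iface-id OVN binds on. A sketch of the same sequence; the connection setup here is an assumption, since the agent reuses its own long-lived IDL connection rather than building one per call:

    # Sketch only: replay the three logged OVSDB transactions through
    # ovsdbapp's Open_vSwitch schema API. Socket path is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # DelPortCommand(port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True)
    api.del_port('tap49f7d2f1-f0', bridge='br-ex',
                 if_exists=True).execute(check_error=True)
    # AddPortCommand(bridge=br-int, port=tap49f7d2f1-f0, may_exist=True)
    api.add_port('br-int', 'tap49f7d2f1-f0',
                 may_exist=True).execute(check_error=True)
    # DbSetCommand(table=Interface, record=tap49f7d2f1-f0, external_ids)
    api.db_set(
        'Interface', 'tap49f7d2f1-f0',
        ('external_ids',
         {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),
    ).execute(check_error=True)
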
Dec 05 01:48:08 compute-0 ovn_controller[89286]: 2025-12-05T01:48:08Z|00031|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec 05 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.739 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.741 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[15ceeff3-e992-47d2-aa4c-d52722bf6123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.743 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: global
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.745 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'env', 'PROCESS_TAG=haproxy-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
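
Read together with the "Unable to access ...pid.haproxy; Error: [Errno 2]" message at 01:48:08.739, this is the agent's probe-then-spawn pattern: the ENOENT is expected on first start, because the agent checks for an existing proxy pidfile before rendering the config above and execing haproxy inside the namespace. A rough sketch of that sequence using the paths from the log; the real agent goes through neutron-rootwrap, a PROCESS_TAG environment variable, and a haproxy wrapper script rather than calling the binary directly:

    # Sketch only: probe for a live proxy, and spawn one if absent.
    # Paths are copied from the log; must run as root to enter the netns.
    import subprocess

    NETNS = 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183'
    PIDFILE = ('/var/lib/neutron/external/pids/'
               '49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy')
    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            '49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.conf')

    def read_pid(path):
        try:
            with open(path) as f:      # mirrors get_value_from_file
                return int(f.read().strip())
        except FileNotFoundError:      # the Errno 2 logged above
            return None

    if read_pid(PIDFILE) is None:      # no proxy yet: start one
        subprocess.check_call(['ip', 'netns', 'exec', NETNS,
                               'haproxy', '-f', CONF])
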
Dec 05 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.323516519 +0000 UTC m=+0.122749932 container create 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 01:48:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec 05 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.250009872 +0000 UTC m=+0.049243335 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 01:48:09 compute-0 systemd[1]: Started libpod-conmon-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope.
Dec 05 01:48:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a820a613b1e07df1e33c546156b70839ccd983fd42dcef40eb3db4bae4f3e023/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.492791206 +0000 UTC m=+0.292024679 container init 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.508617521 +0000 UTC m=+0.307850924 container start 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 01:48:09 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : New worker (412844) forked
Dec 05 01:48:09 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : Loading success.
Dec 05 01:48:10 compute-0 ceph-mon[192914]: pgmap v1163: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec 05 01:48:11 compute-0 nova_compute[349548]: 2025-12-05 01:48:11.313 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 460 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Dec 05 01:48:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:12 compute-0 nova_compute[349548]: 2025-12-05 01:48:12.605 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:12 compute-0 ceph-mon[192914]: pgmap v1164: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 460 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Dec 05 01:48:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 72 op/s
Dec 05 01:48:14 compute-0 ceph-mon[192914]: pgmap v1165: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 72 op/s
Dec 05 01:48:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 61 op/s
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:48:16
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3531] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.351 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:16 compute-0 ovn_controller[89286]: 2025-12-05T01:48:16Z|00032|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3575] device (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3746] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3785] device (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3868] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3911] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3937] device (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3961] device (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 05 01:48:16 compute-0 ovn_controller[89286]: 2025-12-05T01:48:16Z|00033|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.419 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:16.528 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:48:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:16.530 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.534 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.643 349552 DEBUG nova.compute.manager [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.643 349552 DEBUG nova.compute.manager [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing instance network info cache due to event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.644 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.644 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.645 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:48:16 compute-0 ceph-mon[192914]: pgmap v1166: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 61 op/s
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:48:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 55 op/s
Dec 05 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.608 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:18 compute-0 nova_compute[349548]: 2025-12-05 01:48:18.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:18 compute-0 nova_compute[349548]: 2025-12-05 01:48:18.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:18 compute-0 ceph-mon[192914]: pgmap v1167: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 55 op/s
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.171 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated VIF entry in instance network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.172 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
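
The network_info payload nova writes into instance_info_cache above is plain JSON: the fixed address, its floating IP, and the OVN binding details all live inside the single VIF entry. A small sketch of walking that structure, with the literal trimmed to just the fields used here:

    # Sketch only: extract addresses from the instance_info_cache
    # payload logged above (literal trimmed to the fields we touch).
    network_info = [{
        "id": "68143c81-65a4-4ed0-8902-dbe0c8d89224",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.48",
                     "floating_ips": [{"address": "192.168.122.212"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])
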
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.215 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.328 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.328 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:48:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 47 op/s
Dec 05 01:48:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:19.535 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
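
This DbSetCommand is the tail end of the SbGlobalUpdateEvent matched at 01:48:16.528: after the 3-second delay noted there, the agent acknowledges nb_cfg=4 by copying it into its Chassis_Private external_ids, which is how OVN tells the metadata agent is alive. A sketch of the same write over an OVN_Southbound ovsdbapp connection; the socket path is an assumption (this deployment actually connects over TLS, per the ovndb certificates mounted into the container later in the log):

    # Sketch only: ack nb_cfg into Chassis_Private, as logged above.
    # Connection string is an assumption; record UUID is from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    sb.db_set('Chassis_Private', '8dd76c1c-ab01-42af-b35e-2e870841b6ad',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),
              if_exists=True).execute(check_error=True)
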
Dec 05 01:48:19 compute-0 podman[412856]: 2025-12-05 01:48:19.828101649 +0000 UTC m=+0.233564435 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:48:19 compute-0 ceph-mon[192914]: pgmap v1168: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 47 op/s
Dec 05 01:48:19 compute-0 podman[412857]: 2025-12-05 01:48:19.842832473 +0000 UTC m=+0.245206202 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:48:20 compute-0 sudo[412897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:20 compute-0 sudo[412897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:20 compute-0 sudo[412897]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:20 compute-0 sudo[412922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:48:20 compute-0 sudo[412922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:20 compute-0 sudo[412922]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:20 compute-0 sudo[412947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:20 compute-0 sudo[412947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:20 compute-0 sudo[412947]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:20 compute-0 sudo[412972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:48:20 compute-0 sudo[412972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:21 compute-0 nova_compute[349548]: 2025-12-05 01:48:21.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 05 01:48:21 compute-0 sudo[412972]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ee7136f9-95a2-4986-81c3-45835f3aaf37 does not exist
Dec 05 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aa8bd3f3-9d0d-4de5-9d5b-a0a667ab5e28 does not exist
Dec 05 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81d1ad86-45f6-4b57-a160-50329b987400 does not exist
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:48:21 compute-0 sudo[413027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:21 compute-0 sudo[413027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:21 compute-0 sudo[413027]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:21 compute-0 podman[413047]: 2025-12-05 01:48:21.720294662 +0000 UTC m=+0.123639406 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm)
Dec 05 01:48:21 compute-0 sudo[413069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:48:21 compute-0 sudo[413069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:21 compute-0 sudo[413069]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:21 compute-0 podman[413051]: 2025-12-05 01:48:21.750553052 +0000 UTC m=+0.149220115 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:48:21 compute-0 sudo[413113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:21 compute-0 sudo[413113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:21 compute-0 sudo[413113]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:21 compute-0 sudo[413139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:48:21 compute-0 sudo[413139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.018 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.032 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.034 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.035 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.037 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.061 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.064 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.066 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.067 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.068 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:22 compute-0 ceph-mon[192914]: pgmap v1169: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:48:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:48:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472524412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.594 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
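
The "ceph df" call above belongs to ComputeManager.update_available_resource: the RBD image backend shells out through oslo.concurrency to size the storage pool (0.527s here). A sketch of the same call with the arguments copied from the log; treating total_avail_bytes as the field of interest is an assumption about the ceph df JSON layout:

    # Sketch only: the resource-audit command logged above, issued via
    # the same oslo.concurrency helper nova uses.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide free capacity, per the assumed ceph df JSON schema.
    print(stats['stats']['total_avail_bytes'])
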
Dec 05 01:48:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.612 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.614668219 +0000 UTC m=+0.095506085 container create df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.574109779 +0000 UTC m=+0.054947695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.687 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.688 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.689 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:48:22 compute-0 systemd[1]: Started libpod-conmon-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope.
Dec 05 01:48:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.780069488 +0000 UTC m=+0.260907314 container init df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.799928336 +0000 UTC m=+0.280766172 container start df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.805080241 +0000 UTC m=+0.285918097 container attach df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:48:22 compute-0 silly_brattain[413237]: 167 167
Dec 05 01:48:22 compute-0 systemd[1]: libpod-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope: Deactivated successfully.
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.816873742 +0000 UTC m=+0.297711608 container died df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc0bc3565cc8b7ba7657194d6b5207e8024d58085b7ef9b3fbcef3703dbf568-merged.mount: Deactivated successfully.
Dec 05 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.902409906 +0000 UTC m=+0.383247772 container remove df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec 05 01:48:22 compute-0 systemd[1]: libpod-conmon-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope: Deactivated successfully.
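The lines from 01:48:22 above trace one short-lived cephadm helper container (silly_brattain) through a complete lifecycle: conmon scope started, container init, start, attach, container output, died, overlay unmount, remove, scope deactivated. Any one-shot container run with --rm produces the same podman event sequence; a sketch using the image digest from these logs (the 'true' command is a placeholder, not what cephadm actually ran):

    import subprocess

    # Emits the same podman events seen above (init, start, attach, died,
    # remove); nothing is left behind once conmon's scope deactivates.
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "true"],
        check=True,
    )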
Dec 05 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.163495914 +0000 UTC m=+0.064128353 container create 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.221 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.223 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4056MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.224 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.224 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
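The Acquiring/acquired pair above, together with the matching 'released ... held 1.519s' line at 01:48:24, is oslo.concurrency's standard lock tracing: the resource tracker serializes its periodic _update_available_resource pass behind the in-process 'compute_resources' lock. A simplified sketch of the pattern (the real nova code wraps a method, but the decorator is the same oslo API):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Everything here runs under the lock whose acquire/release the
        # DEBUG lines above record, including the Placement update below.
        ...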
Dec 05 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.137151424 +0000 UTC m=+0.037783883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:23 compute-0 systemd[1]: Started libpod-conmon-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope.
Dec 05 01:48:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.301872713 +0000 UTC m=+0.202505252 container init 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.322307547 +0000 UTC m=+0.222939986 container start 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.327 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.327348689 +0000 UTC m=+0.227981148 container attach 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:48:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 36 op/s
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.359 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:23 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/472524412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:48:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638324648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.867 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
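The 'Running cmd'/'returned: 0 in 0.507s' pair is processutils tracing around the ceph df call nova's RBD image backend uses to size its DISK_GB inventory. A sketch of the same call; the JSON field names are assumed from ceph's df output format, not taken from this log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    # Cluster-wide totals, consistent with the '60 GiB / 60 GiB avail'
    # pgmap lines above.
    total_bytes = df['stats']['total_bytes']
    avail_bytes = df['stats']['total_avail_bytes']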
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.877 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.921 349552 ERROR nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [req-c40066fa-ef9c-4584-9a01-3e6b37f078e5] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID acf26aa2-2fef-4a53-8a44-6cfa2eb15d17.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-c40066fa-ef9c-4584-9a01-3e6b37f078e5"}]}
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.945 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.965 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.965 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.983 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.005 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
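The 409 at 01:48:23.921 is Placement's optimistic concurrency control: every inventory PUT carries the resource provider generation, a stale generation is rejected with placement.concurrent_update, and nova responds by refreshing inventories, aggregates and traits (the three 'Refreshing ...' lines above) before retrying, which succeeds at 01:48:24.718 with generation 3. A sketch of that protocol against the Placement REST API (session setup, auth and microversion headers assumed):

    import requests

    def put_inventories(session: requests.Session, base: str, rp_uuid: str,
                        inventories: dict, generation: int) -> int:
        """PUT inventories with the current provider generation; on a 409
        concurrent-update conflict, re-read the generation and retry once."""
        url = f"{base}/resource_providers/{rp_uuid}/inventories"
        body = {"resource_provider_generation": generation,
                "inventories": inventories}
        resp = session.put(url, json=body)
        if resp.status_code == 409:  # placement.concurrent_update, as logged
            body["resource_provider_generation"] = \
                session.get(url).json()["resource_provider_generation"]
            resp = session.put(url, json=body)
        resp.raise_for_status()
        return resp.json()["resource_provider_generation"]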
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.051 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:48:24 compute-0 ceph-mon[192914]: pgmap v1170: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 36 op/s
Dec 05 01:48:24 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1638324648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:24 compute-0 busy_clarke[413277]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:48:24 compute-0 busy_clarke[413277]: --> relative data size: 1.0
Dec 05 01:48:24 compute-0 busy_clarke[413277]: --> All data devices are unavailable
Dec 05 01:48:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:48:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810754302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:24 compute-0 systemd[1]: libpod-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Deactivated successfully.
Dec 05 01:48:24 compute-0 systemd[1]: libpod-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Consumed 1.205s CPU time.
Dec 05 01:48:24 compute-0 podman[413260]: 2025-12-05 01:48:24.585548532 +0000 UTC m=+1.486181011 container died 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.598 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.607 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924-merged.mount: Deactivated successfully.
Dec 05 01:48:24 compute-0 podman[413260]: 2025-12-05 01:48:24.6591352 +0000 UTC m=+1.559767639 container remove 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:48:24 compute-0 systemd[1]: libpod-conmon-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Deactivated successfully.
Dec 05 01:48:24 compute-0 sudo[413139]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.718 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updated inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.719 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.719 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.742 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.743 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:24 compute-0 sudo[413362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:24 compute-0 sudo[413362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:24 compute-0 sudo[413362]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:24 compute-0 sudo[413387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:48:24 compute-0 sudo[413387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:24 compute-0 sudo[413387]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:25 compute-0 sudo[413412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:25 compute-0 sudo[413412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:25 compute-0 sudo[413412]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:25 compute-0 sudo[413437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:48:25 compute-0 sudo[413437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:25 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1810754302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:48:25 compute-0 podman[413499]: 2025-12-05 01:48:25.846442441 +0000 UTC m=+0.095954368 container create 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:48:25 compute-0 podman[413499]: 2025-12-05 01:48:25.810610574 +0000 UTC m=+0.060122541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:25 compute-0 systemd[1]: Started libpod-conmon-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope.
Dec 05 01:48:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.000533922 +0000 UTC m=+0.250045829 container init 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.012111778 +0000 UTC m=+0.261623705 container start 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.019159476 +0000 UTC m=+0.268671383 container attach 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:48:26 compute-0 hardcore_hawking[413521]: 167 167
Dec 05 01:48:26 compute-0 systemd[1]: libpod-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope: Deactivated successfully.
Dec 05 01:48:26 compute-0 conmon[413521]: conmon 848710801ee4bb4447f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope/container/memory.events
Dec 05 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.025751421 +0000 UTC m=+0.275263368 container died 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 01:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73d82ef0dbed4c9b2c5acf5fbb5fc964e2f668b5cc6f3cbdcb6003eb80691c9-merged.mount: Deactivated successfully.
Dec 05 01:48:26 compute-0 podman[413513]: 2025-12-05 01:48:26.096003585 +0000 UTC m=+0.168595339 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
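The kepler health_status line above embeds config_data as a Python dict literal (single quotes), not JSON, so json.loads rejects it; ast.literal_eval parses it safely when extracting fields from such journal lines. A small sketch, with config_data abbreviated from the line above:

    import ast

    # Abbreviated copy of the config_data field logged above.
    config_data = ("{'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', "
                   "'healthcheck': {'test': '/openstack/healthcheck kepler'}}")
    cfg = ast.literal_eval(config_data)  # dict literal, so json.loads would fail
    print(cfg['healthcheck']['test'])    # -> /openstack/healthcheck kepler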
Dec 05 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.115001109 +0000 UTC m=+0.364512996 container remove 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:48:26 compute-0 systemd[1]: libpod-conmon-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope: Deactivated successfully.
Dec 05 01:48:26 compute-0 nova_compute[349548]: 2025-12-05 01:48:26.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.401556743 +0000 UTC m=+0.118586074 container create 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.349045217 +0000 UTC m=+0.066074628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:26 compute-0 ceph-mon[192914]: pgmap v1171: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:26 compute-0 systemd[1]: Started libpod-conmon-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope.
Dec 05 01:48:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.583421385 +0000 UTC m=+0.300450806 container init 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.607166392 +0000 UTC m=+0.324195753 container start 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.61386828 +0000 UTC m=+0.330897641 container attach 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002673989263853617 of space, bias 1.0, pg target 0.08021967791560852 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
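Each effective_target_ratio/Pool pair above is one pg_autoscaler evaluation: the raw PG target is the pool's share of raw capacity times its bias times the cluster PG budget, then quantized to a power of two (respecting the pool's minimum pg_num, hence 'quantized to 32' for near-zero targets) and only applied when it differs from the current value by a large factor. Assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs, the budget is 300, which reproduces the logged numbers:

    # PG budget assumed: 3 OSDs x 100 target PGs per OSD.
    budget = 3 * 100

    vms  = 0.0002673989263853617 * 1.0 * budget  # usage ratio x bias x budget
    meta = 5.087256625643029e-07 * 4.0 * budget  # metadata pools carry bias 4.0

    print(vms)   # ~0.08021967791560852, the logged target for 'vms'
    print(meta)  # 0.0006104707950771635, the logged target for 'cephfs.cephfs.meta'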
Dec 05 01:48:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:27 compute-0 kind_bell[413574]: {
Dec 05 01:48:27 compute-0 kind_bell[413574]:     "0": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:         {
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "devices": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "/dev/loop3"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             ],
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_name": "ceph_lv0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_size": "21470642176",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "name": "ceph_lv0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "tags": {
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_name": "ceph",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.crush_device_class": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.encrypted": "0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_id": "0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.vdo": "0"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             },
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "vg_name": "ceph_vg0"
Dec 05 01:48:27 compute-0 kind_bell[413574]:         }
Dec 05 01:48:27 compute-0 kind_bell[413574]:     ],
Dec 05 01:48:27 compute-0 kind_bell[413574]:     "1": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:         {
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "devices": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "/dev/loop4"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             ],
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_name": "ceph_lv1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_size": "21470642176",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "name": "ceph_lv1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "tags": {
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_name": "ceph",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.crush_device_class": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.encrypted": "0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_id": "1",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.vdo": "0"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             },
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "vg_name": "ceph_vg1"
Dec 05 01:48:27 compute-0 kind_bell[413574]:         }
Dec 05 01:48:27 compute-0 kind_bell[413574]:     ],
Dec 05 01:48:27 compute-0 kind_bell[413574]:     "2": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:         {
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "devices": [
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "/dev/loop5"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             ],
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_name": "ceph_lv2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_size": "21470642176",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "name": "ceph_lv2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "tags": {
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.cluster_name": "ceph",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.crush_device_class": "",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.encrypted": "0",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osd_id": "2",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:                 "ceph.vdo": "0"
Dec 05 01:48:27 compute-0 kind_bell[413574]:             },
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "type": "block",
Dec 05 01:48:27 compute-0 kind_bell[413574]:             "vg_name": "ceph_vg2"
Dec 05 01:48:27 compute-0 kind_bell[413574]:         }
Dec 05 01:48:27 compute-0 kind_bell[413574]:     ]
Dec 05 01:48:27 compute-0 kind_bell[413574]: }
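The kind_bell JSON above is the output of the 'ceph-volume ... lvm list --format json' invocation logged via sudo at 01:48:25: a map of OSD id to its logical volumes, with the same metadata carried both as a flat lv_tags string and as a parsed tags object. Each lv_size of 21470642176 bytes is about 20 GiB, so the three OSDs account for the 60 GiB cluster capacity in the pgmap lines. A sketch that reduces this output to an OSD inventory:

    import json

    def osd_inventory(lvm_list_json: str) -> dict:
        """Map osd_id -> (backing device, LV path, osd_fsid) from the
        'ceph-volume lvm list --format json' output shown above."""
        inventory = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv["type"] == "block":
                    inventory[int(osd_id)] = (lv["devices"][0],
                                              lv["lv_path"],
                                              lv["tags"]["ceph.osd_fsid"])
        return inventory

    # For the data above: {0: ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0', ...),
    #                      1: ('/dev/loop4', ...), 2: ('/dev/loop5', ...)}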
Dec 05 01:48:27 compute-0 systemd[1]: libpod-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope: Deactivated successfully.
Dec 05 01:48:27 compute-0 podman[413557]: 2025-12-05 01:48:27.493389329 +0000 UTC m=+1.210418690 container died 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570-merged.mount: Deactivated successfully.
Dec 05 01:48:27 compute-0 podman[413557]: 2025-12-05 01:48:27.596999522 +0000 UTC m=+1.314028863 container remove 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 01:48:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:27 compute-0 nova_compute[349548]: 2025-12-05 01:48:27.614 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:27 compute-0 systemd[1]: libpod-conmon-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope: Deactivated successfully.
Dec 05 01:48:27 compute-0 sudo[413437]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:27 compute-0 sudo[413593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:27 compute-0 sudo[413593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:27 compute-0 sudo[413593]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:27 compute-0 sudo[413618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:48:27 compute-0 sudo[413618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:27 compute-0 sudo[413618]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:28 compute-0 sudo[413643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:28 compute-0 sudo[413643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:28 compute-0 sudo[413643]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:28 compute-0 sudo[413668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:48:28 compute-0 sudo[413668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:28 compute-0 ceph-mon[192914]: pgmap v1172: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:28 compute-0 podman[413729]: 2025-12-05 01:48:28.86249271 +0000 UTC m=+0.089718503 container create d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:48:28 compute-0 podman[413729]: 2025-12-05 01:48:28.825410207 +0000 UTC m=+0.052635950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:28 compute-0 systemd[1]: Started libpod-conmon-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope.
Dec 05 01:48:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.048199689 +0000 UTC m=+0.275425462 container init d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.064134237 +0000 UTC m=+0.291359960 container start d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.070589558 +0000 UTC m=+0.297815341 container attach d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:48:29 compute-0 determined_blackburn[413745]: 167 167
Dec 05 01:48:29 compute-0 systemd[1]: libpod-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope: Deactivated successfully.
Dec 05 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.088744859 +0000 UTC m=+0.315970592 container died d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea9fa6ec47294c4443c7524fd8c5da632fb5647a526e82093d07c2a86de4edbe-merged.mount: Deactivated successfully.
Dec 05 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.189803019 +0000 UTC m=+0.417028722 container remove d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:48:29 compute-0 systemd[1]: libpod-conmon-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope: Deactivated successfully.
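The determined_blackburn sequence above (image pull, container create, init, start, attach, died, remove, all within about 0.3 s) is the footprint of a one-shot `podman run --rm` helper driven by cephadm. A schematic reconstruction, assuming the usual cephadm entrypoint and flags (the real invocation also bind-mounts /dev, the cluster config, and keyrings, none of which this log records):

    import subprocess

    # Schematic: reproduce the one-shot helper lifecycle logged above.
    # Entrypoint path and flags are assumptions, not taken from this log.
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host",
         "--entrypoint", "/usr/sbin/ceph-volume", image,
         "raw", "list", "--format", "json"],
        check=True,
    )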
Dec 05 01:48:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.506756758 +0000 UTC m=+0.094098106 container create 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.472220677 +0000 UTC m=+0.059562045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:48:29 compute-0 systemd[1]: Started libpod-conmon-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope.
Dec 05 01:48:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.687030404 +0000 UTC m=+0.274371762 container init 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.706228364 +0000 UTC m=+0.293569712 container start 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.713634572 +0000 UTC m=+0.300975990 container attach 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 01:48:29 compute-0 podman[158197]: time="2025-12-05T01:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:48:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45383 "" "Go-http-client/1.1"
Dec 05 01:48:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9027 "" "Go-http-client/1.1"
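The two GET lines above come from the podman system service answering libpod REST calls (the Go-http-client user agent suggests a monitoring sidecar). The same endpoint can be exercised by hand over the API socket; the socket path below is the conventional rootful location and an assumption, since this log never names it:

    import subprocess

    # Query the libpod containers endpoint seen in the log via the podman
    # API socket. /run/podman/podman.sock is the stock rootful path
    # (assumed here; enable with `systemctl start podman.socket` if absent).
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out[:200])  # JSON array of container descriptions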
Dec 05 01:48:30 compute-0 ceph-mon[192914]: pgmap v1173: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]: {
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_id": 0,
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "type": "bluestore"
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     },
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_id": 1,
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "type": "bluestore"
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     },
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_id": 2,
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:         "type": "bluestore"
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]:     }
Dec 05 01:48:30 compute-0 vigorous_darwin[413786]: }
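This second JSON block is `ceph-volume raw list --format json` from the vigorous_darwin helper, keyed by osd_uuid rather than OSD id, and it confirms three bluestore OSDs on ceph_vg0 through ceph_vg2, all belonging to the same cluster fsid. A sketch for cross-checking that output against the expected cluster (the capture file name is hypothetical):

    import json

    # Cross-check `ceph-volume raw list` output against the cluster fsid
    # seen throughout this log. raw_list.json is a hypothetical capture
    # of the JSON printed above.
    EXPECTED_FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"

    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_uuid, info in raw.items():
        assert info["ceph_fsid"] == EXPECTED_FSID, f"foreign OSD {osd_uuid}"
        print(f"osd.{info['osd_id']} -> {info['device']} ({info['type']})")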
Dec 05 01:48:30 compute-0 systemd[1]: libpod-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Deactivated successfully.
Dec 05 01:48:30 compute-0 systemd[1]: libpod-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Consumed 1.133s CPU time.
Dec 05 01:48:30 compute-0 podman[413770]: 2025-12-05 01:48:30.843080226 +0000 UTC m=+1.430421554 container died 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c-merged.mount: Deactivated successfully.
Dec 05 01:48:30 compute-0 podman[413770]: 2025-12-05 01:48:30.927950521 +0000 UTC m=+1.515291849 container remove 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:48:30 compute-0 sudo[413668]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:30 compute-0 systemd[1]: libpod-conmon-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Deactivated successfully.
Dec 05 01:48:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:48:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:48:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:31 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b09f8d77-cab0-491f-a0b9-7bc9b220c36f does not exist
Dec 05 01:48:31 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 61364117-7920-4225-8895-0e7a5c661ec9 does not exist
Dec 05 01:48:31 compute-0 sudo[413828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:48:31 compute-0 sudo[413828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:31 compute-0 sudo[413828]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:31 compute-0 sudo[413853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:48:31 compute-0 sudo[413853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:48:31 compute-0 sudo[413853]: pam_unix(sudo:session): session closed for user root
Dec 05 01:48:31 compute-0 nova_compute[349548]: 2025-12-05 01:48:31.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
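The exporter errors above mean openstack_network_exporter cannot find the ovsdb-server and ovn-northd control sockets it would drive via appctl, which is unsurprising on a compute node that runs neither daemon locally. A quick existence check; the glob patterns are the stock OVS/OVN socket locations and are an assumption, since the log only says the sockets were not found:

    import glob

    # ovs-appctl talks to daemons through *.ctl sockets; if none match,
    # the exporter's appctl calls fail exactly as logged above.
    for pattern in ("/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket")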
Dec 05 01:48:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:48:32 compute-0 ceph-mon[192914]: pgmap v1174: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:32 compute-0 nova_compute[349548]: 2025-12-05 01:48:32.618 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:33 compute-0 podman[413879]: 2025-12-05 01:48:33.668340703 +0000 UTC m=+0.082822669 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:48:33 compute-0 podman[413878]: 2025-12-05 01:48:33.701091613 +0000 UTC m=+0.108659005 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 05 01:48:33 compute-0 podman[413880]: 2025-12-05 01:48:33.778202681 +0000 UTC m=+0.174609649 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 01:48:33 compute-0 podman[413881]: 2025-12-05 01:48:33.781952736 +0000 UTC m=+0.173345533 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec 05 01:48:34 compute-0 ceph-mon[192914]: pgmap v1175: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:36 compute-0 nova_compute[349548]: 2025-12-05 01:48:36.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:36 compute-0 ceph-mon[192914]: pgmap v1176: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:37 compute-0 nova_compute[349548]: 2025-12-05 01:48:37.622 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.315 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.315 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
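The two DEBUG lines above say the [pollsters] source defines more pollsters than the single worker thread configured to run them, so the cycle executes serially on the ThreadPoolExecutor that the subsequent registration lines keep referencing. A toy reproduction of that executor pattern with max_workers=1 (pollster names taken from this log, the sleep standing in for real polling work):

    from concurrent.futures import ThreadPoolExecutor
    import time

    # The manager hands every pollster to one ThreadPoolExecutor; with
    # max_workers=1 ("[1] threads" in the log) they run one after another.
    def poll(name):
        time.sleep(0.1)  # stand-in for the real polling work
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, ["disk.root.size",
                                          "disk.device.capacity",
                                          "disk.ephemeral.size"]):
            print("finished", result)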
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.325 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b69a0e24-1bc4-46a5-92d7-367c1efd53df from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 01:48:38 compute-0 ceph-mon[192914]: pgmap v1177: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.726 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 01:48:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:40 compute-0 ceph-mon[192914]: pgmap v1178: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Fri, 05 Dec 2025 01:48:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-73605b56-4a3c-4695-81c0-2207f68d2fe0 x-openstack-request-id: req-73605b56-4a3c-4695-81c0-2207f68d2fe0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b69a0e24-1bc4-46a5-92d7-367c1efd53df", "name": "test_0", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:47:49Z", "updated": "2025-12-05T01:48:05Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.48", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0c:12:24"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0c:12:24"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:48:05.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df used request id req-73605b56-4a3c-4695-81c0-2207f68d2fe0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
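The RESP BODY above is a standard `GET /v2.1/servers/{id}` payload from the Nova API, which the discovery step then flattens into the instance-data dict on the last line. Extracting the fields ceilometer cares about is plain JSON work; the capture file name below is hypothetical:

    import json

    # Parse the server payload logged above. resp_body.json is a
    # hypothetical capture of that RESP BODY.
    with open("resp_body.json") as f:
        server = json.load(f)["server"]

    addrs = [a["addr"] for net in server["addresses"].values() for a in net]
    print(server["name"], server["status"], addrs)
    # -> test_0 ACTIVE ['192.168.0.48', '192.168.122.212']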
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:48:41.319300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:48:41.325173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 nova_compute[349548]: 2025-12-05 01:48:41.326 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 op/s
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.356 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.357 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.357 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:48:41.359756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:48:41.362672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.364 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:48:41.368636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ovn_controller[89286]: 2025-12-05T01:48:41Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:12:24 192.168.0.48
Dec 05 01:48:41 compute-0 ovn_controller[89286]: 2025-12-05T01:48:41Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:12:24 192.168.0.48
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.461 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 21187584 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.462 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 2160128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.462 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 221518 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 1851644501 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:48:41.464445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.467 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 233035127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.467 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 163808441 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 716 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.470 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.470 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 95 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.473 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.473 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 17543168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:48:41.469428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.481 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:48:41.472565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:48:41.480131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:48:41.483489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 5533603682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:48:41.531315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:48:41.534858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:48:41.538917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.544 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b69a0e24-1bc4-46a5-92d7-367c1efd53df / tap68143c81-65 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.545 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:48:41.547385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:48:41.550327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.551 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.551 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:48:41.554124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.557 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 903 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:48:41.556817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.560 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.560 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:48:41.560074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:48:41.563169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.564 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 33.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:48:41.566119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:48:41.569055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 5 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:48:41.571923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:48:41.574708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 33670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:48:41.576803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:48:41.578794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:48:41.580853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
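
The cycle above is the ceilometer compute agent's polling loop: for each meter it runs discovery, checks whether the pollster needs coordination (none is configured here, so the hashring is [None]), updates a heartbeat, and turns libvirt stats into a sample via _stats_to_sample. The cpu sample's volume (33670000000) is cumulative guest CPU time in nanoseconds, i.e. about 33.67 s. A minimal log-parsing sketch, not part of ceilometer, that pairs the "Polling pollster" / "Finished polling pollster" INFO lines above to measure per-pollster latency:

    import re
    from datetime import datetime

    # Match the INFO lines emitted by ceilometer.polling.manager, as seen above.
    START = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO '
                       r'ceilometer\.polling\.manager \[-\] Polling pollster (\S+)')
    END = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO '
                     r'ceilometer\.polling\.manager \[-\] Finished polling pollster (\S+)')

    def poll_durations(lines):
        """Yield (meter, seconds) for each completed poll found in the log."""
        started = {}
        for line in lines:
            if m := START.search(line):
                started[m.group(2)] = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
            elif (m := END.search(line)) and m.group(2) in started:
                t1 = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
                yield m.group(2), (t1 - started.pop(m.group(2))).total_seconds()

Applied to the excerpt above it yields, for example, about 0.003 s for network.outgoing.packets (01:48:41.570 to 01:48:41.573).
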
Dec 05 01:48:42 compute-0 ceph-mon[192914]: pgmap v1179: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 op/s
Dec 05 01:48:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:42 compute-0 nova_compute[349548]: 2025-12-05 01:48:42.625 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 63 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Dec 05 01:48:44 compute-0 ceph-mon[192914]: pgmap v1180: 321 pgs: 321 active+clean; 63 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Dec 05 01:48:44 compute-0 systemd[1]: Starting dnf makecache...
Dec 05 01:48:44 compute-0 dnf[413960]: Metadata cache refreshed recently.
Dec 05 01:48:44 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 05 01:48:44 compute-0 systemd[1]: Finished dnf makecache.
Dec 05 01:48:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:48:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:48:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:48:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:48:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 71 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec 05 01:48:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:48:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
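
The handle_command lines show the client.openstack entity (the Ceph user the OpenStack services authenticate as) polling cluster capacity: a "df" and an "osd pool get-quota" mon command, both requested as JSON. A sketch of the same two queries run by hand; the pool name "volumes" is taken from the log, and the --id/--conf values mirror the nova invocation seen later in this log:

    import json
    import subprocess

    def ceph_json(*args):
        """Run a ceph CLI subcommand and decode its JSON output."""
        out = subprocess.check_output(
            ['ceph', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
             *args, '--format', 'json'])
        return json.loads(out)

    df = ceph_json('df')                                   # same data as the pgmap lines
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats']['total_avail_bytes'], quota.get('quota_max_bytes'))
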
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:48:46 compute-0 nova_compute[349548]: 2025-12-05 01:48:46.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:46 compute-0 ovn_controller[89286]: 2025-12-05T01:48:46Z|00034|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 05 01:48:46 compute-0 ceph-mon[192914]: pgmap v1181: 321 pgs: 321 active+clean; 71 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec 05 01:48:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:47 compute-0 nova_compute[349548]: 2025-12-05 01:48:47.627 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:48 compute-0 ceph-mon[192914]: pgmap v1182: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:50 compute-0 ceph-mon[192914]: pgmap v1183: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:50 compute-0 podman[413962]: 2025-12-05 01:48:50.71094188 +0000 UTC m=+0.116836494 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:48:50 compute-0 podman[413963]: 2025-12-05 01:48:50.737362203 +0000 UTC m=+0.136951020 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
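
The podman health_status events above come from podman's healthcheck timers running each container's configured test command (the '/openstack/healthcheck' mount in config_data); health_failing_streak=0 means the recent runs passed. A sketch for querying the same state on demand; the container names are taken from the events above, and the Health/Healthcheck key varies across podman versions, so both are tried:

    import json
    import subprocess

    for name in ('ovn_metadata_agent', 'podman_exporter'):
        raw = subprocess.check_output(['podman', 'inspect', name])
        state = json.loads(raw)[0]['State']
        health = state.get('Health') or state.get('Healthcheck') or {}
        print(name, health.get('Status'), 'failing_streak:', health.get('FailingStreak'))
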
Dec 05 01:48:51 compute-0 nova_compute[349548]: 2025-12-05 01:48:51.334 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:52 compute-0 ceph-mon[192914]: pgmap v1184: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 05 01:48:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:52 compute-0 nova_compute[349548]: 2025-12-05 01:48:52.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:52 compute-0 podman[414004]: 2025-12-05 01:48:52.735582112 +0000 UTC m=+0.140565952 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:48:52 compute-0 podman[414005]: 2025-12-05 01:48:52.750266635 +0000 UTC m=+0.146522199 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:48:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec 05 01:48:54 compute-0 ceph-mon[192914]: pgmap v1185: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec 05 01:48:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 333 KiB/s wr, 21 op/s
Dec 05 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.176 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.178 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:48:56 compute-0 nova_compute[349548]: 2025-12-05 01:48:56.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:56 compute-0 ceph-mon[192914]: pgmap v1186: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 333 KiB/s wr, 21 op/s
Dec 05 01:48:56 compute-0 podman[414040]: 2025-12-05 01:48:56.734441457 +0000 UTC m=+0.134619495 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 01:48:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 72 KiB/s wr, 18 op/s
Dec 05 01:48:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:48:57 compute-0 nova_compute[349548]: 2025-12-05 01:48:57.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:48:58 compute-0 ceph-mon[192914]: pgmap v1187: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 72 KiB/s wr, 18 op/s
Dec 05 01:48:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:48:59 compute-0 podman[158197]: time="2025-12-05T01:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:48:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:48:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8595 "" "Go-http-client/1.1"
Dec 05 01:49:00 compute-0 ceph-mon[192914]: pgmap v1188: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:49:01 compute-0 nova_compute[349548]: 2025-12-05 01:49:01.341 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
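
These exporter errors are expected noise on a compute node: the ovs/ovn appctl interface needs a <daemon>.<pid>.ctl control socket, and ovn-northd only runs on control-plane nodes, so none exists here (likewise the ovsdb-server socket the exporter looks for, and the dpif-netdev calls that need a userspace datapath). A sketch for checking which control sockets a node actually exposes; the run directories are the usual defaults and an assumption here:

    import glob

    for pattern in ('/var/run/ovn/*.ctl', '/var/run/openvswitch/*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none')
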
Dec 05 01:49:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:02 compute-0 nova_compute[349548]: 2025-12-05 01:49:02.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:02 compute-0 ceph-mon[192914]: pgmap v1189: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:49:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:49:04 compute-0 ceph-mon[192914]: pgmap v1190: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:49:04 compute-0 podman[414063]: 2025-12-05 01:49:04.754294819 +0000 UTC m=+0.125620462 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 01:49:04 compute-0 podman[414060]: 2025-12-05 01:49:04.764308081 +0000 UTC m=+0.156004416 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 05 01:49:04 compute-0 podman[414062]: 2025-12-05 01:49:04.765581396 +0000 UTC m=+0.142445404 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 01:49:04 compute-0 podman[414061]: 2025-12-05 01:49:04.773374045 +0000 UTC m=+0.158385382 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:49:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:06 compute-0 nova_compute[349548]: 2025-12-05 01:49:06.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:06 compute-0 ceph-mon[192914]: pgmap v1191: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:07 compute-0 nova_compute[349548]: 2025-12-05 01:49:07.643 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.129 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.130 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.158 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.380 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.380 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.393 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.393 349552 INFO nova.compute.claims [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Claim successful on node compute-0.ctlplane.example.com
Dec 05 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.587 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:08 compute-0 ceph-mon[192914]: pgmap v1192: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:49:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932633543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.052 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.064 349552 DEBUG nova.compute.provider_tree [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.082 349552 DEBUG nova.scheduler.client.report [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
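
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the host above offers 32 VCPUs, 7168 MB of RAM, and 52.2 GB of disk. A worked example over the logged inventory dict:

    # Inventory values copied from the report client log line above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2
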
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.113 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.114 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.172 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.174 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.205 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.242 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.339 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.341 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.342 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating image(s)
Dec 05 01:49:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.398 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.452 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.507 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.516 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.609 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
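The pair of processutils lines above shows nova probing the cached base image with qemu-img info while capping the child process at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) through oslo.concurrency's prlimit wrapper. A minimal sketch of the same call through the library API, assuming oslo.concurrency is installed; the image path is illustrative:

    # Sketch: qemu-img info under resource limits, as in the logged command.
    import json

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824 in the logged command
        cpu_time=30,               # --cpu=30
    )
    out, _ = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/<base-hash>',  # illustrative path
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits,
    )
    print(json.loads(out).get('virtual-size'))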
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.611 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.613 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.614 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
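The acquire/release pair above is oslo.concurrency's lockutils serializing the image-cache fill so that only one request fetches a given base image at a time (here the cache is already warm, so the lock is held for only a millisecond). A minimal sketch of the pattern, assuming oslo.concurrency; the function body is illustrative:

    # Sketch: one cache-fill per base image at a time, as fetch_func_sync
    # does above. The lock name is the image fingerprint from the log.
    from oslo_concurrency import lockutils

    def cache_base_image(fingerprint):
        @lockutils.synchronized(fingerprint)
        def _fetch_func_sync():
            # Real code would download and verify the image here; by the
            # time a second caller gets the lock, the file already exists.
            print('filling cache for', fingerprint)
        _fetch_func_sync()

    cache_base_image('af0f6d73e40706411141d751e7ebef271f1a5b42')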
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.667 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.681 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/932633543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.137 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.314 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
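Between 01:49:09.398 and 01:49:10.314 the RBD image backend runs its whole root-disk flow: three existence probes for vms/b82c3f0e-..._disk (the "does not exist" lines), an `rbd import` of the cached base image, then a resize to 1073741824 bytes, the 1 GiB root disk of flavor m1.small. A rough equivalent with the Ceph Python bindings (python3-rados, python3-rbd), with error handling trimmed; pool, client id and image name are taken from the log:

    # Sketch: check-then-resize an RBD image, approximating the rbd_utils
    # flow logged above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        name = 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk'
        try:
            with rbd.Image(ioctx, name) as image:
                if image.size() < 1073741824:   # flavor root disk: 1 GiB
                    image.resize(1073741824)    # matches the resize line
        except rbd.ImageNotFound:
            # nova logs "rbd image ... does not exist" and falls back to
            # `rbd import` from the local _base cache (the CMD above).
            print('image not imported yet')
    finally:
        ioctx.close()
        cluster.shutdown()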
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.574 349552 DEBUG nova.objects.instance [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.640 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.695 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.713 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:10 compute-0 ceph-mon[192914]: pgmap v1193: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.813 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.816 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.818 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.820 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.875 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.903 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:11 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 79 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 1.8 KiB/s wr, 0 op/s
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.523 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.744 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.745 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Ensure instance console log exists: /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.746 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.747 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.747 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.550 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Successfully updated port: 554930d3-ff53-4ef1-af0a-bad6acef1456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 01:49:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.648 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:12 compute-0 ceph-mon[192914]: pgmap v1194: 321 pgs: 321 active+clean; 79 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 1.8 KiB/s wr, 0 op/s
Dec 05 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.013 349552 DEBUG nova.compute.manager [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.014 349552 DEBUG nova.compute.manager [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing instance network info cache due to event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.015 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.358 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 01:49:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 106 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.524 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.549 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.549 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance network_info: |[{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
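The network_info blob logged twice above (once when the cache is updated, once when _allocate_network_async hands it back) is plain JSON: one OVS/OVN port with fixed IP 192.168.0.23 and floating IP 192.168.122.213 on the 192.168.0.0/24 subnet. A small illustrative parser over an abbreviated copy of that structure:

    # Sketch: pull fixed/floating addresses out of the network_info
    # structure above; the literal is trimmed to the fields used here.
    network_info = [{
        "id": "554930d3-ff53-4ef1-af0a-bad6acef1456",
        "address": "fa:16:3e:43:63:18",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.23", "type": "fixed",
                "floating_ips": [{"address": "192.168.122.213",
                                  "type": "floating"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)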
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.550 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.551 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.557 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start _get_guest_xml network_info=[{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.569 349552 WARNING nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.588 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.589 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.595 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.596 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
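The four host.py lines above are nova deciding whether the host can enforce CPU weights: the cgroup v1 cpu controller is missing and the v2 controller is found, as expected on an el9 host running the unified hierarchy. The probe boils down to reading standard kernel interfaces; a minimal standalone version, assuming the usual /sys/fs/cgroup mount:

    # Sketch: detect a usable 'cpu' controller the way the log narrates it,
    # first cgroups v1 (per-controller mount), then v2 (unified hierarchy).
    import os

    def has_cgroup_cpu_controller():
        # v1: a dedicated 'cpu' hierarchy would be mounted here.
        if os.path.isdir('/sys/fs/cgroup/cpu'):
            return 'v1'
        # v2: the root lists available controllers in cgroup.controllers.
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                if 'cpu' in f.read().split():
                    return 'v2'
        except FileNotFoundError:
            pass
        return None

    print(has_cgroup_cpu_controller())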
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.596 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.597 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.597 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.599 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.599 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.601 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
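Lines 01:49:14.597 through .601 walk nova.virt.hardware through CPU topology selection: flavor and image impose no limits or preferences (0:0:0), the defaults of 65536 per dimension apply, and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. The enumeration is simple arithmetic; an illustrative re-creation, not nova's exact code:

    # Sketch: enumerate (sockets, cores, threads) triples for a vCPU count
    # under per-dimension maxima, mirroring the hardware.py debug lines.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    # 1 vCPU -> the single topology logged: [(1, 1, 1)]
    print(possible_topologies(1))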
Dec 05 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.605 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:14 compute-0 ceph-mon[192914]: pgmap v1195: 321 pgs: 321 active+clean; 106 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 05 01:49:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:49:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781996239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.217 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.219 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:49:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888621187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:15 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.728 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
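`ceph mon dump --format=json` (issued three times here while nova assembles the libvirt disk definitions) returns the monmap that the RBD-backed disk sources are built from. A hedged equivalent from Python, using only the CLI invocation shown in the log; it assumes a reachable cluster and the client.openstack keyring:

    # Sketch: fetch the monmap the same way nova shells out above and
    # list monitor addresses.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        print(mon.get('name'), mon.get('public_addrs') or mon.get('addr'))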
Dec 05 01:49:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1781996239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3888621187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.785 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.798 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.010 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated VIF entry in instance network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.011 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.031 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:49:16
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'images', '.rgw.root']
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
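The ceph-mgr balancer lines above record one automatic pass: plan auto_2025-12-05_01:49:16, mode upmap, max misplaced 5%, eleven pools scanned, and 0 of a possible 10 changes prepared because the PGs are already balanced. The same state can be read back with `ceph balancer status`; a sketch, assuming the balancer mgr module is enabled and JSON output is requested:

    # Sketch: read the balancer state that produced the log lines above.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ['ceph', 'balancer', 'status', '--format=json'],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status.get('mode'), status.get('active'), status.get('plans'))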
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:49:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081362992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.351 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.369 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.371 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:49:09Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 05 01:49:16 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.371 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.373 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.375 349552 DEBUG nova.objects.instance [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.392 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] End _get_guest_xml xml=<domain type="kvm">
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <uuid>b82c3f0e-6d6a-4a7b-9556-b609ad63e497</uuid>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <name>instance-00000002</name>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <memory>524288</memory>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <metadata>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:name>vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj</nova:name>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 01:49:14</nova:creationTime>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:flavor name="m1.small">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:memory>512</nova:memory>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:ephemeral>1</nova:ephemeral>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <nova:port uuid="554930d3-ff53-4ef1-af0a-bad6acef1456">
Dec 05 01:49:16 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="192.168.0.23" ipVersion="4"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </metadata>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <system>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="serial">b82c3f0e-6d6a-4a7b-9556-b609ad63e497</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="uuid">b82c3f0e-6d6a-4a7b-9556-b609ad63e497</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </system>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <os>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </os>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <features>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <apic/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </features>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </clock>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </source>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </source>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <target dev="vdb" bus="virtio"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </source>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:49:16 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:43:63:18"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <target dev="tap554930d3-ff"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/console.log" append="off"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </serial>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <video>
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </video>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 01:49:16 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 01:49:16 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 01:49:16 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:49:16 compute-0 nova_compute[349548]: </domain>
Dec 05 01:49:16 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
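[Annotation] The XML block above is exactly what Nova hands to libvirt when defining the guest; libvirt then fills in defaults (PCI addresses, controller indexes, machine-type expansion) that never appear in this log. A minimal read-only sketch for comparing the two, assuming the libvirt-python bindings are installed on the compute host and using the <name> from the XML above:

    import libvirt  # libvirt-python bindings

    # Read-only connection to the local system libvirtd.
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000002')
    # XMLDesc() returns the live, fully expanded domain XML, so diffing it
    # against the _get_guest_xml output above shows what libvirt added.
    print(dom.XMLDesc(0))
    conn.close()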
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.392 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Preparing to wait for external event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.393 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.393 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.394 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
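[Annotation] The acquire/release bracket above is oslo.concurrency guarding Nova's per-instance event table with a named lock ("<instance-uuid>-events"). The same primitive is available directly; a minimal sketch of the pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events')
    def update_event_table():
        # Runs with the named lock held; lockutils emits the same
        # "Acquiring"/"acquired"/"released" DEBUG lines seen above.
        pass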
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.395 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:49:09Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 05 01:49:16 compute-0 nova_compute[349548]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.395 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.396 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.397 349552 DEBUG os_vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.398 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.399 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap554930d3-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.404 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap554930d3-ff, col_values=(('external_ids', {'iface-id': '554930d3-ff53-4ef1-af0a-bad6acef1456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:63:18', 'vm-uuid': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
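[Annotation] The transaction above (AddPortCommand plus a DbSetCommand on the Interface row) is what actually attaches the tap device to br-int and labels it so ovn-controller can match it to the Neutron port. A hedged command-line equivalent via subprocess, with the port name and external_ids copied from the records above:

    import subprocess

    # Single OVSDB transaction: idempotently add the port, then tag the
    # Interface row; ovn-controller binds on external_ids:iface-id.
    subprocess.run([
        'ovs-vsctl',
        '--may-exist', 'add-port', 'br-int', 'tap554930d3-ff',
        '--', 'set', 'Interface', 'tap554930d3-ff',
        'external_ids:iface-id=554930d3-ff53-4ef1-af0a-bad6acef1456',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:43:63:18',
        'external_ids:vm-uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497',
    ], check=True)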
Dec 05 01:49:16 compute-0 NetworkManager[49092]: <info>  [1764899356.4093] manager: (tap554930d3-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.421 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.422 349552 INFO os_vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff')
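[Annotation] The plug call that produced the lines above is public os-vif API, not Nova-internal. A rough sketch of driving it standalone, trimmed to the fields shown in the logged VIFOpenVSwitch repr; an assumption here is that this subset suffices, whereas Nova also populates the full Network object and port profile before plugging:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the 'ovs' plugin

    # Field values copied from the "Successfully plugged vif" record above.
    vif = vif_obj.VIFOpenVSwitch(
        id='554930d3-ff53-4ef1-af0a-bad6acef1456',
        address='fa:16:3e:43:63:18',
        vif_name='tap554930d3-ff',
        bridge_name='br-int',
        has_traffic_filtering=True)
    info = instance_info.InstanceInfo(
        uuid='b82c3f0e-6d6a-4a7b-9556-b609ad63e497',
        name='instance-00000002')
    os_vif.plug(vif, info)  # drives the same ovsdbapp transactions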
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.483 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.484 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.484 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.485 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:43:63:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.485 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Using config drive
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.542 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:16 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:49:16.371 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 05 01:49:16 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:49:16.395 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
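[Annotation] These two rsyslogd complaints explain the fragment lines earlier in this capture: the vif-plug DEBUG records at 01:49:16.371 and 01:49:16.395 (the ones carrying the multi-kilobyte base64 user_data) exceed rsyslog's configured 8 KiB ceiling, so they are truncated and their overflow surfaces as continuation fragments without a syslog header. If the full records are needed, the ceiling is tunable; a minimal sketch for rsyslog 8, placed near the top of /etc/rsyslog.conf before any input is defined:

    # RainerScript form:
    global(maxMessageSize="64k")
    # Legacy-directive equivalent:
    # $MaxMessageSize 64k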
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:49:16 compute-0 ceph-mon[192914]: pgmap v1196: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:16 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3081362992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.924 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating config drive at /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config
Dec 05 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.937 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cs05sit execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.088 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cs05sit" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
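[Annotation] The config drive is a plain ISO 9660 image built from a staging directory of metadata files; the volume label config-2 is what cloud-init probes for. The command is reproducible with oslo's own wrapper; a sketch with the flags copied from the records above (/tmp/tmp6cs05sit is simply this run's mkdtemp result):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              'b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',        # label cloud-init searches for
        '/tmp/tmp6cs05sit')      # staging dir with the metadata tree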
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.142 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.154 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.478 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.479 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deleting local config drive /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config because it was imported into RBD.
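[Annotation] Because this deployment stores disks in Ceph, the freshly built ISO is imported into the vms pool and the local copy removed, matching the rbd-backed cdrom in the domain XML above. A sketch of the same two steps, arguments copied from the records above:

    import os
    import subprocess

    inst = 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497'
    local = f'/var/lib/nova/instances/{inst}/disk.config'

    # Import the ISO as a format-2 RBD image in the 'vms' pool...
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    # ...then delete the local copy, as the driver logs above.
    os.unlink(local)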
Dec 05 01:49:17 compute-0 kernel: tap554930d3-ff: entered promiscuous mode
Dec 05 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.5721] manager: (tap554930d3-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.578 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00035|binding|INFO|Claiming lport 554930d3-ff53-4ef1-af0a-bad6acef1456 for this chassis.
Dec 05 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00036|binding|INFO|554930d3-ff53-4ef1-af0a-bad6acef1456: Claiming fa:16:3e:43:63:18 192.168.0.23
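[Annotation] ovn-controller claiming the lport for this chassis is the step that later lets Neutron report the port ACTIVE and emit network-vif-plugged. The binding can be confirmed against the OVN southbound database; a sketch, assuming ovn-sbctl can reach the SB DB from this host:

    import subprocess

    # 'chassis' in the output should reference compute-0's chassis row.
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=554930d3-ff53-4ef1-af0a-bad6acef1456'],
        check=True)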
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.594 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:63:18 192.168.0.23'], port_security=['fa:16:3e:43:63:18 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=554930d3-ff53-4ef1-af0a-bad6acef1456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.595 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 554930d3-ff53-4ef1-af0a-bad6acef1456 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.596 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
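[Annotation] "Provisioning metadata" here means ensuring a per-network namespace, ovnmeta-<network-uuid> (visible as the privsep target in the netlink replies below), with the metadata address bound inside it. A quick verification sketch; the expected addresses come from the RTM_NEWADDR replies further down:

    import subprocess

    ns = 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183'
    # Expect 192.168.0.2/24 plus 169.254.169.254/32 on the tap interface.
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'ip', '-4', 'addr', 'show'],
        check=True)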
Dec 05 01:49:17 compute-0 systemd-udevd[414624]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.621 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd80830-593a-46fc-96e4-75f54308fef5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00037|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 ovn-installed in OVS
Dec 05 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00038|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 up in Southbound
Dec 05 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.6334] device (tap554930d3-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.636 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.6409] device (tap554930d3-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 01:49:17 compute-0 systemd-machined[138700]: New machine qemu-2-instance-00000002.
Dec 05 01:49:17 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.670 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[df2872c2-5314-4172-8e89-3996591b3054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.674 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fb9944-03d8-47ef-a5ab-be1341796ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.714 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b05f7c0e-60a7-4451-bbeb-783b36d8593f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.736 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97d323b3-90f8-473e-a43d-af3397a2e937]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414639, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.760 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[427c8aa9-fa59-42f0-9154-f1d301f0daf4]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414640, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414640, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.762 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.769 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:17 compute-0 ceph-mon[192914]: pgmap v1197: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.770 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.770 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.771 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.772 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.437 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899358.4372408, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.438 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Started (Lifecycle Event)
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.467 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.476 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899358.437429, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.477 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Paused (Lifecycle Event)
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.530 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.540 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.562 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] During sync_power_state the instance has a pending task (spawning). Skip.
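[Annotation] The numeric states in the sync message above decode via Nova's power_state module; because the guest is started paused and only resumed once VIF plumbing is confirmed, DB 0 versus VM 3 during spawn is expected. A small sketch, constants as defined in nova/compute/power_state.py:

    # Constants from nova.compute.power_state
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04
    CRASHED, SUSPENDED = 0x06, 0x07
    NAMES = {NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused',
             SHUTDOWN: 'shutdown', CRASHED: 'crashed', SUSPENDED: 'suspended'}

    # "current DB power_state: 0, VM power_state: 3" from the line above:
    print(NAMES[0], '->', NAMES[3])   # pending -> paused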
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.196 349552 DEBUG nova.compute.manager [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.196 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.197 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.197 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.198 349552 DEBUG nova.compute.manager [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Processing event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.199 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
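The zero-second wait works because the spawn thread registers a waiter for network-vif-plugged-<port> before plugging the VIF, and Neutron's external event (req-cffbf402) pops it as soon as OVN binds the port. A toy version of the prepare/pop/wait pattern, not Nova's implementation:

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        ev = threading.Event()
        waiters[(instance, name)] = ev
        return ev

    def pop_instance_event(instance, name):
        ev = waiters.pop((instance, name), None)
        if ev is None:
            return "no waiting events"   # re-delivery lands in this branch
        ev.set()
        return "dispatched"

    ev = prepare_for_event("b82c3f0e", "network-vif-plugged")     # before plug
    print(pop_instance_event("b82c3f0e", "network-vif-plugged"))  # from Neutron
    ev.wait(timeout=300)  # spawn thread: returns at once, "0 seconds"

The "no waiting events" branch is exactly what fires at 01:49:21 below, when Neutron re-delivers the same event after the build has already finished.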
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.206 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899359.2056804, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.207 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Resumed (Lifecycle Event)
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.212 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.221 349552 INFO nova.virt.libvirt.driver [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance spawned successfully.
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.221 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.225 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.239 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.250 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.251 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.252 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.253 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.254 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.255 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
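Recording these "found default" values pins the buses and models chosen at first boot. Assuming (an inference, not shown in the log) that they are persisted as image_* keys on the instance so later operations keep identical virtual hardware even if Nova's global defaults change, the effect is roughly:

    # Hypothetical sketch: persist the defaults actually used at spawn time,
    # without overriding anything the image itself declared.
    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(system_metadata, defaults):
        for prop, value in defaults.items():
            system_metadata.setdefault("image_" + prop, value)
        return system_metadata

    print(register_undefined_details({"image_hw_disk_bus": "scsi"}, defaults))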
Dec 05 01:49:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:19.259 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:49:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:19.261 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
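The metadata agent deliberately jitters its acknowledgement of the new nb_cfg (4 -> 5) so that many chassis do not all write Chassis_Private back to the OVN southbound at the same instant; the delayed transaction appears at 01:49:26 below. A sketch of the idea, with a hypothetical set_external_ids() callback:

    # Sketch of the jittered write-back; the real agent delays inside its
    # event-handling thread rather than via a Timer.
    import random, threading

    def on_nb_cfg_update(nb_cfg, set_external_ids, max_delay=10):
        delay = random.randint(0, max_delay)  # spreads load across chassis
        print("Delaying updating chassis table for %d seconds" % delay)
        threading.Timer(
            delay,
            set_external_ids,
            args=({"neutron:ovn-metadata-sb-cfg": str(nb_cfg)},),
        ).start()

    on_nb_cfg_update(5, lambda ids: print("txn:", ids))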
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.267 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.270 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.312 349552 INFO nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 9.97 seconds to spawn the instance on the hypervisor.
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.312 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:49:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.381 349552 INFO nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 11.04 seconds to build instance.
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.398 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
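The whole build ran under a named per-instance lock (held 11.268s), which is how Nova serializes concurrent operations against one instance UUID. A stripped-down, in-process imitation of oslo_concurrency-style named locks (the real library also offers fair and inter-process variants, and emits the acquire/release lines seen above):

    import collections, threading, time
    from functools import wraps

    _locks = collections.defaultdict(threading.Lock)

    def synchronized(name):
        def wrap(fn):
            @wraps(fn)
            def inner(*a, **kw):
                start = time.monotonic()
                with _locks[name]:
                    print('Lock "%s" acquired :: waited %.3fs'
                          % (name, time.monotonic() - start))
                    return fn(*a, **kw)
            return inner
        return wrap

    @synchronized("b82c3f0e-6d6a-4a7b-9556-b609ad63e497")
    def do_build_and_run_instance():
        pass

    do_build_and_run_instance()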
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.774 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.775 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.775 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.776 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.776 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
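These periodic tasks all fire from one service loop, and _reclaim_queued_deletes bails out immediately because reclaim_instance_interval is not positive (soft-delete reclaim disabled). A toy loop in the spirit of oslo_service.periodic_task, with illustrative config handling:

    import time

    def reclaim_queued_deletes(conf):
        if conf["reclaim_instance_interval"] <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")

    tasks = [reclaim_queued_deletes]
    conf = {"reclaim_instance_interval": 0}

    def run_periodic_tasks(once=True):
        while True:
            for task in tasks:
                print("Running periodic task %s" % task.__name__)
                task(conf)
            if once:
                break
            time.sleep(60)

    run_periodic_tasks()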
Dec 05 01:49:20 compute-0 nova_compute[349548]: 2025-12-05 01:49:20.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:20 compute-0 nova_compute[349548]: 2025-12-05 01:49:20.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:20 compute-0 ceph-mon[192914]: pgmap v1198: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.113 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.113 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.114 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.289 349552 DEBUG nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.291 349552 DEBUG nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.291 349552 WARNING nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received unexpected event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with vm_state active and task_state None.
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.355 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.582 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.582 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:49:21 compute-0 podman[414703]: 2025-12-05 01:49:21.697524949 +0000 UTC m=+0.113776049 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 01:49:21 compute-0 podman[414704]: 2025-12-05 01:49:21.708783505 +0000 UTC m=+0.109786677 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:49:22 compute-0 ceph-mon[192914]: pgmap v1199: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec 05 01:49:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Dec 05 01:49:23 compute-0 podman[414741]: 2025-12-05 01:49:23.711198695 +0000 UTC m=+0.121435644 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:49:23 compute-0 podman[414740]: 2025-12-05 01:49:23.729742397 +0000 UTC m=+0.140330056 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:49:24 compute-0 ceph-mon[192914]: pgmap v1200: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Dec 05 01:49:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.402 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.417 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.418 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
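_heal_instance_info_cache refreshes one instance per run, round-robin, force-fetching network_info from Neutron and writing it back to the cache (the large JSON blob at 01:49:25 above). Illustrative shape of that pass, with hypothetical callables:

    # Sketch: take the next instance on this host and rebuild its cached
    # network_info from the network API (names here are illustrative).
    def heal_instance_info_cache(instances, get_nw_info, save_cache):
        if not instances:
            return
        instance = instances.pop(0)       # one instance per periodic run
        nw_info = get_nw_info(instance)   # force-refresh from Neutron
        save_cache(instance, nw_info)
        print("Updated the network info_cache for instance", instance)

    heal_instance_info_cache(
        ["b69a0e24-1bc4-46a5-92d7-367c1efd53df"],
        get_nw_info=lambda inst: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224"}],
        save_cache=lambda inst, nw: None,
    )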
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.419 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.419 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.455 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.456 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.456 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.457 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.457 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:49:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601159938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.009 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
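With RBD-backed storage, the resource audit shells out to ceph df (the exact command logged above) to learn pool capacity; each call costs roughly half a second here, and it runs twice per audit pass. A sketch of making the call and pulling one pool's stats, assuming the usual ceph df JSON layout with a top-level "pools" list:

    import json, subprocess

    def rbd_pool_usage(pool="vms", conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        df = json.loads(out)
        for p in df["pools"]:
            if p["name"] == pool:
                return p["stats"]      # bytes used/available for that pool
        raise LookupError(pool)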
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.166 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.167 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.167 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:49:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:26.264 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.357 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:26 compute-0 ceph-mon[192914]: pgmap v1201: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec 05 01:49:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3601159938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008201929494692974 of space, bias 1.0, pg target 0.24605788484078922 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
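The pg_autoscaler numbers are reproducible from the log itself: each "pg target" is capacity_ratio * bias * PG budget, where the budget works out to 300 here (e.g. mon_target_pg_per_osd=100 across 3 OSDs), and the result is then rounded up to a power of two no smaller than the pool's floor (evidently 16 for cephfs.cephfs.meta, 32 for the other pools). A back-of-envelope check:

    # Reproduces the arithmetic in the lines above, assuming a budget of
    # target_pg_per_osd * num_osd = 100 * 3 = 300 PGs.
    def pg_target(capacity_ratio, bias, pg_budget=300, pg_num_min=32):
        raw = capacity_ratio * bias * pg_budget
        n = pg_num_min
        while n < raw:                 # round up to the next power of two
            n *= 2
        return raw, n

    print(pg_target(0.0008201929494692974, 1.0))   # vms:  0.2460... -> 32
    print(pg_target(5.087256625643029e-07, 4.0,
                    pg_num_min=16))                # meta: 0.0006... -> 16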
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.684 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3818MB free_disk=59.93907928466797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.786 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
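The final view is simple arithmetic over the two placement allocations above plus the host RAM reservation: used_ram 1536MB = 512 reserved + 2 x 512, used_disk 4GB = 2 x 2, used_vcpus 2 of 8. As a quick check:

    allocations = [
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},   # b69a0e24...
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},   # b82c3f0e...
    ]
    reserved_ram_mb = 512   # reserved host memory, per the inventory below

    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    print(used_ram, used_disk, used_vcpus)   # 1536 4 2, matching the log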
Dec 05 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.868 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:49:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:49:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772842204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.365 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.378 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:49:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 61 op/s
Dec 05 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.405 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
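Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so this host can place 32 VCPUs, 7168MB of RAM and 52.2GB of disk:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2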
Dec 05 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.459 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.460 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1772842204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:49:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:27 compute-0 podman[414823]: 2025-12-05 01:49:27.729496409 +0000 UTC m=+0.145400668 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec 05 01:49:28 compute-0 ceph-mon[192914]: pgmap v1202: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 61 op/s
Dec 05 01:49:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:49:29 compute-0 podman[158197]: time="2025-12-05T01:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:49:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:49:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8611 "" "Go-http-client/1.1"
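Those GET lines are the podman_exporter container polling podman's libpod REST API over /run/podman/podman.sock (mounted into the exporter, per its config above; the path comes from its CONTAINER_HOST setting). The same query can be reproduced from Python over the Unix socket:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket (podman has no TCP listener here)."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(len(json.loads(conn.getresponse().read())), "containers")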
Dec 05 01:49:30 compute-0 ceph-mon[192914]: pgmap v1203: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:49:31 compute-0 nova_compute[349548]: 2025-12-05 01:49:31.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:49:31 compute-0 sudo[414841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:31 compute-0 sudo[414841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:31 compute-0 sudo[414841]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:31 compute-0 nova_compute[349548]: 2025-12-05 01:49:31.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:49:31 compute-0 sudo[414866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:49:31 compute-0 sudo[414866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:31 compute-0 sudo[414866]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:31 compute-0 sudo[414892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:31 compute-0 sudo[414892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:31 compute-0 sudo[414892]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:31 compute-0 sudo[414917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 01:49:31 compute-0 sudo[414917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:32 compute-0 ceph-mon[192914]: pgmap v1204: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:49:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:32 compute-0 podman[415010]: 2025-12-05 01:49:32.64665229 +0000 UTC m=+0.120641122 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:49:32 compute-0 podman[415010]: 2025-12-05 01:49:32.760794628 +0000 UTC m=+0.234783380 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:49:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 56 op/s
Dec 05 01:49:33 compute-0 sudo[414917]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:49:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:49:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:34 compute-0 sudo[415155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:34 compute-0 sudo[415155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:34 compute-0 sudo[415155]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:34 compute-0 sudo[415180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:49:34 compute-0 sudo[415180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:34 compute-0 sudo[415180]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:34 compute-0 sudo[415205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:34 compute-0 sudo[415205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:34 compute-0 sudo[415205]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:34 compute-0 sudo[415230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:49:34 compute-0 sudo[415230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:34 compute-0 ceph-mon[192914]: pgmap v1205: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 56 op/s
Dec 05 01:49:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:35 compute-0 sudo[415230]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7285b0d1-e9ca-4284-9a45-7209c8c782d0 does not exist
Dec 05 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 89d86a9f-4042-4959-9281-aed5b031a10c does not exist
Dec 05 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ead88ac9-311b-43f7-87b9-d2a5461e4377 does not exist
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:49:35 compute-0 sudo[415284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:35 compute-0 sudo[415284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:35 compute-0 sudo[415284]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 876 KiB/s rd, 27 op/s
Dec 05 01:49:35 compute-0 sudo[415335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:49:35 compute-0 sudo[415335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:35 compute-0 sudo[415335]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:35 compute-0 podman[415317]: 2025-12-05 01:49:35.487432123 +0000 UTC m=+0.112965716 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public)
Dec 05 01:49:35 compute-0 podman[415308]: 2025-12-05 01:49:35.49087871 +0000 UTC m=+0.143442623 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 05 01:49:35 compute-0 podman[415309]: 2025-12-05 01:49:35.494473041 +0000 UTC m=+0.136279612 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:49:35 compute-0 podman[415310]: 2025-12-05 01:49:35.524030131 +0000 UTC m=+0.151871229 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 01:49:35 compute-0 sudo[415413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:35 compute-0 sudo[415413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:35 compute-0 sudo[415413]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:49:35 compute-0 sudo[415442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:49:35 compute-0 sudo[415442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.148494293 +0000 UTC m=+0.080018910 container create 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.121057171 +0000 UTC m=+0.052581778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:36 compute-0 systemd[1]: Started libpod-conmon-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope.
Dec 05 01:49:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.30138146 +0000 UTC m=+0.232906057 container init 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.31169985 +0000 UTC m=+0.243224467 container start 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.318361057 +0000 UTC m=+0.249885664 container attach 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:49:36 compute-0 focused_wilson[415522]: 167 167
Dec 05 01:49:36 compute-0 systemd[1]: libpod-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope: Deactivated successfully.
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.326760783 +0000 UTC m=+0.258285370 container died 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fec9c873bd35a411320e8d9d147f8929e7783c7d13c037acc97340b58aff708-merged.mount: Deactivated successfully.
Dec 05 01:49:36 compute-0 nova_compute[349548]: 2025-12-05 01:49:36.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.3946231 +0000 UTC m=+0.326147687 container remove 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:49:36 compute-0 systemd[1]: libpod-conmon-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope: Deactivated successfully.
Dec 05 01:49:36 compute-0 nova_compute[349548]: 2025-12-05 01:49:36.417 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:36 compute-0 ceph-mon[192914]: pgmap v1206: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 876 KiB/s rd, 27 op/s
Dec 05 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.675541686 +0000 UTC m=+0.091888134 container create 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.64118628 +0000 UTC m=+0.057532778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:36 compute-0 systemd[1]: Started libpod-conmon-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope.
Dec 05 01:49:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.884159419 +0000 UTC m=+0.300505877 container init 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.908211975 +0000 UTC m=+0.324558433 container start 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.915942023 +0000 UTC m=+0.332288521 container attach 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:49:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 784 KiB/s rd, 24 op/s
Dec 05 01:49:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:38 compute-0 focused_hertz[415559]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:49:38 compute-0 focused_hertz[415559]: --> relative data size: 1.0
Dec 05 01:49:38 compute-0 focused_hertz[415559]: --> All data devices are unavailable
Dec 05 01:49:38 compute-0 systemd[1]: libpod-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Deactivated successfully.
Dec 05 01:49:38 compute-0 systemd[1]: libpod-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Consumed 1.342s CPU time.
Dec 05 01:49:38 compute-0 podman[415543]: 2025-12-05 01:49:38.334389159 +0000 UTC m=+1.750735607 container died 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c-merged.mount: Deactivated successfully.
Dec 05 01:49:38 compute-0 podman[415543]: 2025-12-05 01:49:38.447067976 +0000 UTC m=+1.863414404 container remove 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:49:38 compute-0 systemd[1]: libpod-conmon-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Deactivated successfully.
Dec 05 01:49:38 compute-0 sudo[415442]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:38 compute-0 ceph-mon[192914]: pgmap v1207: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 784 KiB/s rd, 24 op/s
Dec 05 01:49:38 compute-0 sudo[415599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:38 compute-0 sudo[415599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:38 compute-0 sudo[415599]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:38 compute-0 sudo[415624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:49:38 compute-0 sudo[415624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:38 compute-0 sudo[415624]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:38 compute-0 sudo[415649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:38 compute-0 sudo[415649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:38 compute-0 sudo[415649]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:39 compute-0 sudo[415674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:49:39 compute-0 sudo[415674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.585523863 +0000 UTC m=+0.067518879 container create cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.559468561 +0000 UTC m=+0.041463586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:39 compute-0 systemd[1]: Started libpod-conmon-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope.
Dec 05 01:49:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.749297297 +0000 UTC m=+0.231292382 container init cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.767088527 +0000 UTC m=+0.249083522 container start cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.772361645 +0000 UTC m=+0.254356640 container attach cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:49:39 compute-0 happy_goodall[415754]: 167 167
Dec 05 01:49:39 compute-0 systemd[1]: libpod-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope: Deactivated successfully.
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.778814876 +0000 UTC m=+0.260809871 container died cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-39500b64761faec6ba4e81ae37e495dca72438a94211e052c4db8908e5283eb0-merged.mount: Deactivated successfully.
Dec 05 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.847143577 +0000 UTC m=+0.329138572 container remove cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:49:39 compute-0 systemd[1]: libpod-conmon-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope: Deactivated successfully.
Dec 05 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.115493009 +0000 UTC m=+0.060310226 container create ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 01:49:40 compute-0 systemd[1]: Started libpod-conmon-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope.
Dec 05 01:49:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.095852687 +0000 UTC m=+0.040669904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.206811175 +0000 UTC m=+0.151628462 container init ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.228603318 +0000 UTC m=+0.173420535 container start ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.241434739 +0000 UTC m=+0.186252046 container attach ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:49:40 compute-0 ceph-mon[192914]: pgmap v1208: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]: {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     "0": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "devices": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "/dev/loop3"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             ],
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_name": "ceph_lv0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_size": "21470642176",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "name": "ceph_lv0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "tags": {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_name": "ceph",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.crush_device_class": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.encrypted": "0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_id": "0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.vdo": "0"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             },
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "vg_name": "ceph_vg0"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         }
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     ],
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     "1": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "devices": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "/dev/loop4"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             ],
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_name": "ceph_lv1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_size": "21470642176",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "name": "ceph_lv1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "tags": {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_name": "ceph",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.crush_device_class": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.encrypted": "0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_id": "1",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.vdo": "0"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             },
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "vg_name": "ceph_vg1"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         }
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     ],
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     "2": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "devices": [
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "/dev/loop5"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             ],
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_name": "ceph_lv2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_size": "21470642176",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "name": "ceph_lv2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "tags": {
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.cluster_name": "ceph",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.crush_device_class": "",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.encrypted": "0",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osd_id": "2",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:                 "ceph.vdo": "0"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             },
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "type": "block",
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:             "vg_name": "ceph_vg2"
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:         }
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]:     ]
Dec 05 01:49:40 compute-0 great_proskuriakova[415796]: }
Dec 05 01:49:41 compute-0 systemd[1]: libpod-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope: Deactivated successfully.
Dec 05 01:49:41 compute-0 podman[415805]: 2025-12-05 01:49:41.113106608 +0000 UTC m=+0.064736040 container died ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319-merged.mount: Deactivated successfully.
Dec 05 01:49:41 compute-0 podman[415805]: 2025-12-05 01:49:41.196188493 +0000 UTC m=+0.147817895 container remove ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:49:41 compute-0 systemd[1]: libpod-conmon-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope: Deactivated successfully.
Dec 05 01:49:41 compute-0 sudo[415674]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:41 compute-0 nova_compute[349548]: 2025-12-05 01:49:41.365 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:41 compute-0 sudo[415820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:41 compute-0 sudo[415820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:41 compute-0 sudo[415820]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:41 compute-0 nova_compute[349548]: 2025-12-05 01:49:41.425 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:41 compute-0 sudo[415845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:49:41 compute-0 sudo[415845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:41 compute-0 sudo[415845]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:41 compute-0 sudo[415870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:41 compute-0 sudo[415870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:41 compute-0 sudo[415870]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:41 compute-0 sudo[415895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:49:41 compute-0 sudo[415895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.438709965 +0000 UTC m=+0.090680630 container create d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.406079588 +0000 UTC m=+0.058050353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:42 compute-0 systemd[1]: Started libpod-conmon-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope.
Dec 05 01:49:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.600725288 +0000 UTC m=+0.252695993 container init d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:49:42 compute-0 ceph-mon[192914]: pgmap v1209: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.619165157 +0000 UTC m=+0.271135832 container start d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.625489795 +0000 UTC m=+0.277460510 container attach d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:49:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:42 compute-0 brave_poitras[415974]: 167 167
Dec 05 01:49:42 compute-0 systemd[1]: libpod-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope: Deactivated successfully.
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.636105963 +0000 UTC m=+0.288076628 container died d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3ddd612ad0bf1d7028392304452fc09319f1a601464d802644fb7ff5fe91856-merged.mount: Deactivated successfully.
Dec 05 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.711390399 +0000 UTC m=+0.363361034 container remove d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:49:42 compute-0 systemd[1]: libpod-conmon-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope: Deactivated successfully.
Dec 05 01:49:42 compute-0 podman[415997]: 2025-12-05 01:49:42.973214928 +0000 UTC m=+0.099204420 container create 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:42.923875071 +0000 UTC m=+0.049864633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:49:43 compute-0 systemd[1]: Started libpod-conmon-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope.
Dec 05 01:49:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.145034927 +0000 UTC m=+0.271024509 container init 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.155855211 +0000 UTC m=+0.281844713 container start 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.161490399 +0000 UTC m=+0.287479921 container attach 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:49:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:44 compute-0 gallant_einstein[416013]: {
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_id": 0,
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "type": "bluestore"
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     },
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_id": 1,
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "type": "bluestore"
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     },
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_id": 2,
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:         "type": "bluestore"
Dec 05 01:49:44 compute-0 gallant_einstein[416013]:     }
Dec 05 01:49:44 compute-0 gallant_einstein[416013]: }
Dec 05 01:49:44 compute-0 systemd[1]: libpod-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Deactivated successfully.
Dec 05 01:49:44 compute-0 podman[415997]: 2025-12-05 01:49:44.505720531 +0000 UTC m=+1.631710023 container died 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:49:44 compute-0 systemd[1]: libpod-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Consumed 1.283s CPU time.
Dec 05 01:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7-merged.mount: Deactivated successfully.
Dec 05 01:49:44 compute-0 podman[415997]: 2025-12-05 01:49:44.59501836 +0000 UTC m=+1.721007852 container remove 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:49:44 compute-0 ceph-mon[192914]: pgmap v1210: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:44 compute-0 systemd[1]: libpod-conmon-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Deactivated successfully.
Dec 05 01:49:44 compute-0 sudo[415895]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:49:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:49:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2736901e-ce09-4f1e-b309-d2c3aed9c45a does not exist
Dec 05 01:49:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4247f875-db79-4afd-9855-45478bd00643 does not exist
Dec 05 01:49:44 compute-0 sudo[416058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:49:44 compute-0 sudo[416058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:44 compute-0 sudo[416058]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:44 compute-0 sudo[416083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:49:44 compute-0 sudo[416083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:49:44 compute-0 sudo[416083]: pam_unix(sudo:session): session closed for user root
Dec 05 01:49:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:49:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:49:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:49:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:49:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:49:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:49:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:49:46 compute-0 nova_compute[349548]: 2025-12-05 01:49:46.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:46 compute-0 nova_compute[349548]: 2025-12-05 01:49:46.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:46 compute-0 ceph-mon[192914]: pgmap v1211: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:49:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:47 compute-0 ovn_controller[89286]: 2025-12-05T01:49:47Z|00039|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 05 01:49:48 compute-0 ceph-mon[192914]: pgmap v1212: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:50 compute-0 ceph-mon[192914]: pgmap v1213: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:51 compute-0 nova_compute[349548]: 2025-12-05 01:49:51.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:51 compute-0 nova_compute[349548]: 2025-12-05 01:49:51.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.757501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391757554, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1430, "num_deletes": 251, "total_data_size": 2187945, "memory_usage": 2227528, "flush_reason": "Manual Compaction"}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391775372, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2144357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24102, "largest_seqno": 25531, "table_properties": {"data_size": 2137662, "index_size": 3830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14161, "raw_average_key_size": 20, "raw_value_size": 2124095, "raw_average_value_size": 3004, "num_data_blocks": 171, "num_entries": 707, "num_filter_entries": 707, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899253, "oldest_key_time": 1764899253, "file_creation_time": 1764899391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17908 microseconds, and 8046 cpu microseconds.
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.775417) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2144357 bytes OK
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.775434) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779799) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779813) EVENT_LOG_v1 {"time_micros": 1764899391779808, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2181608, prev total WAL file size 2181608, number of live WAL files 2.
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.780979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2094KB)], [56(6798KB)]
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391781060, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9105876, "oldest_snapshot_seqno": -1}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4614 keys, 7358865 bytes, temperature: kUnknown
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391842466, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7358865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7327968, "index_size": 18243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115515, "raw_average_key_size": 25, "raw_value_size": 7244292, "raw_average_value_size": 1570, "num_data_blocks": 756, "num_entries": 4614, "num_filter_entries": 4614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.842826) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7358865 bytes
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.846793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.1 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.6 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(7.7) write-amplify(3.4) OK, records in: 5132, records dropped: 518 output_compression: NoCompression
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.846835) EVENT_LOG_v1 {"time_micros": 1764899391846813, "job": 30, "event": "compaction_finished", "compaction_time_micros": 61497, "compaction_time_cpu_micros": 34178, "output_level": 6, "num_output_files": 1, "total_output_size": 7358865, "num_input_records": 5132, "num_output_records": 4614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391848367, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391851627, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.780621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:49:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:52 compute-0 podman[416110]: 2025-12-05 01:49:52.733226902 +0000 UTC m=+0.134445399 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 01:49:52 compute-0 podman[416111]: 2025-12-05 01:49:52.737376578 +0000 UTC m=+0.129649694 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:49:52 compute-0 ceph-mon[192914]: pgmap v1214: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 05 01:49:52 compute-0 ovn_controller[89286]: 2025-12-05T01:49:52Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:63:18 192.168.0.23
Dec 05 01:49:52 compute-0 ovn_controller[89286]: 2025-12-05T01:49:52Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:63:18 192.168.0.23
Dec 05 01:49:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 115 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 259 KiB/s wr, 7 op/s
Dec 05 01:49:54 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 01:49:54 compute-0 podman[416152]: 2025-12-05 01:49:54.688044044 +0000 UTC m=+0.096197224 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 05 01:49:54 compute-0 podman[416153]: 2025-12-05 01:49:54.699503936 +0000 UTC m=+0.099623572 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 01:49:54 compute-0 ceph-mon[192914]: pgmap v1215: 321 pgs: 321 active+clean; 115 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 259 KiB/s wr, 7 op/s
Dec 05 01:49:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 123 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 509 KiB/s wr, 30 op/s
Dec 05 01:49:55 compute-0 ceph-mon[192914]: pgmap v1216: 321 pgs: 321 active+clean; 123 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 509 KiB/s wr, 30 op/s
Dec 05 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.178 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:49:56 compute-0 nova_compute[349548]: 2025-12-05 01:49:56.371 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:56 compute-0 nova_compute[349548]: 2025-12-05 01:49:56.432 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:49:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:49:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:49:58 compute-0 ceph-mon[192914]: pgmap v1217: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:49:58 compute-0 podman[416190]: 2025-12-05 01:49:58.726570821 +0000 UTC m=+0.126333112 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec 05 01:49:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:49:59 compute-0 podman[158197]: time="2025-12-05T01:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:49:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:49:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8617 "" "Go-http-client/1.1"
Dec 05 01:50:00 compute-0 ceph-mon[192914]: pgmap v1218: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:50:01 compute-0 nova_compute[349548]: 2025-12-05 01:50:01.375 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:50:01 compute-0 nova_compute[349548]: 2025-12-05 01:50:01.434 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:02 compute-0 ceph-mon[192914]: pgmap v1219: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:50:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:50:04 compute-0 ceph-mon[192914]: pgmap v1220: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:50:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Dec 05 01:50:05 compute-0 podman[416210]: 2025-12-05 01:50:05.72316048 +0000 UTC m=+0.128128722 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:50:05 compute-0 podman[416217]: 2025-12-05 01:50:05.74450569 +0000 UTC m=+0.122022170 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec 05 01:50:05 compute-0 podman[416211]: 2025-12-05 01:50:05.74664507 +0000 UTC m=+0.141019324 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:50:05 compute-0 podman[416212]: 2025-12-05 01:50:05.764757759 +0000 UTC m=+0.155249184 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 05 01:50:06 compute-0 nova_compute[349548]: 2025-12-05 01:50:06.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:06 compute-0 nova_compute[349548]: 2025-12-05 01:50:06.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:06 compute-0 ceph-mon[192914]: pgmap v1221: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Dec 05 01:50:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1016 KiB/s wr, 28 op/s
Dec 05 01:50:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:08 compute-0 ceph-mon[192914]: pgmap v1222: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1016 KiB/s wr, 28 op/s
Dec 05 01:50:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:10 compute-0 nova_compute[349548]: 2025-12-05 01:50:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:10 compute-0 nova_compute[349548]: 2025-12-05 01:50:10.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 01:50:10 compute-0 ceph-mon[192914]: pgmap v1223: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.104 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.381 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.440 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:12 compute-0 ceph-mon[192914]: pgmap v1224: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:14 compute-0 ceph-mon[192914]: pgmap v1225: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:50:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:50:16
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', '.mgr']
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:16 compute-0 nova_compute[349548]: 2025-12-05 01:50:16.385 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:16 compute-0 nova_compute[349548]: 2025-12-05 01:50:16.444 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:16 compute-0 ceph-mon[192914]: pgmap v1226: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:50:17 compute-0 nova_compute[349548]: 2025-12-05 01:50:17.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:18 compute-0 ceph-mon[192914]: pgmap v1227: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:19 compute-0 nova_compute[349548]: 2025-12-05 01:50:19.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:19 compute-0 sshd-session[416294]: Invalid user admin from 139.19.117.131 port 39764
Dec 05 01:50:19 compute-0 sshd-session[416294]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Dec 05 01:50:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.090 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:20 compute-0 ceph-mon[192914]: pgmap v1228: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.448 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:22 compute-0 ceph-mon[192914]: pgmap v1229: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.362 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.363 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.364 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:50:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:23 compute-0 podman[416297]: 2025-12-05 01:50:23.698277551 +0000 UTC m=+0.099655412 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 01:50:23 compute-0 podman[416298]: 2025-12-05 01:50:23.706651926 +0000 UTC m=+0.100830265 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:50:24 compute-0 ceph-mon[192914]: pgmap v1230: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.884 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.911 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.912 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.913 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.914 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.952 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.953 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.954 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.955 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.956 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:50:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 01:50:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:50:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982490388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.467 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.608 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.609 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.609 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.618 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.618 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:50:25 compute-0 podman[416362]: 2025-12-05 01:50:25.720712142 +0000 UTC m=+0.134156921 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 01:50:25 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1982490388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:50:25 compute-0 podman[416363]: 2025-12-05 01:50:25.771188231 +0000 UTC m=+0.179771844 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.234 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3751MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.450 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.550 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:50:26 compute-0 ceph-mon[192914]: pgmap v1231: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 01:50:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:50:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960956354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.139 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.150 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.172 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.174 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.174 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:50:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1960956354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:50:28 compute-0 sshd-session[416294]: Connection closed by invalid user admin 139.19.117.131 port 39764 [preauth]
Dec 05 01:50:28 compute-0 ceph-mon[192914]: pgmap v1232: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:29 compute-0 podman[416422]: 2025-12-05 01:50:29.72014345 +0000 UTC m=+0.124185871 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 01:50:29 compute-0 podman[158197]: time="2025-12-05T01:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:50:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:50:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Dec 05 01:50:29 compute-0 ceph-mon[192914]: pgmap v1233: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:31 compute-0 nova_compute[349548]: 2025-12-05 01:50:31.395 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:50:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:50:31 compute-0 nova_compute[349548]: 2025-12-05 01:50:31.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:32 compute-0 ceph-mon[192914]: pgmap v1234: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:34 compute-0 ceph-mon[192914]: pgmap v1235: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:36 compute-0 nova_compute[349548]: 2025-12-05 01:50:36.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:36 compute-0 nova_compute[349548]: 2025-12-05 01:50:36.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:36 compute-0 ceph-mon[192914]: pgmap v1236: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 01:50:36 compute-0 podman[416444]: 2025-12-05 01:50:36.727843487 +0000 UTC m=+0.121909277 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 05 01:50:36 compute-0 podman[416442]: 2025-12-05 01:50:36.740995397 +0000 UTC m=+0.136887829 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:50:36 compute-0 podman[416441]: 2025-12-05 01:50:36.742135249 +0000 UTC m=+0.145032818 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 01:50:36 compute-0 podman[416443]: 2025-12-05 01:50:36.784583862 +0000 UTC m=+0.172107369 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
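The three health_status events above are podman running each container's configured healthcheck ('test': '/openstack/healthcheck ...') and reporting the result with a zero failing streak. A minimal sketch, assuming only the field layout visible in these lines, of pulling the container name, status, and streak out of such an event:

import re

# Matches the podman health event layout seen above:
# "container health_status <64-hex id> (image=..., name=..., health_status=..., health_failing_streak=N, ...)"
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(.*?name=(?P<name>[^,]+), .*?"
    r"health_status=(?P<status>[^,]+), "
    r"health_failing_streak=(?P<streak>\d+)"
)

def parse_health_event(line):
    """Return (name, status, failing_streak) for a health_status line, else None."""
    m = HEALTH_RE.search(line)
    if not m:
        return None
    return m.group("name"), m.group("status"), int(m.group("streak"))

Run over the three lines above this yields ("node_exporter", "healthy", 0), ("multipathd", "healthy", 0), and ("ovn_controller", "healthy", 0).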
Dec 05 01:50:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Dec 05 01:50:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.315 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.316 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
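The two manager lines above describe a single worker thread servicing a longer list of pollsters, so each polling task queues behind the previous one. A minimal sketch of that situation with a stock ThreadPoolExecutor; the pollster names and timings are illustrative, not ceilometer's:

import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)            # stand-in for one pollster's work
    return name

pollsters = ["pollster-%d" % i for i in range(8)]
with ThreadPoolExecutor(max_workers=1) as executor:   # [1] worker thread, as logged
    done = list(executor.map(poll, pollsters))        # tasks run serially: ~0.8 s total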
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.326 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.344 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.346 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 01:50:38 compute-0 ceph-mon[192914]: pgmap v1237: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.147 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 05 Dec 2025 01:50:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 x-openstack-request-id: req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.148 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b82c3f0e-6d6a-4a7b-9556-b609ad63e497", "name": "vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:49:06Z", "updated": "2025-12-05T01:49:19Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.23", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:43:63:18"}, {"version": 4, "addr": "192.168.122.213", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:43:63:18"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:49:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.149 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497 used request id req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
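The REQ/RESP pair above is keystoneauth logging the novaclient GET as an equivalent curl command. A sketch of issuing the same lookup with python-novaclient; the keystone endpoint and credentials below are placeholders (assumptions), and only the server UUID comes from this log:

from keystoneauth1.identity import v3
from keystoneauth1.session import Session
from novaclient import client

auth = v3.Password(
    auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed endpoint
    username="ceilometer", password="REDACTED",                  # placeholders
    project_name="service",
    user_domain_name="Default", project_domain_name="Default",
)
nova = client.Client("2.1", session=Session(auth=auth))
server = nova.servers.get("b82c3f0e-6d6a-4a7b-9556-b609ad63e497")
print(server.name, server.status)   # vn-4ysdpfw-... ACTIVE, per the RESP BODY above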
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.151 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.154 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:50:39.154686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:50:39.159501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.197 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.198 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.199 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.235 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.236 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.238 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
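Each instance reports three disk.device.capacity samples: two of 1073741824 bytes and one small one. The two large samples line up with the m1.small flavor's 1 GiB root and 1 GiB ephemeral disks; reading the third device as the config drive ("config_drive": "True" in the RESP BODY above) is an inference from this log, not something the samples state. The arithmetic:

GiB = 1024 ** 3
assert 1073741824 == 1 * GiB   # flavor m1.small: disk=1 GiB root, ephemeral=1 GiB
# The remaining per-instance devices (485376 and 583680 bytes) are plausibly
# the config drives -- an inference from config_drive=True above, not a logged fact.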
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.241 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.243 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:50:39.243355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.247 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.248 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.249 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:50:39.249310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>]
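PollsterPermanentError is how a pollster tells the manager that a resource can never be served (here the libvirt inspector has no data for the rate pollster, per the DEBUG line above), so the manager excludes that resource from future cycles instead of retrying it. A simplified sketch of that raise-and-blacklist pattern; the bookkeeping here is illustrative, ceilometer's real manager is more involved:

class PollsterPermanentError(Exception):
    """Raised by a pollster for resources it can never poll (sketch)."""
    def __init__(self, resources):
        super().__init__(str(resources))
        self.resources = resources

blacklist = set()

def run_pollster(poll_fn, resources):
    usable = [r for r in resources if r not in blacklist]
    try:
        return poll_fn(usable)
    except PollsterPermanentError as err:
        blacklist.update(err.resources)   # drop these resources from future cycles
        return []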
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.252 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.254 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:50:39.254553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.346 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.347 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.348 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.427 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.428 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.429 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.433 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.434 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:50:39.433651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.435 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.435 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.436 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.437 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.438 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.442 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.442 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:50:39.442112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.443 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.444 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.445 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.445 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.446 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.449 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.450 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.451 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.452 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.452 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:50:39.450590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.453 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.454 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.455 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.460 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.460 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.461 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:50:39.459479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.462 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.463 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.464 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.466 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:50:39.468228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.509 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.558 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
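The power.state volume of 1 sampled for both instances is the numeric running state: Nova's power_state mapping (mirrored from libvirt's domain states) uses 1 for RUNNING, matching "OS-EXT-STS:power_state": 1 in the RESP BODY above. As a quick reference, a subset of that mapping:

# Subset of Nova's power_state values (nova/compute/power_state.py).
POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED", 4: "SHUTDOWN", 7: "SUSPENDED"}
assert POWER_STATES[1] == "RUNNING"   # the value sampled above for both instances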
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:50:39.561159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.562 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.563 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.563 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9045351841 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.564 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.564 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:50:39.567516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.568 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.569 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.569 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.570 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.571 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
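Both write meters are cumulative libvirt counters, emitted three times per instance (one sample per attached device). Assuming the per-device ordering matches between the latency and request meters, dividing one by the other gives a rough average service time per write; a quick check on the b69a0e24 values above:

    # Spot-check: disk.device.write.latency is cumulative nanoseconds,
    # disk.device.write.requests a cumulative count; same device order assumed.
    latency_ns = [7524740776, 28454640, 0]
    requests = [233, 1, 0]
    for dev, (ns, n) in enumerate(zip(latency_ns, requests)):
        avg_ms = ns / n / 1e6 if n else 0.0
        print(f"device {dev}: {avg_ms:.1f} ms per write")
    # device 0: 32.3 ms per write
    # device 1: 28.5 ms per write
    # device 2: 0.0 ms per write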
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:50:39.574340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.580 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.586 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b82c3f0e-6d6a-4a7b-9556-b609ad63e497 / tap554930d3-ff inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.587 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
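The inspector message for tap554930d3-ff shows how delta-style vNIC meters behave: the agent keeps the previous reading per (instance, interface) pair, and on the first observation there is no predecessor to subtract. A sketch of that predecessor cache, with all names invented:

    # Hypothetical cache behind "No delta meter predecessor".
    prev = {}

    def vnic_delta(instance, vnic, reading):
        key = (instance, vnic)
        if key not in prev:
            prev[key] = reading
            return None          # first poll: nothing to subtract yet
        delta, prev[key] = reading - prev[key], reading
        return max(delta, 0)     # clamp counter resets to zero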
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:50:39.590109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.591 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.595 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:50:39.594647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.597 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.598 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.598 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.601 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.602 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:50:39.601811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.603 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:50:39.606300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.607 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 1299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:50:39.610250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.611 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:50:39.614182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.615 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>]
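PollsterPermanentError (raised here because LibvirtInspector has no data source for the *.rate meters) is the signal that a resource can never be polled for this meter; the manager then drops it from the pollster's resource list for all subsequent intervals instead of retrying every cycle. A sketch of that bookkeeping, with the surrounding structure invented:

    # PollsterPermanentError is real (ceilometer.polling.plugin_base); the
    # blacklist bookkeeping below is an illustrative reconstruction.
    class PollsterPermanentError(Exception):
        def __init__(self, fail_res_list):
            self.fail_res_list = fail_res_list

    blacklist = {}  # meter -> resource ids excluded from future polls

    def poll_once(meter, get_samples, resources):
        skip = blacklist.setdefault(meter, set())
        try:
            return get_samples([r for r in resources if r not in skip])
        except PollsterPermanentError as err:
            skip.update(err.fail_res_list)  # "Prevent pollster ... anymore!"
            return []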
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 49.03125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:50:39.617103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.1640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:50:39.620191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.623 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.623 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:50:39.622932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:50:39.625565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 36620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:50:39.628580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 40610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
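The cpu meter is cumulative guest CPU time in nanoseconds, so a single reading says little on its own; utilization comes from differencing two polls and dividing by the elapsed wall-clock time and vCPU count. A worked example with an assumed previous reading (only the current 40610000000 ns value comes from the log):

    # Cumulative ns -> percent utilization between two polls.
    def cpu_util_pct(curr_ns, prev_ns, elapsed_s, vcpus=1):
        return (curr_ns - prev_ns) / (elapsed_s * 1e9 * vcpus) * 100.0

    # If the same instance had read 40310000000 ns five minutes earlier:
    print(cpu_util_pct(40610000000, 40310000000, 300))  # 0.1 (% of one vCPU)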
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:50:39.631461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:50:39.634167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
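The interleaved DEBUG lines are regular enough to mine: every sample the agent took this cycle appears as "<uuid>/<meter> volume: <n>". A small extractor for journal text like the above (the regex is tailored to these lines, not a ceilometer interface):

    # Pull (instance, meter, volume) triples back out of the journal text.
    import re

    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
    )

    def iter_samples(lines):
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m["instance"], m["meter"], float(m["volume"])

    # e.g. iter_samples(open("compute-0.log")) yields
    # ("b69a0e24-1bc4-46a5-92d7-367c1efd53df", "cpu", 36620000000.0), ...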
Dec 05 01:50:40 compute-0 ceph-mon[192914]: pgmap v1238: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Dec 05 01:50:41 compute-0 nova_compute[349548]: 2025-12-05 01:50:41.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.4 KiB/s wr, 1 op/s
Dec 05 01:50:41 compute-0 nova_compute[349548]: 2025-12-05 01:50:41.459 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:42 compute-0 ceph-mon[192914]: pgmap v1239: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.4 KiB/s wr, 1 op/s
Dec 05 01:50:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:44 compute-0 ceph-mon[192914]: pgmap v1240: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:45 compute-0 sudo[416525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:45 compute-0 sudo[416525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:45 compute-0 sudo[416525]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:45 compute-0 sudo[416550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:50:45 compute-0 sudo[416550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:45 compute-0 sudo[416550]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:50:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:50:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:50:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:50:45 compute-0 sudo[416575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:45 compute-0 sudo[416575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:45 compute-0 sudo[416575]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:45 compute-0 sudo[416600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:50:45 compute-0 sudo[416600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:50:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:50:45 compute-0 sudo[416600]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dfd76698-4d74-4eb8-a05a-9fd45bfeaa20 does not exist
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c7330ea7-0108-4db8-9e5a-20e8f92141e7 does not exist
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3b90d230-4a08-4aa5-8e75-27167d5bcbf4 does not exist
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:50:46 compute-0 sudo[416655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:46 compute-0 sudo[416655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:46 compute-0 sudo[416655]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:50:46 compute-0 sudo[416680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:50:46 compute-0 sudo[416680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:46 compute-0 sudo[416680]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:46 compute-0 sudo[416705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:46 compute-0 sudo[416705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:46 compute-0 nova_compute[349548]: 2025-12-05 01:50:46.405 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:46 compute-0 sudo[416705]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:46 compute-0 nova_compute[349548]: 2025-12-05 01:50:46.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:46 compute-0 sudo[416730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:50:46 compute-0 sudo[416730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:46 compute-0 ceph-mon[192914]: pgmap v1241: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.033589639 +0000 UTC m=+0.080504314 container create e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:50:47 compute-0 systemd[1]: Started libpod-conmon-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope.
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.008523734 +0000 UTC m=+0.055438449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.151594826 +0000 UTC m=+0.198509521 container init e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.160606729 +0000 UTC m=+0.207521404 container start e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.166430012 +0000 UTC m=+0.213344747 container attach e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:50:47 compute-0 lucid_mirzakhani[416804]: 167 167
Dec 05 01:50:47 compute-0 systemd[1]: libpod-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope: Deactivated successfully.
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.170413524 +0000 UTC m=+0.217328199 container died e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceebd4cdcdb5e3517391ddb933f65a4c5808f61c88577f3d45ca524ffb53b903-merged.mount: Deactivated successfully.
Dec 05 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.234580568 +0000 UTC m=+0.281495243 container remove e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 01:50:47 compute-0 systemd[1]: libpod-conmon-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope: Deactivated successfully.
Dec 05 01:50:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.508401234 +0000 UTC m=+0.091320148 container create 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.46379292 +0000 UTC m=+0.046711854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:47 compute-0 systemd[1]: Started libpod-conmon-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope.
Dec 05 01:50:47 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.640049244 +0000 UTC m=+0.222968148 container init 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:50:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.659637765 +0000 UTC m=+0.242556659 container start 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.666064755 +0000 UTC m=+0.248983689 container attach 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:50:48 compute-0 ceph-mon[192914]: pgmap v1242: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec 05 01:50:48 compute-0 vibrant_mestorf[416841]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:50:48 compute-0 vibrant_mestorf[416841]: --> relative data size: 1.0
Dec 05 01:50:48 compute-0 vibrant_mestorf[416841]: --> All data devices are unavailable
Dec 05 01:50:48 compute-0 systemd[1]: libpod-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Deactivated successfully.
Dec 05 01:50:48 compute-0 podman[416827]: 2025-12-05 01:50:48.910375488 +0000 UTC m=+1.493294412 container died 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:50:48 compute-0 systemd[1]: libpod-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Consumed 1.197s CPU time.
Dec 05 01:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8-merged.mount: Deactivated successfully.
Dec 05 01:50:49 compute-0 podman[416827]: 2025-12-05 01:50:49.016226073 +0000 UTC m=+1.599144967 container remove 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:50:49 compute-0 sudo[416730]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:49 compute-0 systemd[1]: libpod-conmon-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Deactivated successfully.
Dec 05 01:50:49 compute-0 sudo[416883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:49 compute-0 sudo[416883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:49 compute-0 sudo[416883]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:49 compute-0 sudo[416908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:50:49 compute-0 sudo[416908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:49 compute-0 sudo[416908]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:49 compute-0 sudo[416933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:49 compute-0 sudo[416933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:49 compute-0 sudo[416933]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 05 01:50:49 compute-0 sudo[416958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:50:49 compute-0 sudo[416958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.135450539 +0000 UTC m=+0.088603131 container create 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.097443881 +0000 UTC m=+0.050596513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:50 compute-0 systemd[1]: Started libpod-conmon-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope.
Dec 05 01:50:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.321860409 +0000 UTC m=+0.275013051 container init 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.338153907 +0000 UTC m=+0.291306499 container start 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:50 compute-0 hungry_torvalds[417036]: 167 167
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.346017278 +0000 UTC m=+0.299169910 container attach 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:50 compute-0 systemd[1]: libpod-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope: Deactivated successfully.
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.349471955 +0000 UTC m=+0.302624537 container died 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 01:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c5f1df5ee704705dddda7b76e842b89a61f6d0ceb89d127c260db790936eb48-merged.mount: Deactivated successfully.
Dec 05 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.43931189 +0000 UTC m=+0.392464452 container remove 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 01:50:50 compute-0 systemd[1]: libpod-conmon-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope: Deactivated successfully.
Dec 05 01:50:50 compute-0 ceph-mon[192914]: pgmap v1243: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 05 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.724502595 +0000 UTC m=+0.086676187 container create 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.685276613 +0000 UTC m=+0.047450215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:50 compute-0 systemd[1]: Started libpod-conmon-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope.
Dec 05 01:50:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.922361986 +0000 UTC m=+0.284535588 container init 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.943297855 +0000 UTC m=+0.305471437 container start 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.951271409 +0000 UTC m=+0.313445031 container attach 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:50:51 compute-0 nova_compute[349548]: 2025-12-05 01:50:51.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 05 01:50:51 compute-0 nova_compute[349548]: 2025-12-05 01:50:51.465 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]: {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     "0": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "devices": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "/dev/loop3"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             ],
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_name": "ceph_lv0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_size": "21470642176",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "name": "ceph_lv0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "tags": {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_name": "ceph",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.crush_device_class": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.encrypted": "0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_id": "0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.vdo": "0"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             },
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "vg_name": "ceph_vg0"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         }
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     ],
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     "1": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "devices": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "/dev/loop4"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             ],
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_name": "ceph_lv1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_size": "21470642176",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "name": "ceph_lv1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "tags": {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_name": "ceph",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.crush_device_class": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.encrypted": "0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_id": "1",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.vdo": "0"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             },
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "vg_name": "ceph_vg1"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         }
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     ],
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     "2": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "devices": [
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "/dev/loop5"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             ],
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_name": "ceph_lv2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_size": "21470642176",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "name": "ceph_lv2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "tags": {
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.cluster_name": "ceph",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.crush_device_class": "",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.encrypted": "0",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osd_id": "2",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:                 "ceph.vdo": "0"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             },
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "type": "block",
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:             "vg_name": "ceph_vg2"
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:         }
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]:     ]
Dec 05 01:50:51 compute-0 intelligent_cohen[417075]: }
Dec 05 01:50:51 compute-0 systemd[1]: libpod-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope: Deactivated successfully.
Dec 05 01:50:51 compute-0 podman[417059]: 2025-12-05 01:50:51.8475404 +0000 UTC m=+1.209713992 container died 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906-merged.mount: Deactivated successfully.
Dec 05 01:50:51 compute-0 podman[417059]: 2025-12-05 01:50:51.950652928 +0000 UTC m=+1.312826490 container remove 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 01:50:51 compute-0 systemd[1]: libpod-conmon-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope: Deactivated successfully.
Dec 05 01:50:51 compute-0 sudo[416958]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:52 compute-0 sudo[417096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:52 compute-0 sudo[417096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:52 compute-0 sudo[417096]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:52 compute-0 sudo[417121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:50:52 compute-0 sudo[417121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:52 compute-0 sudo[417121]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:52 compute-0 sudo[417146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:52 compute-0 sudo[417146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:52 compute-0 sudo[417146]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:52 compute-0 sudo[417171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:50:52 compute-0 sudo[417171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:52 compute-0 ceph-mon[192914]: pgmap v1244: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec 05 01:50:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.127108483 +0000 UTC m=+0.083845707 container create 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.093971442 +0000 UTC m=+0.050708716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:53 compute-0 systemd[1]: Started libpod-conmon-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope.
Dec 05 01:50:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.32409619 +0000 UTC m=+0.280833414 container init 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.345744049 +0000 UTC m=+0.302481273 container start 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.353234289 +0000 UTC m=+0.309971483 container attach 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:53 compute-0 interesting_mirzakhani[417250]: 167 167
Dec 05 01:50:53 compute-0 systemd[1]: libpod-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope: Deactivated successfully.
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.35787914 +0000 UTC m=+0.314616324 container died 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3135414f441c6ebb5a41c0262287df5e0969c357ad91399fd42f0b5716402a22-merged.mount: Deactivated successfully.
Dec 05 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.420793658 +0000 UTC m=+0.377530862 container remove 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:50:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 01:50:53 compute-0 systemd[1]: libpod-conmon-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope: Deactivated successfully.
Dec 05 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.683187992 +0000 UTC m=+0.079313610 container create e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.659758933 +0000 UTC m=+0.055884571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:50:53 compute-0 systemd[1]: Started libpod-conmon-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope.
Dec 05 01:50:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.877573555 +0000 UTC m=+0.273699193 container init e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.895435127 +0000 UTC m=+0.291560755 container start e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.902716202 +0000 UTC m=+0.298841850 container attach e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 05 01:50:53 compute-0 podman[417290]: 2025-12-05 01:50:53.918275929 +0000 UTC m=+0.109260722 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 01:50:53 compute-0 podman[417291]: 2025-12-05 01:50:53.934565337 +0000 UTC m=+0.105514607 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:50:54 compute-0 ceph-mon[192914]: pgmap v1245: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]: {
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_id": 0,
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "type": "bluestore"
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     },
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_id": 1,
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "type": "bluestore"
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     },
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_id": 2,
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:         "type": "bluestore"
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]:     }
Dec 05 01:50:54 compute-0 infallible_ptolemy[417292]: }
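[Annotation: the JSON block above is the stdout of the short-lived ceph container (create, start, attach, remove in the surrounding podman lines). Its shape matches ceph-volume raw-list style output: a map keyed by OSD UUID whose values carry osd_id, device, ceph_fsid, and type. A minimal Python sketch for summarizing such a dump follows; the filename osd_list.json is illustrative, not something the log names.]

    import json

    # Summarize an OSD map shaped like the block above: keys are OSD UUIDs,
    # values carry osd_id, device, ceph_fsid, and type. "osd_list.json" is
    # an assumed filename for illustration only.
    with open("osd_list.json") as f:
        osds = json.load(f)

    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"[{info['type']}] fsid={info['ceph_fsid']}")

[Expected output for the dump above: three lines mapping osd.0/1/2 to /dev/mapper/ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2, all bluestore, same fsid.]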
Dec 05 01:50:55 compute-0 systemd[1]: libpod-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Deactivated successfully.
Dec 05 01:50:55 compute-0 systemd[1]: libpod-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Consumed 1.149s CPU time.
Dec 05 01:50:55 compute-0 podman[417273]: 2025-12-05 01:50:55.0490328 +0000 UTC m=+1.445158529 container died e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781-merged.mount: Deactivated successfully.
Dec 05 01:50:55 compute-0 podman[417273]: 2025-12-05 01:50:55.157753876 +0000 UTC m=+1.553879504 container remove e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 01:50:55 compute-0 systemd[1]: libpod-conmon-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Deactivated successfully.
Dec 05 01:50:55 compute-0 sudo[417171]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:50:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:50:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 68cf5ad9-f711-450e-9428-69c25c9419b0 does not exist
Dec 05 01:50:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 62354550-00ff-45b0-8d49-dc6cd1436235 does not exist
Dec 05 01:50:55 compute-0 sudo[417376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:50:55 compute-0 sudo[417376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:55 compute-0 sudo[417376]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:55 compute-0 sudo[417401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:50:55 compute-0 sudo[417401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:50:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:55 compute-0 sudo[417401]: pam_unix(sudo:session): session closed for user root
Dec 05 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:50:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:50:56 compute-0 ceph-mon[192914]: pgmap v1246: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:56 compute-0 nova_compute[349548]: 2025-12-05 01:50:56.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:56 compute-0 nova_compute[349548]: 2025-12-05 01:50:56.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:50:56 compute-0 podman[417427]: 2025-12-05 01:50:56.718574395 +0000 UTC m=+0.127157905 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:50:56 compute-0 podman[417426]: 2025-12-05 01:50:56.767591843 +0000 UTC m=+0.169217717 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:50:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:50:58 compute-0 ceph-mon[192914]: pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:50:59 compute-0 podman[158197]: time="2025-12-05T01:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:50:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:50:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8629 "" "Go-http-client/1.1"
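[Annotation: the two GET requests above are the libpod REST API being polled over the podman socket; podman_exporter mounts /run/podman/podman.sock per its config_data logged earlier. A stdlib-only sketch that issues the same containers/json query, assuming that socket path; the UnixHTTPConnection helper is our own wiring, not a podman client library.]

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over a unix-domain socket; minimal helper for illustration.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the first GET in the log lines above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr.get("Id", "")[:12], ctr.get("Names"), ctr.get("State"))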
Dec 05 01:51:00 compute-0 ceph-mon[192914]: pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:00 compute-0 podman[417462]: 2025-12-05 01:51:00.676120535 +0000 UTC m=+0.089160816 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=)
Dec 05 01:51:01 compute-0 nova_compute[349548]: 2025-12-05 01:51:01.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:51:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:01 compute-0 nova_compute[349548]: 2025-12-05 01:51:01.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:02 compute-0 ceph-mon[192914]: pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:04 compute-0 ceph-mon[192914]: pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:06 compute-0 nova_compute[349548]: 2025-12-05 01:51:06.418 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:06 compute-0 nova_compute[349548]: 2025-12-05 01:51:06.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:06 compute-0 ceph-mon[192914]: pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:07 compute-0 podman[417482]: 2025-12-05 01:51:07.721828912 +0000 UTC m=+0.123168383 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:51:07 compute-0 podman[417483]: 2025-12-05 01:51:07.741770412 +0000 UTC m=+0.135340225 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:51:07 compute-0 podman[417485]: 2025-12-05 01:51:07.757779492 +0000 UTC m=+0.139083410 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 01:51:07 compute-0 podman[417484]: 2025-12-05 01:51:07.795752279 +0000 UTC m=+0.171403438 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:51:08 compute-0 ceph-mon[192914]: pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:10 compute-0 ceph-mon[192914]: pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:11 compute-0 nova_compute[349548]: 2025-12-05 01:51:11.420 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:11 compute-0 nova_compute[349548]: 2025-12-05 01:51:11.478 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:12 compute-0 ceph-mon[192914]: pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:14 compute-0 ceph-mon[192914]: pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:51:16
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:16 compute-0 nova_compute[349548]: 2025-12-05 01:51:16.422 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:16 compute-0 nova_compute[349548]: 2025-12-05 01:51:16.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:16 compute-0 ceph-mon[192914]: pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:51:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:17 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 01:51:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:18 compute-0 ceph-mon[192914]: pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 01:51:20 compute-0 ceph-mon[192914]: pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.326 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.328 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
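[Annotation: the burst above is nova-compute's ComputeManager walking its oslo_service periodic tasks; each "Running periodic task" line is one registered method, and _reclaim_queued_deletes short-circuits because reclaim_instance_interval <= 0. A toy sketch of how such tasks are declared with oslo_service; the class name, task body, and 60 s spacing are invented for illustration.]

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ToyManager(periodic_task.PeriodicTasks):
        # Illustrative analogue of ComputeManager's periodic-task registry.
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_something(self, context):
            # Stand-in for tasks like _poll_rebooting_instances above.
            print("running _poll_something")

    mgr = ToyManager()
    # Nova's service loop invokes this on a timer; one manual tick here.
    mgr.run_periodic_tasks(context=None)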
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.426 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.483 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:22 compute-0 nova_compute[349548]: 2025-12-05 01:51:22.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:22 compute-0 ceph-mon[192914]: pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:51:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.500 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:51:24 compute-0 ceph-mon[192914]: pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:24 compute-0 podman[417566]: 2025-12-05 01:51:24.722211073 +0000 UTC m=+0.121941548 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:51:24 compute-0 podman[417565]: 2025-12-05 01:51:24.72458032 +0000 UTC m=+0.129712967 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.962 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.982 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.982 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
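[Annotation: the network_info blob written to the instance info cache above is a list of VIF dicts. A small sketch extracting fixed and floating addresses from that structure; the literal below is the logged entry trimmed to the fields the code reads, with the full structure in the nova_compute line above.]

    import json

    # The logged VIF entry, trimmed to the fields this sketch reads.
    network_info = json.loads("""
    [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224",
      "address": "fa:16:3e:0c:12:24",
      "network": {"subnets": [{"ips": [
        {"address": "192.168.0.48", "type": "fixed",
         "floating_ips": [{"address": "192.168.122.212", "type": "floating"}]}
      ]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(f"port {vif['id']}: fixed {ip['address']}, "
                      f"floating {floating}")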
Dec 05 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.983 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.098 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.124 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.124 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:51:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:51:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815787811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.624 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:51:25 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3815787811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.750 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.750 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.176 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.177 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.177 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.178 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.300 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.301 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.302 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.302 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.398 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.486 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:51:26 compute-0 ceph-mon[192914]: pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:51:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287143309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.877 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.889 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.911 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.914 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.914 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:51:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:27 compute-0 podman[417653]: 2025-12-05 01:51:27.688510403 +0000 UTC m=+0.096014209 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 05 01:51:27 compute-0 podman[417652]: 2025-12-05 01:51:27.706340084 +0000 UTC m=+0.112938676 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:51:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1287143309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:51:28 compute-0 ceph-mon[192914]: pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:29 compute-0 podman[158197]: time="2025-12-05T01:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:51:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:51:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8622 "" "Go-http-client/1.1"
Dec 05 01:51:30 compute-0 ceph-mon[192914]: pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:51:31 compute-0 nova_compute[349548]: 2025-12-05 01:51:31.430 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:31 compute-0 nova_compute[349548]: 2025-12-05 01:51:31.489 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:31 compute-0 podman[417690]: 2025-12-05 01:51:31.710353631 +0000 UTC m=+0.134218513 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git)
Dec 05 01:51:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:32 compute-0 ceph-mon[192914]: pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:34 compute-0 ceph-mon[192914]: pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:36 compute-0 nova_compute[349548]: 2025-12-05 01:51:36.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:36 compute-0 nova_compute[349548]: 2025-12-05 01:51:36.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:36 compute-0 ceph-mon[192914]: pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:38 compute-0 podman[417711]: 2025-12-05 01:51:38.695828074 +0000 UTC m=+0.101525324 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:51:38 compute-0 podman[417710]: 2025-12-05 01:51:38.697708597 +0000 UTC m=+0.109299033 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:51:38 compute-0 podman[417713]: 2025-12-05 01:51:38.712827962 +0000 UTC m=+0.112166913 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 01:51:38 compute-0 podman[417712]: 2025-12-05 01:51:38.740281934 +0000 UTC m=+0.132225118 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:51:38 compute-0 ceph-mon[192914]: pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:40 compute-0 ceph-mon[192914]: pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:41 compute-0 nova_compute[349548]: 2025-12-05 01:51:41.435 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:41 compute-0 nova_compute[349548]: 2025-12-05 01:51:41.492 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:41 compute-0 ceph-mon[192914]: pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:44 compute-0 ceph-mon[192914]: pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:51:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:51:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:51:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:51:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:51:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:51:46 compute-0 nova_compute[349548]: 2025-12-05 01:51:46.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:46 compute-0 nova_compute[349548]: 2025-12-05 01:51:46.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:46 compute-0 ceph-mon[192914]: pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:48 compute-0 sshd-session[417793]: Connection closed by 189.219.254.5 port 41749
Dec 05 01:51:48 compute-0 ceph-mon[192914]: pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:50 compute-0 ceph-mon[192914]: pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:51 compute-0 nova_compute[349548]: 2025-12-05 01:51:51.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:51 compute-0 nova_compute[349548]: 2025-12-05 01:51:51.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:52 compute-0 ceph-mon[192914]: pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:53 compute-0 sshd-session[417794]: Connection closed by authenticating user root 189.219.254.5 port 41813 [preauth]
Dec 05 01:51:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:54 compute-0 ceph-mon[192914]: pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:55 compute-0 sudo[417797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:51:55 compute-0 sudo[417797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:55 compute-0 sudo[417797]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:55 compute-0 sudo[417835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:51:55 compute-0 sudo[417835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:55 compute-0 sudo[417835]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:55 compute-0 podman[417820]: 2025-12-05 01:51:55.744352578 +0000 UTC m=+0.152779465 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:51:55 compute-0 podman[417822]: 2025-12-05 01:51:55.747948779 +0000 UTC m=+0.144875873 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:51:55 compute-0 sudo[417885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:51:55 compute-0 sudo[417885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:55 compute-0 sudo[417885]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:55 compute-0 sudo[417910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:51:55 compute-0 sudo[417910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:51:56 compute-0 nova_compute[349548]: 2025-12-05 01:51:56.441 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:56 compute-0 nova_compute[349548]: 2025-12-05 01:51:56.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:51:56 compute-0 ceph-mon[192914]: pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:56 compute-0 sudo[417910]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c43b53e7-837e-4d33-98f2-01878302d075 does not exist
Dec 05 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8340c628-1e23-458d-aa27-732ac72a16da does not exist
Dec 05 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15e66a8b-bfc7-4615-b8cd-690ffce4bbf6 does not exist
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:51:56 compute-0 sudo[417965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:51:56 compute-0 sudo[417965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:56 compute-0 sudo[417965]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:57 compute-0 sudo[417990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:51:57 compute-0 sudo[417990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:57 compute-0 sudo[417990]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:57 compute-0 sudo[418015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:51:57 compute-0 sudo[418015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:57 compute-0 sudo[418015]: pam_unix(sudo:session): session closed for user root
Dec 05 01:51:57 compute-0 sudo[418040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:51:57 compute-0 sudo[418040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:51:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:51:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.029413222 +0000 UTC m=+0.119573382 container create f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:57.990616951 +0000 UTC m=+0.080777131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:51:58 compute-0 systemd[1]: Started libpod-conmon-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope.
Dec 05 01:51:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.186109716 +0000 UTC m=+0.276269906 container init f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.20262275 +0000 UTC m=+0.292782900 container start f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.208153445 +0000 UTC m=+0.298313605 container attach f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 01:51:58 compute-0 relaxed_nightingale[418131]: 167 167
Dec 05 01:51:58 compute-0 systemd[1]: libpod-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope: Deactivated successfully.
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.211686755 +0000 UTC m=+0.301846915 container died f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:51:58 compute-0 podman[418116]: 2025-12-05 01:51:58.224456103 +0000 UTC m=+0.117381080 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:51:58 compute-0 podman[418119]: 2025-12-05 01:51:58.236515142 +0000 UTC m=+0.131524297 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec 05 01:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8fa11ecc8fe33b620ac001eceeb8fef697271570d29c3088874843a45907d8e-merged.mount: Deactivated successfully.
Dec 05 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.286821076 +0000 UTC m=+0.376981236 container remove f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:51:58 compute-0 systemd[1]: libpod-conmon-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope: Deactivated successfully.
Dec 05 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.571657642 +0000 UTC m=+0.095815224 container create e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.543346096 +0000 UTC m=+0.067503718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:51:58 compute-0 systemd[1]: Started libpod-conmon-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope.
Dec 05 01:51:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.745698834 +0000 UTC m=+0.269856416 container init e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.782066896 +0000 UTC m=+0.306224508 container start e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.79038886 +0000 UTC m=+0.314546462 container attach e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 01:51:58 compute-0 ceph-mon[192914]: pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:51:59 compute-0 podman[158197]: time="2025-12-05T01:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:51:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45520 "" "Go-http-client/1.1"
Dec 05 01:51:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9031 "" "Go-http-client/1.1"
Dec 05 01:51:59 compute-0 kind_stonebraker[418195]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:51:59 compute-0 kind_stonebraker[418195]: --> relative data size: 1.0
Dec 05 01:51:59 compute-0 kind_stonebraker[418195]: --> All data devices are unavailable
Dec 05 01:52:00 compute-0 systemd[1]: libpod-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Deactivated successfully.
Dec 05 01:52:00 compute-0 systemd[1]: libpod-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Consumed 1.173s CPU time.
Dec 05 01:52:00 compute-0 podman[418178]: 2025-12-05 01:52:00.043999334 +0000 UTC m=+1.568156946 container died e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b-merged.mount: Deactivated successfully.
Dec 05 01:52:00 compute-0 podman[418178]: 2025-12-05 01:52:00.132829111 +0000 UTC m=+1.656986683 container remove e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:52:00 compute-0 systemd[1]: libpod-conmon-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Deactivated successfully.
Dec 05 01:52:00 compute-0 sudo[418040]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:00 compute-0 sudo[418236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:52:00 compute-0 sudo[418236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:00 compute-0 sudo[418236]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:00 compute-0 sudo[418261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:52:00 compute-0 sudo[418261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:00 compute-0 sudo[418261]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:00 compute-0 sudo[418286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:52:00 compute-0 sudo[418286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:00 compute-0 sudo[418286]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:00 compute-0 sudo[418311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:52:00 compute-0 sudo[418311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:00 compute-0 ceph-mon[192914]: pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.234314359 +0000 UTC m=+0.063518166 container create 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.213158885 +0000 UTC m=+0.042362672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:52:01 compute-0 systemd[1]: Started libpod-conmon-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope.
Dec 05 01:52:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.38661293 +0000 UTC m=+0.215816787 container init 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.400462009 +0000 UTC m=+0.229665786 container start 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.405511581 +0000 UTC m=+0.234715458 container attach 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 01:52:01 compute-0 hopeful_wright[418389]: 167 167
Dec 05 01:52:01 compute-0 systemd[1]: libpod-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope: Deactivated successfully.
Dec 05 01:52:01 compute-0 conmon[418389]: conmon 683e804e26084946bf62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope/container/memory.events
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.410546562 +0000 UTC m=+0.239750329 container died 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:52:01 compute-0 nova_compute[349548]: 2025-12-05 01:52:01.443 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dfae183bad71dc35b57cffa714a869e9e15e288d6bf93c153de4c81d7f0c3ba-merged.mount: Deactivated successfully.
Dec 05 01:52:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.481931659 +0000 UTC m=+0.311135436 container remove 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:52:01 compute-0 nova_compute[349548]: 2025-12-05 01:52:01.504 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:01 compute-0 systemd[1]: libpod-conmon-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope: Deactivated successfully.
Dec 05 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.776507827 +0000 UTC m=+0.099111447 container create 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.724759853 +0000 UTC m=+0.047363563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:52:01 compute-0 systemd[1]: Started libpod-conmon-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope.
Dec 05 01:52:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:01 compute-0 podman[418428]: 2025-12-05 01:52:01.938195712 +0000 UTC m=+0.102847912 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm, release=1214.1726694543, architecture=x86_64)
Dec 05 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.95558855 +0000 UTC m=+0.278192200 container init 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.968523604 +0000 UTC m=+0.291127224 container start 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.973067132 +0000 UTC m=+0.295670782 container attach 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:52:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]: {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     "0": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "devices": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "/dev/loop3"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             ],
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_name": "ceph_lv0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_size": "21470642176",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "name": "ceph_lv0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "tags": {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_name": "ceph",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.crush_device_class": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.encrypted": "0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_id": "0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.vdo": "0"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             },
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "vg_name": "ceph_vg0"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         }
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     ],
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     "1": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "devices": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "/dev/loop4"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             ],
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_name": "ceph_lv1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_size": "21470642176",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "name": "ceph_lv1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "tags": {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_name": "ceph",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.crush_device_class": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.encrypted": "0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_id": "1",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.vdo": "0"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             },
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "vg_name": "ceph_vg1"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         }
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     ],
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     "2": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "devices": [
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "/dev/loop5"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             ],
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_name": "ceph_lv2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_size": "21470642176",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "name": "ceph_lv2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "tags": {
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.cluster_name": "ceph",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.crush_device_class": "",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.encrypted": "0",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osd_id": "2",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:                 "ceph.vdo": "0"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             },
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "type": "block",
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:             "vg_name": "ceph_vg2"
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:         }
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]:     ]
Dec 05 01:52:02 compute-0 stupefied_gauss[418439]: }
Dec 05 01:52:02 compute-0 systemd[1]: libpod-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope: Deactivated successfully.
Dec 05 01:52:02 compute-0 podman[418414]: 2025-12-05 01:52:02.739836593 +0000 UTC m=+1.062440223 container died 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21-merged.mount: Deactivated successfully.
Dec 05 01:52:02 compute-0 ceph-mon[192914]: pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:02 compute-0 podman[418414]: 2025-12-05 01:52:02.821490378 +0000 UTC m=+1.144093998 container remove 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:52:02 compute-0 systemd[1]: libpod-conmon-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope: Deactivated successfully.
Dec 05 01:52:02 compute-0 sudo[418311]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:03 compute-0 sudo[418470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:52:03 compute-0 sudo[418470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:03 compute-0 sudo[418470]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:03 compute-0 sudo[418495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:52:03 compute-0 sudo[418495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:03 compute-0 sudo[418495]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:03 compute-0 sudo[418520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:52:03 compute-0 sudo[418520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:03 compute-0 sudo[418520]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:03 compute-0 sudo[418545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:52:03 compute-0 sudo[418545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:03 compute-0 podman[418607]: 2025-12-05 01:52:03.97255735 +0000 UTC m=+0.080120933 container create e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:03.947628269 +0000 UTC m=+0.055191912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:52:04 compute-0 systemd[1]: Started libpod-conmon-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope.
Dec 05 01:52:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.105314761 +0000 UTC m=+0.212878364 container init e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.114337655 +0000 UTC m=+0.221901228 container start e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.118961865 +0000 UTC m=+0.226525458 container attach e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:52:04 compute-0 brave_ellis[418623]: 167 167
Dec 05 01:52:04 compute-0 systemd[1]: libpod-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope: Deactivated successfully.
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.121487616 +0000 UTC m=+0.229051189 container died e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecabc8b3369dd63e3c4f9d4304f40126a6addbf5bafffc606fa1030166ae5afa-merged.mount: Deactivated successfully.
Dec 05 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.174244708 +0000 UTC m=+0.281808291 container remove e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:52:04 compute-0 systemd[1]: libpod-conmon-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope: Deactivated successfully.
Dec 05 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.424362318 +0000 UTC m=+0.086498972 container create 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.391196116 +0000 UTC m=+0.053332810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:52:04 compute-0 systemd[1]: Started libpod-conmon-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope.
Dec 05 01:52:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.607822035 +0000 UTC m=+0.269958719 container init 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.627155158 +0000 UTC m=+0.289291772 container start 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.631615063 +0000 UTC m=+0.293751717 container attach 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:52:04 compute-0 ceph-mon[192914]: pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:05 compute-0 modest_shockley[418662]: {
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_id": 0,
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "type": "bluestore"
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     },
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_id": 1,
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "type": "bluestore"
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     },
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_id": 2,
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:52:05 compute-0 modest_shockley[418662]:         "type": "bluestore"
Dec 05 01:52:05 compute-0 modest_shockley[418662]:     }
Dec 05 01:52:05 compute-0 modest_shockley[418662]: }
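The JSON block printed by modest_shockley is the host's OSD inventory, in the shape of a ceph-volume raw list-style probe: one entry per OSD keyed by osd_uuid, naming the backing LVM device, the cluster fsid, and the bluestore type. A minimal sketch for reading it, assuming the block above is saved to osd_list.json (the filename is illustrative):

    import json

    # Load the per-OSD inventory as printed by the container above.
    with open("osd_list.json") as f:
        osds = json.load(f)

    # Keys are osd_uuid values; each entry names the backing device.
    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: type={info['type']} "
              f"device={info['device']} fsid={info['ceph_fsid']}")

All three entries share ceph_fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee, i.e. a single cluster with OSDs 0-2 on ceph_vg0/1/2.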
Dec 05 01:52:05 compute-0 systemd[1]: libpod-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Deactivated successfully.
Dec 05 01:52:05 compute-0 podman[418646]: 2025-12-05 01:52:05.778802115 +0000 UTC m=+1.440938739 container died 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 01:52:05 compute-0 systemd[1]: libpod-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Consumed 1.138s CPU time.
Dec 05 01:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f-merged.mount: Deactivated successfully.
Dec 05 01:52:05 compute-0 ceph-mon[192914]: pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:05 compute-0 podman[418646]: 2025-12-05 01:52:05.855872372 +0000 UTC m=+1.518008996 container remove 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:52:05 compute-0 systemd[1]: libpod-conmon-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Deactivated successfully.
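The scope teardown above closes one complete podman container lifecycle as logged: create -> init -> start -> attach, then died -> remove once the one-shot command exits, with systemd deactivating the libpod and conmon scopes. A sketch for watching that sequence live, assuming podman-events(1) flags as in recent podman releases:

    import json
    import subprocess

    # Follow podman's event stream; one JSON object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            # e.g. "create 2160cf935e6a modest_shockley", then init/start/...
            print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name"))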
Dec 05 01:52:05 compute-0 sudo[418545]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:52:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:52:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:52:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:52:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0b2dd061-b004-4831-8301-51d38d67fa8f does not exist
Dec 05 01:52:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0d1fc0e5-84b2-4c26-a1fa-44e8dffc43ec does not exist
Dec 05 01:52:06 compute-0 sudo[418708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:52:06 compute-0 sudo[418708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:06 compute-0 sudo[418708]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:06 compute-0 sudo[418733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:52:06 compute-0 sudo[418733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:52:06 compute-0 sudo[418733]: pam_unix(sudo:session): session closed for user root
Dec 05 01:52:06 compute-0 nova_compute[349548]: 2025-12-05 01:52:06.446 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:06 compute-0 nova_compute[349548]: 2025-12-05 01:52:06.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:52:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:52:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:07 compute-0 ceph-mon[192914]: pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:09 compute-0 podman[418758]: 2025-12-05 01:52:09.721691344 +0000 UTC m=+0.114533910 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 05 01:52:09 compute-0 podman[418759]: 2025-12-05 01:52:09.737022975 +0000 UTC m=+0.131275031 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:52:09 compute-0 podman[418761]: 2025-12-05 01:52:09.745267997 +0000 UTC m=+0.114102618 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec 05 01:52:09 compute-0 podman[418760]: 2025-12-05 01:52:09.799140911 +0000 UTC m=+0.181615386 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
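Each health_status line above is podman's healthcheck timer running the configured 'test' command ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<name>) inside the container; a zero exit keeps health_failing_streak at 0. The same check can be triggered by hand; a sketch using the container_name from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test;
    # exit code 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")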
Dec 05 01:52:10 compute-0 ceph-mon[192914]: pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:11 compute-0 nova_compute[349548]: 2025-12-05 01:52:11.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:11 compute-0 nova_compute[349548]: 2025-12-05 01:52:11.509 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:12 compute-0 ceph-mon[192914]: pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:14 compute-0 ceph-mon[192914]: pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:52:16
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes']
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
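The balancer pass at 01:52:16 runs in upmap mode with a 5% misplaced ceiling and finishes with "prepared 0/10 changes": with all 321 PGs active+clean there is nothing to move. Illustrative gating logic only, not the module's actual code:

    def should_optimize(misplaced_ratio: float, max_misplaced: float = 0.05) -> bool:
        # The balancer skips a round while too much data is already in
        # flight; 0.05 matches "max misplaced 0.050000" above.
        return misplaced_ratio <= max_misplaced

    # 321/321 PGs active+clean -> nothing misplaced, so the round runs
    # but prepares no upmap changes.
    assert should_optimize(0.0)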
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:16 compute-0 nova_compute[349548]: 2025-12-05 01:52:16.450 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:16 compute-0 nova_compute[349548]: 2025-12-05 01:52:16.512 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:16 compute-0 ceph-mon[192914]: pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:52:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:18 compute-0 ceph-mon[192914]: pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:20 compute-0 ceph-mon[192914]: pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:21 compute-0 nova_compute[349548]: 2025-12-05 01:52:21.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:21 compute-0 nova_compute[349548]: 2025-12-05 01:52:21.515 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:22 compute-0 ceph-mon[192914]: pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.884 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.884 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.886 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:52:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.534 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.535 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.536 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:52:24 compute-0 ceph-mon[192914]: pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.107 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.124 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
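The refreshed cache entry for instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 carries the full Neutron port description: one OVS port with fixed IP 192.168.0.23 and floating IP 192.168.122.213. A small sketch for walking that structure, trimmed to just the fields used:

    # Trimmed from the "Updating instance_info_cache ..." entry above.
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.23",
                     "floating_ips": [{"address": "192.168.122.213"}]}],
        }]},
    }]

    def addresses(network_info):
        """Yield (address, kind) pairs for fixed and floating IPs."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], "fixed"
                    for fip in ip.get("floating_ips", []):
                        yield fip["address"], "floating"

    print(list(addresses(network_info)))
    # [('192.168.0.23', 'fixed'), ('192.168.122.213', 'floating')]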
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.150 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.151 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.151 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.152 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.152 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:52:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:52:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054848363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.629 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:52:25 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1054848363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
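The resource audit shells out to exactly the command logged, and the monitor's audit channel records the dispatch from client.openstack. A standalone equivalent of that probe, assuming the same /etc/ceph/ceph.conf and openstack keyring are readable on the host (field names from ceph df's JSON output):

    import json
    import subprocess

    # Same command nova_compute runs via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)
    total = stats["stats"]["total_bytes"]        # raw cluster capacity
    avail = stats["stats"]["total_avail_bytes"]  # raw free space
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")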
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.767 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.782 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.256 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.257 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3745MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.258 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.258 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.446 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:26 compute-0 podman[418876]: 2025-12-05 01:52:26.69557926 +0000 UTC m=+0.088144428 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
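The pg_autoscaler lines above all apply one rule: a pool's PG target is its share of raw capacity (the "using ... of space" ratio) times the cluster PG budget, times the pool's bias, then quantized. Assuming the default budget of 100 PGs per OSD across the 3 OSDs here, which reproduces every logged target exactly:

    # Assumed budget: mon_target_pg_per_osd (default 100) x 3 OSDs.
    PG_BUDGET = 100 * 3

    def pg_target(usage_ratio: float, bias: float = 1.0) -> float:
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06))           # .mgr -> 0.0021557249951...
    print(pg_target(0.0011047613669662043))           # vms  -> 0.3314284100898...
    print(pg_target(5.087256625643029e-07, bias=4.0)) # cephfs.cephfs.meta -> 0.0006104707950...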
Dec 05 01:52:26 compute-0 podman[418865]: 2025-12-05 01:52:26.734473754 +0000 UTC m=+0.131117757 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:52:26 compute-0 ceph-mon[192914]: pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:52:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898561506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:52:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:52:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Cumulative writes: 5973 writes, 26K keys, 5973 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 5973 writes, 5973 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1367 writes, 6150 keys, 1367 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
                                            Interval WAL: 1367 writes, 1367 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    108.2      0.28              0.13        15    0.019       0      0       0.0       0.0
                                              L6      1/0    7.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    135.8    109.9      0.90              0.42        14    0.064     63K   7823       0.0       0.0
                                             Sum      1/0    7.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    103.6    109.5      1.18              0.55        29    0.041     63K   7823       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     98.8     99.9      0.38              0.17         8    0.047     20K   2553       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.8    109.9      0.90              0.42        14    0.064     63K   7823       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.0      0.27              0.13        14    0.020       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 2400.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.029, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.2 seconds
                                            Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 13.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000117 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(849,12.64 MB,4.10526%) FilterBlock(30,181.92 KB,0.0576812%) IndexBlock(30,338.42 KB,0.107302%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
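A rough cross-check of the cumulative counters in the dump: 0.029 GB was flushed into the tree while compaction wrote 0.13 GB back out, i.e. about 4.5x write amplification, in line with the W-Amp column above (a back-of-envelope reading, not RocksDB's exact W-Amp definition):

    flushed_gb = 0.029          # "Flush(GB): cumulative 0.029"
    compaction_write_gb = 0.13  # "Cumulative compaction: 0.13 GB write"
    print(round(compaction_write_gb / flushed_gb, 1))  # ~4.5x amplification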
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.987 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.998 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.024 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.027 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.027 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
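The unchanged inventory record above is also enough to derive what the scheduler can actually place on this host: Placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. A minimal sketch using the logged values (the formula is standard Placement arithmetic; the annotations are mine):

    # Inventory copied from the nova.scheduler.client.report line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable}")
    # MEMORY_MB: 7168.0  (7680 MB minus the 512 MB reservation, ratio 1.0)
    # VCPU: 32.0         (8 VCPUs oversubscribed 4:1)
    # DISK_GB: 52.2      (58 GB usable after reservation, derated by 0.9)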
Dec 05 01:52:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1898561506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
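This dispatch is the mon-side view of the same "ceph df --format=json" call that nova_compute timed a few lines earlier (0.541s). Re-running it by hand shows the totals that the pgmap lines keep repeating; the JSON keys used below ("stats", "total_bytes", "total_avail_bytes") match the usual ceph df schema but are worth verifying against your Ceph release:

    import json
    import subprocess

    # The exact command from the nova_compute processutils line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB / "
          f"{stats['total_bytes'] / gib:.0f} GiB avail")
    # Should agree with the pgmap lines: 60 GiB / 60 GiB avail.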
Dec 05 01:52:28 compute-0 podman[418929]: 2025-12-05 01:52:28.720058631 +0000 UTC m=+0.122203236 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:52:28 compute-0 podman[418928]: 2025-12-05 01:52:28.737655466 +0000 UTC m=+0.140600653 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
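The health_status=healthy fields in the two podman entries above are the result of podman running each container's configured healthcheck test ("/openstack/healthcheck ipmi" and "/openstack/healthcheck compute"). The same check can be triggered on demand, where a zero exit status is what the journal reports as healthy. A small sketch, with container names taken from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command once; exit status 0 means the container is healthy.
    for name in ("ceilometer_agent_ipmi", "ceilometer_agent_compute"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")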
Dec 05 01:52:28 compute-0 ceph-mon[192914]: pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:29 compute-0 podman[158197]: time="2025-12-05T01:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:52:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:52:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
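The two GET lines above are a client walking podman's libpod REST API over its unix socket. The same endpoints can be queried directly; the sketch below assumes the third-party requests-unixsocket package and the default root socket path, both of which may differ on this host:

    import requests_unixsocket  # pip install requests-unixsocket (assumption)

    session = requests_unixsocket.Session()
    # The socket path is URL-encoded into the host part of the URL;
    # /run/podman/podman.sock is the usual root socket (an assumption here).
    base = "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
    resp = session.get(f"{base}/v4.9.3/libpod/containers/json?all=true")
    resp.raise_for_status()
    for c in resp.json():
        print(c["Id"][:12], c.get("Names"), c.get("State"))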
Dec 05 01:52:30 compute-0 ceph-mon[192914]: pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
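The exporter errors above share one cause: the ovs-appctl-style calls it makes are addressed to daemon control sockets, and none were found. OVS and OVN daemons conventionally create <rundir>/<daemon>.<pid>.ctl, so a glob makes a quick diagnostic. On a compute node that runs ovn-controller but not ovn-northd, and that uses the kernel datapath (so no dpif-netdev PMD threads), most of these misses are expected noise rather than faults:

    import glob

    # Conventional control-socket locations; rundirs vary by distribution.
    patterns = [
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "/var/run/ovn/ovn-northd.*.ctl",
    ]
    for pat in patterns:
        print(pat, "->", glob.glob(pat) or "missing")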
Dec 05 01:52:31 compute-0 nova_compute[349548]: 2025-12-05 01:52:31.458 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:31 compute-0 nova_compute[349548]: 2025-12-05 01:52:31.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
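The _set_new_cache_sizes line above is the mon's cache autotuner dividing its budget between the incremental and full osdmap caches and the rocksdb (kv) cache. The three allocations add back up to the reported cache_size to within a small slack, which is a quick way to sanity-check the numbers (the split interpretation is mine; the sum is just arithmetic):

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    parts = {"inc_alloc": 348127232,
             "full_alloc": 348127232,
             "kv_alloc": 322961408}
    total = sum(parts.values())
    print(total, "allocated of", cache_size,
          "->", cache_size - total, "bytes slack")
    # 1019215872 allocated of 1020054731 -> 838859 bytes slack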
Dec 05 01:52:32 compute-0 podman[418965]: 2025-12-05 01:52:32.690756011 +0000 UTC m=+0.108659015 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 01:52:32 compute-0 ceph-mon[192914]: pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:34 compute-0 ceph-mon[192914]: pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:36 compute-0 nova_compute[349548]: 2025-12-05 01:52:36.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:36 compute-0 nova_compute[349548]: 2025-12-05 01:52:36.528 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:36 compute-0 ceph-mon[192914]: pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.316 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.317 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
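The two manager lines above explain the pacing of everything that follows: all pollsters in [pollsters] are queued onto a single-worker ThreadPoolExecutor, so they run strictly one after another and the cycle time is roughly the sum of the individual pollster durations. A toy reproduction of that serialization (names and durations are made up):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one pollster's polling work
        return name

    names = [f"pollster-{i}" for i in range(10)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as ex:  # matches "[1] threads"
        list(ex.map(pollster, names))
    print(f"{len(names)} pollsters took {time.monotonic() - start:.1f}s")
    # ~1.0s: with one worker, nothing overlaps.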
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.331 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:52:38.339029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:52:38.342393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.378 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.378 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.379 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.418 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.419 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.420 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
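The six disk.device.capacity samples above decode cleanly against the m1.small flavor shown in the discovery lines: each instance reports two 1073741824-byte devices, exactly 1 GiB each, for the flavor's 1 GB root and 1 GB ephemeral disks, plus one small device (485376 or 583680 bytes) that is plausibly a config-drive image (that last attribution is an assumption):

    # Volumes copied from the disk.device.capacity samples above.
    for vol in (1073741824, 485376, 583680):
        print(f"{vol:>10} B = {vol / 1024**3:.6f} GiB")
    # 1073741824 B = 1.000000 GiB  (flavor root / ephemeral disk)
    #     485376 B = 0.000452 GiB
    #     583680 B = 0.000544 GiB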
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.423 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.425 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:52:38.422837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:52:38.427553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.520 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.521 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.522 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.619 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.620 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.621 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.624 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.624 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:52:38.623704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.625 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.627 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:52:38.628798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.630 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.631 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.631 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.632 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:52:38.635033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.636 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.637 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.637 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.638 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.639 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.643 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:52:38.641714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.644 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.644 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.645 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.646 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:52:38.648433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.691 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.734 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
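[annotation] Both instances report power.state volume 1. The value is consistent with libvirt's virDomainState enum, where 1 is VIR_DOMAIN_RUNNING; the mapping below is that standard enum, not ceilometer code:

LIBVIRT_DOMAIN_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATE[1])  # -> running, matching both samples above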
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9113944897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:52:38.735397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:52:38.737641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
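[annotation] disk.device.write.latency is a cumulative nanosecond counter (accumulated write time, as libvirt's block stats report it) and disk.device.write.requests a cumulative request count, so a mean per-request latency can be derived from two consecutive polls. A sketch; only the current readings 7524740776 ns and 233 requests come from this log, the previous-poll values are made up:

def mean_write_latency_ms(lat_ns_prev, lat_ns_now, req_prev, req_now):
    # delta of accumulated write time divided by delta of completed requests
    dreq = req_now - req_prev
    if dreq <= 0:
        return 0.0  # nothing completed in this interval
    return (lat_ns_now - lat_ns_prev) / dreq / 1e6  # ns -> ms

print(mean_write_latency_ms(7_500_000_000, 7_524_740_776, 200, 233))  # ~0.75 ms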
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:52:38.739564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.745 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
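[annotation] Every pollster in this cycle logs a coordination group name of [None] and hashrings [None], so no workload partitioning is in effect. When a polling source does require coordination, agents share the resource set via a hash ring (ceilometer delegates this to the tooz library); the minimal consistent-hash ring below is only an illustration of the idea, with a hypothetical two-agent group:

import hashlib

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def ring_owner(members, resource_id):
    # the owner is the first member clockwise from the resource's position
    return min(members, key=lambda m: (_h(m) - _h(resource_id)) % (1 << 128))

members = ["compute-0", "compute-1"]          # hypothetical agent group
uuid = "b69a0e24-1bc4-46a5-92d7-367c1efd53df"
print(f"{uuid} polled by {ring_owner(members, uuid)}")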
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:52:38.750232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
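[annotation] The allocation samples are raw byte counts: 1073741824 is exactly 1 GiB, and each instance also carries one much smaller device (485376 and 583680 bytes; the log does not say what those devices are). A small formatter makes the pattern visible:

def human_bytes(n):
    units = ("B", "KiB", "MiB", "GiB", "TiB")
    i = 0
    while n >= 1024 and i < len(units) - 1:
        n /= 1024.0
        i += 1
    return f"{n:g} {units[i]}"

for v in (1073741824, 485376, 583680):
    print(v, "=", human_bytes(v))   # 1 GiB, 474 KiB, 570 KiB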
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:52:38.751634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:52:38.753655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
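[annotation] The .delta meters above, and the .rate pollster just skipped when discovery returned no new resources for it this cycle, are derived from the cumulative counters by remembering each resource's previous reading. A hypothetical per-resource cache, not the ceilometer implementation:

import time

_prev = {}  # resource_id -> (timestamp, cumulative_value)

def delta_and_rate(resource_id, cumulative_now):
    now = time.monotonic()
    prev = _prev.get(resource_id)
    _prev[resource_id] = (now, cumulative_now)
    if prev is None:
        return None, None            # first poll: nothing to diff against
    dt = now - prev[0]
    dv = cumulative_now - prev[1]
    return dv, (dv / dt if dt > 0 else None)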
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 49.03125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.15625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
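[annotation] memory.usage is a megabyte figure, and the fractional values are consistent with a KiB-granular source (libvirt reports memory statistics in KiB) divided by 1024; the raw readings below are back-computed from the two samples:

for kib in (50208, 50336):     # hypothetical raw KiB readings
    print(kib / 1024)          # 49.03125 and 49.15625, as sampled above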
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:52:38.754814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:52:38.756003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:52:38.757341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:52:38.758539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:52:38.759824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 38600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 158680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
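[annotation] The cpu meter is cumulative guest CPU time in nanoseconds, so utilisation over a polling interval is the delta divided by wall time and vCPU count. Only the current reading 38600000000 ns comes from this log; the previous reading, interval, and vCPU count below are hypothetical:

def cpu_util_pct(cpu_ns_prev, cpu_ns_now, interval_s, vcpus):
    return 100.0 * (cpu_ns_now - cpu_ns_prev) / (interval_s * vcpus * 1e9)

print(cpu_util_pct(38_000_000_000, 38_600_000_000, 300, 1))  # 0.2 (%)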
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:52:38.761060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:52:38.762210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:52:38.763435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:52:38.764636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
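[annotation] The run of "Finished processing pollster [...]" lines closes out one polling task: the agent walks every configured pollster, running discovery, the poll itself, and the bookkeeping seen above. The loop below is a structural sketch inferred from these messages, not code copied from manager.py:

def execute_polling_task_processing(task, log):
    for pollster in task.pollsters:
        resources = task.discover(pollster)   # "Executing discovery process..."
        if not resources:
            log(f"Skip pollster {pollster.name}, no new resources found this cycle")
            continue
        log(f"Polling pollster {pollster.name} in the context of pollsters")
        for sample in pollster.get_samples(resources):
            task.publish(sample)              # the "volume: N" debug lines
        log(f"Finished processing pollster [{pollster.name}].")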
Dec 05 01:52:38 compute-0 ceph-mon[192914]: pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
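[annotation] The mon and mgr pgmap lines carry the cluster health summary in a fixed format, so a regex is enough to scrape them, e.g. for a quick check that all PGs are active+clean:

import re

line = ("pgmap v1297: 321 pgs: 321 active+clean; "
        "139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail")
m = re.search(r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>.+?); "
              r"(?P<data>.+?) data, (?P<used>.+?) used, (?P<avail>.+?) avail",
              line)
assert m and m["pgs"] == "321"
print(m["states"], "|", m["avail"])   # 321 active+clean | 60 GiB / 60 GiB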
Dec 05 01:52:40 compute-0 podman[418986]: 2025-12-05 01:52:40.704223125 +0000 UTC m=+0.116580986 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:52:40 compute-0 podman[418987]: 2025-12-05 01:52:40.705353086 +0000 UTC m=+0.108327095 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 01:52:40 compute-0 podman[418989]: 2025-12-05 01:52:40.739533543 +0000 UTC m=+0.129158728 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Dec 05 01:52:40 compute-0 podman[418988]: 2025-12-05 01:52:40.75975769 +0000 UTC m=+0.153224082 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
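The four health_status=healthy events above come from podman's periodic healthcheck timers for the multipathd, node_exporter, openstack_network_exporter, and ovn_controller containers (each config_data blob carries the healthcheck test and mount). The same checks can be triggered by hand; a sketch assuming root access on the compute host, with the container names taken from the log:

```python
import subprocess

# "podman healthcheck run NAME" executes the container's configured
# healthcheck once; exit status 0 means healthy.
for name in ["multipathd", "node_exporter",
             "openstack_network_exporter", "ovn_controller"]:
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```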
Dec 05 01:52:40 compute-0 ceph-mon[192914]: pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:41 compute-0 nova_compute[349548]: 2025-12-05 01:52:41.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:41 compute-0 nova_compute[349548]: 2025-12-05 01:52:41.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:42 compute-0 ceph-mon[192914]: pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:44 compute-0 ceph-mon[192914]: pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:52:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:52:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:52:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:52:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:52:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:52:45 compute-0 ceph-mon[192914]: pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
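The audit entries above show client.openstack (a poller at 192.168.122.10) dispatching "df" and "osd pool get-quota" as JSON mon commands. A rough CLI-level equivalent of that exchange, assuming a local ceph.conf plus a keyring for client.openstack (the entity name is from the log):

```python
import json
import subprocess

def mon_cmd(*args):
    """Run a ceph CLI command as client.openstack and decode its JSON output."""
    out = subprocess.run(
        ["ceph", "-n", "client.openstack", *args, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

df = mon_cmd("df")                                      # cluster/pool usage
quota = mon_cmd("osd", "pool", "get-quota", "volumes")  # per-pool quota
print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
```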
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:52:46 compute-0 nova_compute[349548]: 2025-12-05 01:52:46.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:46 compute-0 nova_compute[349548]: 2025-12-05 01:52:46.534 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:48 compute-0 ceph-mon[192914]: pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:50 compute-0 ceph-mon[192914]: pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:51 compute-0 nova_compute[349548]: 2025-12-05 01:52:51.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:51 compute-0 nova_compute[349548]: 2025-12-05 01:52:51.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:52 compute-0 ceph-mon[192914]: pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:54 compute-0 ceph-mon[192914]: pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.182 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.183 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
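The acquire/held/release trio above is the standard oslo.concurrency pattern wrapped around neutron's ProcessMonitor._check_child_processes. A minimal sketch of the same pattern with lockutils (assuming oslo.concurrency is installed; the DEBUG trio only shows up once logging is configured at that level):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Body elided: neutron walks its monitored child processes here and
    # respawns any that died. The decorator serializes concurrent callers
    # on the named lock and emits the acquire/held/release DEBUG lines.
    pass

check_child_processes()
```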
Dec 05 01:52:56 compute-0 nova_compute[349548]: 2025-12-05 01:52:56.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:56 compute-0 nova_compute[349548]: 2025-12-05 01:52:56.538 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:52:56 compute-0 ceph-mon[192914]: pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:52:57 compute-0 podman[419072]: 2025-12-05 01:52:57.696849732 +0000 UTC m=+0.100718931 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 01:52:57 compute-0 podman[419073]: 2025-12-05 01:52:57.710795743 +0000 UTC m=+0.113496289 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:52:58 compute-0 ceph-mon[192914]: pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:52:59 compute-0 podman[419109]: 2025-12-05 01:52:59.723822389 +0000 UTC m=+0.129793426 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:52:59 compute-0 podman[158197]: time="2025-12-05T01:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:52:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:52:59 compute-0 podman[419110]: 2025-12-05 01:52:59.774773656 +0000 UTC m=+0.176955267 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:52:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8622 "" "Go-http-client/1.1"
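The two podman[158197] lines are the libpod REST API answering podman_exporter over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above). A stdlib-only sketch of the same containers/json call, with the socket path and API path copied from the log:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over a unix socket, enough for the libpod REST API."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for c in json.loads(body):
    print(c["Names"], c["State"])
```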
Dec 05 01:53:00 compute-0 ceph-mon[192914]: pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:53:01 compute-0 nova_compute[349548]: 2025-12-05 01:53:01.476 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:01 compute-0 nova_compute[349548]: 2025-12-05 01:53:01.541 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:02 compute-0 ceph-mon[192914]: pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:03 compute-0 podman[419147]: 2025-12-05 01:53:03.678032478 +0000 UTC m=+0.102870442 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 01:53:04 compute-0 ceph-mon[192914]: pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:06 compute-0 sudo[419167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:06 compute-0 sudo[419167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:06 compute-0 sudo[419167]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:06 compute-0 sudo[419192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:53:06 compute-0 sudo[419192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:06 compute-0 sudo[419192]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:06 compute-0 nova_compute[349548]: 2025-12-05 01:53:06.480 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:06 compute-0 sudo[419217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:06 compute-0 sudo[419217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:06 compute-0 sudo[419217]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:06 compute-0 nova_compute[349548]: 2025-12-05 01:53:06.544 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:06 compute-0 sudo[419242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:53:06 compute-0 sudo[419242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:06 compute-0 ceph-mon[192914]: pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:07 compute-0 sudo[419242]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d9710f4-39d8-4eaf-a837-8402af085405 does not exist
Dec 05 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69bc13fb-8402-4f49-a2f8-8166046b8312 does not exist
Dec 05 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 87213a3a-3f39-4970-88e9-1df0ec7da356 does not exist
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:53:07 compute-0 sudo[419297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:07 compute-0 sudo[419297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:07 compute-0 sudo[419297]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:07 compute-0 sudo[419322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:53:07 compute-0 sudo[419322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:07 compute-0 sudo[419322]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:07 compute-0 sudo[419347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:07 compute-0 sudo[419347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:07 compute-0 sudo[419347]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:53:07 compute-0 sudo[419372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:53:07 compute-0 sudo[419372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.381751989 +0000 UTC m=+0.092991335 container create 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.338510458 +0000 UTC m=+0.049749804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:08 compute-0 systemd[1]: Started libpod-conmon-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope.
Dec 05 01:53:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.548790377 +0000 UTC m=+0.260029753 container init 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.558112928 +0000 UTC m=+0.269352234 container start 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.562948384 +0000 UTC m=+0.274187750 container attach 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:53:08 compute-0 nostalgic_varahamihira[419451]: 167 167
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.57138945 +0000 UTC m=+0.282628796 container died 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:53:08 compute-0 systemd[1]: libpod-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope: Deactivated successfully.
Dec 05 01:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-92baa89a7fe535aea3b47f51cfa9f48e1286ab1d41d8dbf23f28e5b5f2a49227-merged.mount: Deactivated successfully.
Dec 05 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.670028623 +0000 UTC m=+0.381267939 container remove 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 01:53:08 compute-0 ceph-mon[192914]: pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:08 compute-0 systemd[1]: libpod-conmon-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope: Deactivated successfully.
Dec 05 01:53:08 compute-0 podman[419475]: 2025-12-05 01:53:08.926065893 +0000 UTC m=+0.079227200 container create 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:08.901086253 +0000 UTC m=+0.054247560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:09 compute-0 systemd[1]: Started libpod-conmon-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope.
Dec 05 01:53:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
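The xfs warnings above are benign: 0x7fffffff is the largest 32-bit signed time_t, so inode timestamps on this (non-bigtime) xfs filesystem cap at January 2038. The arithmetic:

```python
import datetime

# 0x7fffffff seconds after the Unix epoch, i.e. the 32-bit time_t limit
# referenced by the kernel messages above.
limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc)
print(limit)  # 2038-01-19 03:14:07+00:00
```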
Dec 05 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.086066474 +0000 UTC m=+0.239227831 container init 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.100546909 +0000 UTC m=+0.253708246 container start 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.108129732 +0000 UTC m=+0.261291089 container attach 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:53:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:10 compute-0 hungry_feistel[419492]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:53:10 compute-0 hungry_feistel[419492]: --> relative data size: 1.0
Dec 05 01:53:10 compute-0 hungry_feistel[419492]: --> All data devices are unavailable
Dec 05 01:53:10 compute-0 systemd[1]: libpod-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Deactivated successfully.
Dec 05 01:53:10 compute-0 podman[419475]: 2025-12-05 01:53:10.36604903 +0000 UTC m=+1.519210357 container died 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:53:10 compute-0 systemd[1]: libpod-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Consumed 1.198s CPU time.
Dec 05 01:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170-merged.mount: Deactivated successfully.
Dec 05 01:53:10 compute-0 podman[419475]: 2025-12-05 01:53:10.438434617 +0000 UTC m=+1.591595934 container remove 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:53:10 compute-0 systemd[1]: libpod-conmon-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Deactivated successfully.
Dec 05 01:53:10 compute-0 sudo[419372]: pam_unix(sudo:session): session closed for user root
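The sudo session that just closed is cephadm driving `ceph-volume lvm batch` against three pre-created LVs inside a throwaway ceph container; the run ends at "All data devices are unavailable", which typically means the LVs are already consumed by existing OSDs. A sketch of previewing such a batch non-destructively with --report (device paths copied verbatim from the logged command; assumes ceph-volume is installed on the host):

```python
import json
import subprocess

# Device paths copied from the cephadm command line in the log.
lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

# --report makes ceph-volume print what it would do without touching anything.
out = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
     "--report", "--format", "json"],
    capture_output=True, text=True, check=True,
)
print(json.dumps(json.loads(out.stdout), indent=2))
```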
Dec 05 01:53:10 compute-0 sudo[419532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:10 compute-0 sudo[419532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:10 compute-0 sudo[419532]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:10 compute-0 ceph-mon[192914]: pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:10 compute-0 sudo[419557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:53:10 compute-0 sudo[419557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:10 compute-0 sudo[419557]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:10 compute-0 podman[419581]: 2025-12-05 01:53:10.916917018 +0000 UTC m=+0.098135310 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 01:53:10 compute-0 sudo[419610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:10 compute-0 sudo[419610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:10 compute-0 sudo[419610]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:10 compute-0 podman[419582]: 2025-12-05 01:53:10.950150228 +0000 UTC m=+0.134620341 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:53:10 compute-0 podman[419583]: 2025-12-05 01:53:10.965123138 +0000 UTC m=+0.144939291 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 05 01:53:10 compute-0 podman[419584]: 2025-12-05 01:53:10.969821549 +0000 UTC m=+0.132015888 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public)
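The three health_status events above are podman's healthcheck timers firing: each runs the test named in config_data (the /openstack/healthcheck wrapper) inside the container and records the verdict, so healthy with health_failing_streak=0 means the probe has not failed recently. A minimal sketch of reading that recorded state back, assuming podman is on PATH and the caller may inspect these containers (older podman stores the result under State.Healthcheck, newer under State.Health):

    #!/usr/bin/env python3
    # Read back the health state podman recorded for the containers above.
    # Container names are taken from the log lines.
    import json
    import subprocess

    CONTAINERS = ["node_exporter", "ovn_controller", "openstack_network_exporter"]

    for name in CONTAINERS:
        out = subprocess.run(
            ["podman", "inspect", "--format", "json", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Field name differs across podman versions; handle both spellings.
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(f"{name}: {health.get('Status', 'no healthcheck')} "
              f"(failing streak: {health.get('FailingStreak', 0)})")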
Dec 05 01:53:11 compute-0 sudo[419687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:53:11 compute-0 sudo[419687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:11 compute-0 nova_compute[349548]: 2025-12-05 01:53:11.483 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:11 compute-0 nova_compute[349548]: 2025-12-05 01:53:11.547 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.618667121 +0000 UTC m=+0.078874090 container create 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.580846582 +0000 UTC m=+0.041053601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:11 compute-0 systemd[1]: Started libpod-conmon-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope.
Dec 05 01:53:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.771577993 +0000 UTC m=+0.231785002 container init 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.784830014 +0000 UTC m=+0.245036983 container start 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.791673376 +0000 UTC m=+0.251880385 container attach 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:53:11 compute-0 naughty_chaplygin[419769]: 167 167
Dec 05 01:53:11 compute-0 systemd[1]: libpod-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope: Deactivated successfully.
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.796481661 +0000 UTC m=+0.256688590 container died 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 01:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98174671b2e0c20cbed8c6ed796e5bda7dabed3dff77ba4e7d3cd85185b6c81-merged.mount: Deactivated successfully.
Dec 05 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.866607134 +0000 UTC m=+0.326814073 container remove 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:53:11 compute-0 systemd[1]: libpod-conmon-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope: Deactivated successfully.
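The naughty_chaplygin lines above are one complete transient-container lifecycle: cephadm creates a one-shot container from the pinned ceph image, attaches, collects a single output line ("167 167", the uid/gid pair the containerized Ceph daemons run as), and removes it, all inside a quarter of a second. A sketch of the same pattern; the probe command is an assumption for illustration (the log records only its output), while the image digest is copied from the lines above:

    #!/usr/bin/env python3
    # Run a one-shot probe in the pinned ceph image and discard the
    # container afterwards; --rm produces the create/start/died/remove
    # sequence journald records above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical probe: report the owner of /var/lib/ceph inside the image.
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # expected: "167 167"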
Dec 05 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.120145645 +0000 UTC m=+0.085485615 container create b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.085789693 +0000 UTC m=+0.051129713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:12 compute-0 systemd[1]: Started libpod-conmon-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope.
Dec 05 01:53:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.277811851 +0000 UTC m=+0.243151821 container init b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.296672739 +0000 UTC m=+0.262012679 container start b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.30277212 +0000 UTC m=+0.268112070 container attach b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 01:53:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:12 compute-0 ceph-mon[192914]: pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:13 compute-0 wonderful_napier[419807]: {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     "0": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "devices": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "/dev/loop3"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             ],
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_name": "ceph_lv0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_size": "21470642176",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "name": "ceph_lv0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "tags": {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_name": "ceph",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.crush_device_class": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.encrypted": "0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_id": "0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.vdo": "0"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             },
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "vg_name": "ceph_vg0"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         }
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     ],
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     "1": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "devices": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "/dev/loop4"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             ],
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_name": "ceph_lv1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_size": "21470642176",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "name": "ceph_lv1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "tags": {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_name": "ceph",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.crush_device_class": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.encrypted": "0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_id": "1",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.vdo": "0"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             },
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "vg_name": "ceph_vg1"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         }
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     ],
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     "2": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "devices": [
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "/dev/loop5"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             ],
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_name": "ceph_lv2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_size": "21470642176",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "name": "ceph_lv2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "tags": {
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.cluster_name": "ceph",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.crush_device_class": "",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.encrypted": "0",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osd_id": "2",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:                 "ceph.vdo": "0"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             },
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "type": "block",
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:             "vg_name": "ceph_vg2"
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:         }
Dec 05 01:53:13 compute-0 wonderful_napier[419807]:     ]
Dec 05 01:53:13 compute-0 wonderful_napier[419807]: }
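The JSON block above is the complete `ceph-volume lvm list --format json` report relayed through the wonderful_napier container: top-level keys are OSD ids, each holding the logical volumes that back them, with the cluster fsid, OSD fsid, and device class carried as LVM tags. A small sketch that reduces the report to one line per OSD, assuming the payload is piped in on stdin:

    #!/usr/bin/env python3
    # Summarize a `ceph-volume lvm list --format json` report: one line per
    # OSD with its LV path, backing device, and OSD fsid. Field names match
    # the log output above.
    import json
    import sys

    report = json.load(sys.stdin)
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

For the report above this prints three lines, e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186).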
Dec 05 01:53:13 compute-0 systemd[1]: libpod-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope: Deactivated successfully.
Dec 05 01:53:13 compute-0 podman[419791]: 2025-12-05 01:53:13.173301969 +0000 UTC m=+1.138641899 container died b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04-merged.mount: Deactivated successfully.
Dec 05 01:53:13 compute-0 podman[419791]: 2025-12-05 01:53:13.255707686 +0000 UTC m=+1.221047626 container remove b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:53:13 compute-0 systemd[1]: libpod-conmon-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope: Deactivated successfully.
Dec 05 01:53:13 compute-0 sudo[419687]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:13 compute-0 sudo[419827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:13 compute-0 sudo[419827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:13 compute-0 sudo[419827]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:13 compute-0 sudo[419852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:53:13 compute-0 sudo[419852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:13 compute-0 sudo[419852]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:13 compute-0 sudo[419877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:13 compute-0 sudo[419877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:13 compute-0 sudo[419877]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:13 compute-0 sudo[419902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:53:13 compute-0 sudo[419902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.321231407 +0000 UTC m=+0.092676746 container create d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.290699312 +0000 UTC m=+0.062144681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:14 compute-0 systemd[1]: Started libpod-conmon-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope.
Dec 05 01:53:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.466335901 +0000 UTC m=+0.237781220 container init d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.476737422 +0000 UTC m=+0.248182741 container start d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.482403811 +0000 UTC m=+0.253849160 container attach d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 01:53:14 compute-0 peaceful_williams[419983]: 167 167
Dec 05 01:53:14 compute-0 systemd[1]: libpod-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope: Deactivated successfully.
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.484347596 +0000 UTC m=+0.255792945 container died d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1411127e7999c03063b85e2375f6cc6331f3a5cb95f7b06d2d79b5aef601a3d5-merged.mount: Deactivated successfully.
Dec 05 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.577264808 +0000 UTC m=+0.348710157 container remove d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:53:14 compute-0 systemd[1]: libpod-conmon-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope: Deactivated successfully.
Dec 05 01:53:14 compute-0 ceph-mon[192914]: pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.856433086 +0000 UTC m=+0.075678560 container create 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:53:14 compute-0 systemd[1]: Started libpod-conmon-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope.
Dec 05 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.828243877 +0000 UTC m=+0.047489431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:53:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.97938817 +0000 UTC m=+0.198633744 container init 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:53:15 compute-0 podman[420006]: 2025-12-05 01:53:15.001371445 +0000 UTC m=+0.220616949 container start 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:53:15 compute-0 podman[420006]: 2025-12-05 01:53:15.009825962 +0000 UTC m=+0.229071466 container attach 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:53:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:16 compute-0 focused_hertz[420022]: {
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_id": 0,
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "type": "bluestore"
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     },
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_id": 1,
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "type": "bluestore"
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     },
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_id": 2,
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:53:16 compute-0 focused_hertz[420022]:         "type": "bluestore"
Dec 05 01:53:16 compute-0 focused_hertz[420022]:     }
Dec 05 01:53:16 compute-0 focused_hertz[420022]: }
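Where `lvm list` keys its report by OSD id, the `raw list` report above keys by osd_uuid and gives the device-mapper path plus the store type (bluestore for all three). The two reports should agree on the OSD fsids; a quick cross-check, under the assumption that each payload has been saved to a local file (in the log both arrive on the containers' stdout):

    #!/usr/bin/env python3
    # Cross-check the two ceph-volume reports: every osd_uuid in `raw list`
    # should map to the matching osd_id in `lvm list`. File names are
    # placeholders for wherever the payloads were captured.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    fsid_to_id = {lv["tags"]["ceph.osd_fsid"]: osd_id
                  for osd_id, lvs in lvm.items() for lv in lvs}
    for osd_uuid, osd in raw.items():
        assert fsid_to_id.get(osd_uuid) == str(osd["osd_id"]), osd_uuid
        print(f"osd.{osd['osd_id']} ({osd['type']}) -> {osd['device']}")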
Dec 05 01:53:16 compute-0 systemd[1]: libpod-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Deactivated successfully.
Dec 05 01:53:16 compute-0 podman[420006]: 2025-12-05 01:53:16.220964911 +0000 UTC m=+1.440210395 container died 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:53:16 compute-0 systemd[1]: libpod-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Consumed 1.215s CPU time.
Dec 05 01:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187-merged.mount: Deactivated successfully.
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:53:16
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log']
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
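The balancer lines above are one optimizer tick: mode upmap with a 5% max-misplaced budget, scanning the eleven listed pools and preparing 0 of an allowed 10 changes because the PGs are already evenly placed. The same module state can be read out-of-band; a sketch assuming a ceph CLI with an admin keyring on the host:

    #!/usr/bin/env python3
    # Query the mgr balancer module's status via the ceph CLI.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(f"active={status.get('active')} mode={status.get('mode')} "
          f"last_result={status.get('optimize_result')!r}")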
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:16 compute-0 podman[420006]: 2025-12-05 01:53:16.304404968 +0000 UTC m=+1.523650452 container remove 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:53:16 compute-0 systemd[1]: libpod-conmon-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Deactivated successfully.
Dec 05 01:53:16 compute-0 sudo[419902]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:53:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:53:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:53:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
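The two handle_command lines above are the cephadm mgr module persisting the freshly gathered device inventory into the monitors' config-key store under mgr/cephadm/host.compute-0*. Those keys can be read back directly; a sketch assuming admin CLI access, with the key name copied from the mon_command in the log:

    #!/usr/bin/env python3
    # Fetch the device inventory cephadm just stored for this host.
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)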
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8bc703b0-8718-4a99-8d67-e0f21d9a2428 does not exist
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8347f290-45c8-49f4-8f1b-5d46eced7eab does not exist
Dec 05 01:53:16 compute-0 sudo[420066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:53:16 compute-0 sudo[420066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:16 compute-0 sudo[420066]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:16 compute-0 nova_compute[349548]: 2025-12-05 01:53:16.484 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:16 compute-0 nova_compute[349548]: 2025-12-05 01:53:16.549 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:16 compute-0 sudo[420091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:53:16 compute-0 sudo[420091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:53:16 compute-0 sudo[420091]: pam_unix(sudo:session): session closed for user root
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:53:16 compute-0 ceph-mon[192914]: pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:53:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:53:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:19 compute-0 ceph-mon[192914]: pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:20 compute-0 ceph-mon[192914]: pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:21 compute-0 nova_compute[349548]: 2025-12-05 01:53:21.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:21 compute-0 nova_compute[349548]: 2025-12-05 01:53:21.552 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:22 compute-0 ceph-mon[192914]: pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.970 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.971 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.972 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.972 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:53:23 compute-0 nova_compute[349548]: 2025-12-05 01:53:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:23 compute-0 nova_compute[349548]: 2025-12-05 01:53:23.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
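The run_periodic_tasks lines above and below are oslo.service dispatching nova-compute's periodic tasks in sequence: the framework logs each "Running periodic task ComputeManager._*" entry just before the method runs, and cheap tasks (like _reclaim_queued_deletes with reclaim_instance_interval <= 0) return immediately. A minimal sketch of the pattern, with a stand-in task body rather than nova's:

    #!/usr/bin/env python3
    # Sketch of the oslo.service periodic-task pattern seen in the nova
    # log lines. Requires oslo.config and oslo.service.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _check_instance_build_time(self, context):
            # The framework logs "Running periodic task ..." before this runs.
            pass

    Manager().run_periodic_tasks(context=None)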
Dec 05 01:53:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.071 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.075 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.076 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.315 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.316 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.316 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.317 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:53:24 compute-0 ceph-mon[192914]: pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.493 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:53:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.515 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.516 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.517 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.517 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.555 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.556 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.557 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.557 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.558 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:53:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609764717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.077 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.208 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.210 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.211 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.217 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.217 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.218 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:26 compute-0 ceph-mon[192914]: pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2609764717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.797 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3745MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.799 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.799 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.898 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.917 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.918 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.936 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.965 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.037 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.352 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.352 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.355 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.357 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:53:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390564852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.586 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.595 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.614 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.616 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.617 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1390564852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:28 compute-0 ceph-mon[192914]: pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:28 compute-0 podman[420162]: 2025-12-05 01:53:28.709249152 +0000 UTC m=+0.110828255 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:53:28 compute-0 podman[420161]: 2025-12-05 01:53:28.729862539 +0000 UTC m=+0.131138883 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:53:29 compute-0 nova_compute[349548]: 2025-12-05 01:53:29.166 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:29 compute-0 nova_compute[349548]: 2025-12-05 01:53:29.186 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:53:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:29 compute-0 podman[158197]: time="2025-12-05T01:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:53:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:53:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8620 "" "Go-http-client/1.1"
Dec 05 01:53:30 compute-0 ceph-mon[192914]: pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:30 compute-0 podman[420202]: 2025-12-05 01:53:30.726431275 +0000 UTC m=+0.124748935 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:53:30 compute-0 podman[420201]: 2025-12-05 01:53:30.738454562 +0000 UTC m=+0.144475068 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.998 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.999 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.016 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.099 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.100 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.112 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.112 349552 INFO nova.compute.claims [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Claim successful on node compute-0.ctlplane.example.com
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.235 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:32 compute-0 ceph-mon[192914]: pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:53:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140473233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.779 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.791 349552 DEBUG nova.compute.provider_tree [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.815 349552 DEBUG nova.scheduler.client.report [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.849 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.850 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.918 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.918 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.940 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.976 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.067 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.070 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.071 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating image(s)
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.131 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.207 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.269 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.282 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.342 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.344 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.345 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.345 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.387 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.397 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/140473233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.861 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.010 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.232 349552 DEBUG nova.objects.instance [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.300 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.352 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.363 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.430 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.431 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.432 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.432 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.476 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.488 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:34 compute-0 podman[420497]: 2025-12-05 01:53:34.688154154 +0000 UTC m=+0.110215727 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4)
Dec 05 01:53:34 compute-0 ceph-mon[192914]: pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.017 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.272 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.273 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Ensure instance console log exists: /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.275 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.276 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.277 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 143 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 270 KiB/s wr, 22 op/s
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.578 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Successfully updated port: 4341bf52-6bd5-42ee-b25d-f3d9844af854 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.601 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.602 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.603 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 01:53:36 compute-0 ceph-mon[192914]: pgmap v1326: 321 pgs: 321 active+clean; 143 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 270 KiB/s wr, 22 op/s
Dec 05 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.756 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 01:53:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 05 01:53:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.796 349552 DEBUG nova.compute.manager [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.797 349552 DEBUG nova.compute.manager [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing instance network info cache due to event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.797 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:53:38 compute-0 ceph-mon[192914]: pgmap v1327: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.942 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.966 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.967 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance network_info: |[{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.969 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.970 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.975 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start _get_guest_xml network_info=[{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.993 349552 WARNING nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.009 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.010 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.016 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.017 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.018 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.018 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.019 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.019 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.021 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.021 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.022 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.022 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.023 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.026 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:53:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067829114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.534 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.537 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3067829114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.746049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619746088, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3364282, "memory_usage": 3421440, "flush_reason": "Manual Compaction"}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619768089, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3309185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25532, "largest_seqno": 27570, "table_properties": {"data_size": 3299989, "index_size": 5818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18439, "raw_average_key_size": 20, "raw_value_size": 3281649, "raw_average_value_size": 3563, "num_data_blocks": 259, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899392, "oldest_key_time": 1764899392, "file_creation_time": 1764899619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22103 microseconds, and 6845 cpu microseconds.
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.768150) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3309185 bytes OK
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.768171) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770271) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770286) EVENT_LOG_v1 {"time_micros": 1764899619770281, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3355782, prev total WAL file size 3355782, number of live WAL files 2.
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.771512) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3231KB)], [59(7186KB)]
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619771589, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10668050, "oldest_snapshot_seqno": -1}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5021 keys, 8913574 bytes, temperature: kUnknown
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619814358, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8913574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8878460, "index_size": 21436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124527, "raw_average_key_size": 24, "raw_value_size": 8786097, "raw_average_value_size": 1749, "num_data_blocks": 889, "num_entries": 5021, "num_filter_entries": 5021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.814531) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8913574 bytes
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.816421) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.1 rd, 208.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5535, records dropped: 514 output_compression: NoCompression
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.816435) EVENT_LOG_v1 {"time_micros": 1764899619816428, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42820, "compaction_time_cpu_micros": 18238, "output_level": 6, "num_output_files": 1, "total_output_size": 8913574, "num_input_records": 5535, "num_output_records": 5021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619817005, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619818283, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.771359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:53:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:53:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393097682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.024 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.074 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.085 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:53:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126179487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.625 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.627 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:53:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncw==
Dec 05 01:53:40 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.628 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.629 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.631 349552 DEBUG nova.objects.instance [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.650 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] End _get_guest_xml xml=<domain type="kvm">
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <uuid>7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</uuid>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <name>instance-00000003</name>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <memory>524288</memory>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <metadata>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:name>vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7</nova:name>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 01:53:38</nova:creationTime>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:flavor name="m1.small">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:memory>512</nova:memory>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:ephemeral>1</nova:ephemeral>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <nova:port uuid="4341bf52-6bd5-42ee-b25d-f3d9844af854">
Dec 05 01:53:40 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="192.168.0.25" ipVersion="4"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </metadata>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <system>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="serial">7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="uuid">7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </system>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <os>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </os>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <features>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <apic/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </features>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </clock>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </source>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </source>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <target dev="vdb" bus="virtio"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </source>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:53:40 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:68:a7:22"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <target dev="tap4341bf52-6b"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/console.log" append="off"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </serial>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <video>
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </video>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 01:53:40 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 01:53:40 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 01:53:40 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:53:40 compute-0 nova_compute[349548]: </domain>
Dec 05 01:53:40 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Preparing to wait for external event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.665 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.665 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:53:33Z,user_data='Content-Type: multipart/mixed; boundary="===============0627492865786993972=="
MIME-Version: 1.0

--===============0627492865786993972==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============0627492865786993972==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash
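# NOTE: cloud-init executes boothook parts early and on every boot, so
# everything below must be safe to repeat; the trailing "exit 0" keeps a
# failure here (e.g. a missing cfn-create-aws-symlinks binary) from being
# reported as a boothook error.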

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly, so the commands to
# do this are injected through nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# In case heat-cfntools was installed from a package but the symlinks
# are not yet present in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============0627492865786993972==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys
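# cloud-init part-handler protocol: list_types() advertises the MIME
# content types this handler owns, and handle_part() is invoked once with
# ctype "__begin__" before any parts, once per matching part, and once
# with "__end__" after the last one.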


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', 0o700)
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============0627492865786993972==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============0627492865786993972==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, 0o600)
    LOG.addHandler(fh)


def call(args):
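    # Run the cfn-userdata script and mirror its output into the
    # provisioning log.  p.communicate() below returns a (stdout, stderr)
    # tuple of byte strings, so both streams pass through LogStream;
    # ENOEXEC (an empty or non-executable userdata file) is deliberately
    # mapped to EX_OK rather than treated as a provisioning failure.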

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, 0o700)

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so its timestamp records when provisioning finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============0627492865786993972==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============0627492865786993972==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
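# Boto configuration pointing the cfn tools at Heat's
# CloudFormation-compatible API (the "heat" region below) instead of AWS.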
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============0627492865786993972==--
',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
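The user_data blob embedded in that instance record is a standard cloud-init multi-part MIME archive. A minimal sketch of assembling one with nothing but the Python standard library (part contents and filenames here are placeholders, not the ones above):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Subtypes such as "cloud-config" become Content-Type: text/cloud-config,
# which is the header cloud-init dispatches each part on.
archive = MIMEMultipart()
for payload, subtype, filename in [
    ("output: {all: '| tee -a /var/log/cloud-init-output.log'}",
     'cloud-config', 'cloud-config'),
    ('#!/usr/bin/bash\nexit 0\n', 'cloud-boothook', 'boothook.sh'),
]:
    part = MIMEText(payload, _subtype=subtype)
    part.add_header('Content-Disposition', 'attachment', filename=filename)
    archive.attach(part)
print(archive.as_string())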
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.666 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.666 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.667 349552 DEBUG os_vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.670 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.670 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.675 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.675 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4341bf52-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.676 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4341bf52-6b, col_values=(('external_ids', {'iface-id': '4341bf52-6bd5-42ee-b25d-f3d9844af854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:a7:22', 'vm-uuid': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:40 compute-0 NetworkManager[49092]: <info>  [1764899620.6801] manager: (tap4341bf52-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.696 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.698 349552 INFO os_vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b')
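The AddPortCommand/DbSetCommand transaction above is the programmatic form of a single ovs-vsctl invocation; roughly the following sketch, with the device name, MAC and iface-id taken from the log (assumes ovs-vsctl on PATH and root):

import subprocess

# Create the port on br-int if it does not exist yet, then tag the
# interface with the Neutron port id so ovn-controller can claim it.
subprocess.run([
    'ovs-vsctl',
    '--may-exist', 'add-port', 'br-int', 'tap4341bf52-6b',
    '--', 'set', 'Interface', 'tap4341bf52-6b',
    'external_ids:iface-id=4341bf52-6bd5-42ee-b25d-f3d9844af854',
    'external_ids:attached-mac=fa:16:3e:68:a7:22',
], check=True)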
Dec 05 01:53:40 compute-0 ceph-mon[192914]: pgmap v1328: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 05 01:53:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/393097682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4126179487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.819 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.821 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.822 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.823 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:68:a7:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.824 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Using config drive
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.866 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:40 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:53:40.627 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.967 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated VIF entry in instance network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.968 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.986 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.205 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating config drive at /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.219 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tf4ki8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.353 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tf4ki8" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.416 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.429 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.497 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:53:41 compute-0 podman[420717]: 2025-12-05 01:53:41.688628767 +0000 UTC m=+0.102657116 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:53:41 compute-0 podman[420716]: 2025-12-05 01:53:41.691197939 +0000 UTC m=+0.106057981 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:53:41 compute-0 podman[420719]: 2025-12-05 01:53:41.704543783 +0000 UTC m=+0.114535738 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.742 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.743 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deleting local config drive /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config because it was imported into RBD.
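For reference, the config-drive flow Nova just completed (mkisofs, rbd import, local cleanup) reduces to roughly the following sketch; the metadata staging directory is illustrative, the other paths and flags come from the log:

import os
import subprocess

iso = '/var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config'
# Build an ISO9660 image labelled config-2 from the staged metadata tree.
subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                '-allow-multidot', '-l', '-quiet', '-J', '-r',
                '-V', 'config-2', '/tmp/metadata-staging'], check=True)
# Push it into the Ceph vms pool, then drop the local copy (it now lives in RBD).
subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config',
                '--image-format=2', '--id', 'openstack',
                '--conf', '/etc/ceph/ceph.conf'], check=True)
os.remove(iso)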
Dec 05 01:53:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 01:53:41 compute-0 podman[420718]: 2025-12-05 01:53:41.787688121 +0000 UTC m=+0.188494379 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 05 01:53:41 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.8716] manager: (tap4341bf52-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 05 01:53:41 compute-0 kernel: tap4341bf52-6b: entered promiscuous mode
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00040|binding|INFO|Claiming lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 for this chassis.
Dec 05 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00041|binding|INFO|4341bf52-6bd5-42ee-b25d-f3d9844af854: Claiming fa:16:3e:68:a7:22 192.168.0.25
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.900 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a7:22 192.168.0.25'], port_security=['fa:16:3e:68:a7:22 192.168.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:cidrs': '192.168.0.25/24', 'neutron:device_id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=4341bf52-6bd5-42ee-b25d-f3d9844af854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.902 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 4341bf52-6bd5-42ee-b25d-f3d9844af854 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.905 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00042|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 ovn-installed in OVS
Dec 05 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00043|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 up in Southbound
Dec 05 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.915 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:41 compute-0 systemd-udevd[420841]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:53:41 compute-0 systemd-machined[138700]: New machine qemu-3-instance-00000003.
Dec 05 01:53:41 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec 05 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.9364] device (tap4341bf52-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.9382] device (tap4341bf52-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.937 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb4b963-6e7d-4423-a8b8-e6f4a1124e6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.971 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[5fade635-7f87-48f0-8d5f-3bb51018c657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.975 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[89f28969-3256-4211-b014-c29c5765c4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.013 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[db4f7160-1b61-4d4b-b365-42678c2f8f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.039 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4e330214-fb49-4b9a-9d3a-6c60620500b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 15952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420853, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.061 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[58a717ca-5dd6-4361-a01d-de19dc7915d5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420855, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420855, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
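Those RTM_NEWADDR replies show the metadata agent assigning 192.168.0.2/24 plus the link-local metadata address inside the ovnmeta- namespace; done by hand, the same result looks roughly like this (namespace and device names from the log, requires root):

import subprocess

ns = 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183'
for addr in ('192.168.0.2/24', '169.254.169.254/32'):
    # What the privsep-mediated pyroute2 calls amount to: address the
    # namespace-side interface that will serve 169.254.169.254.
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', 'addr', 'add',
                    addr, 'dev', 'tap49f7d2f1-f1'], check=True)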
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.063 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.068 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.069 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.070 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.071 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.106 349552 DEBUG nova.compute.manager [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.107 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.108 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.108 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.109 349552 DEBUG nova.compute.manager [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Processing event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
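The Acquiring/acquired/released triple around _pop_event is oslo.concurrency's named-lock pattern: every waiter and every incoming external event for one instance serializes on the "<uuid>-events" lock before touching the event table. A minimal sketch of the pattern; the _events dict is a hypothetical stand-in for nova's InstanceEvents state:

    # Minimal sketch of the named-lock pattern in the log lines above.
    from oslo_concurrency import lockutils

    _events = {}  # hypothetical: {instance_uuid: {event_name: waiter}}

    def pop_instance_event(instance_uuid, event_name):
        # Serialize on "<uuid>-events", exactly the lock name in the log.
        with lockutils.lock('%s-events' % instance_uuid):
            # None corresponds to "No waiting events found" later on.
            return _events.get(instance_uuid, {}).pop(event_name, None)
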
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.626 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.623513, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.627 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Started (Lifecycle Event)
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.630 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.636 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.642 349552 INFO nova.virt.libvirt.driver [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance spawned successfully.
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.643 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.647 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.652 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.672 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.673 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.673 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.674 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.674 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.675 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.679 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.679 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.625402, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.681 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Paused (Lifecycle Event)
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.710 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.717 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.6345963, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.718 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Resumed (Lifecycle Event)
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.741 349552 INFO nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 9.67 seconds to spawn the instance on the hypervisor.
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.741 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:53:42 compute-0 ceph-mon[192914]: pgmap v1329: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.798 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.808 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
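In both "Synchronizing instance power state" messages the numbers are nova's power_state constants: the database still holds 0 (NOSTATE, nothing recorded while the instance builds) while libvirt already reports 1 (RUNNING), so the sync only needs to persist the hypervisor's view once the spawn finishes. For reference, the constants (matching nova.compute.power_state):

    # nova.compute.power_state constants behind the sync messages above:
    # the DB holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING).
    NOSTATE = 0x00    # no state recorded yet (instance still building)
    RUNNING = 0x01    # domain is running
    PAUSED = 0x03     # domain is paused
    SHUTDOWN = 0x04   # domain is shut down
    CRASHED = 0x06    # domain crashed
    SUSPENDED = 0x07  # domain is suspended
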
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.824 349552 INFO nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 10.76 seconds to build instance.
Dec 05 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.852 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:53:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 01:53:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.225 349552 DEBUG nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.227 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.228 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.229 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.229 349552 DEBUG nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.230 349552 WARNING nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received unexpected event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with vm_state active and task_state None.
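The WARNING is benign: Neutron emitted network-vif-plugged a second time, after the first event had already unblocked the spawn ("Instance event wait completed in 0 seconds" above) and the instance had reached vm_state active. With no registered waiter, pop_instance_event returns nothing and the event is logged as unexpected and dropped. A simplified, hypothetical sketch of that decision:

    # Simplified, hypothetical sketch: a repeated network-vif-plugged
    # arrives once the instance is active, no waiter is registered, so the
    # event is logged as unexpected and dropped.
    def dispatch_external_event(vm_state, task_state, waiter):
        if waiter is not None:
            return 'deliver'                    # someone is blocked on it
        if vm_state == 'active' and task_state is None:
            return 'warn: unexpected event'     # the WARNING above
        return 'ignore'

    print(dispatch_external_event('active', None, None))
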
Dec 05 01:53:44 compute-0 ceph-mon[192914]: pgmap v1330: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:53:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:53:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:53:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:53:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:53:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 05 01:53:45 compute-0 nova_compute[349548]: 2025-12-05 01:53:45.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:53:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
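The audited df and "osd pool get-quota" commands come from the OpenStack Ceph client (client.openstack) checking capacity and quota on the volumes pool, the usual periodic poll from the Cinder RBD driver. The same mon commands can be issued from Python with the rados binding; a sketch assuming a readable /etc/ceph/ceph.conf and a keyring for client.openstack:

    # Sketch of the two audited mon commands, via the rados binding.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        for cmd in ({'prefix': 'df', 'format': 'json'},
                    {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                     'format': 'json'}):
            ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], '->', ret, out[:80])
    finally:
        cluster.shutdown()
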
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:53:46 compute-0 nova_compute[349548]: 2025-12-05 01:53:46.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:46 compute-0 ceph-mon[192914]: pgmap v1331: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 05 01:53:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 73 op/s
Dec 05 01:53:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:48 compute-0 ceph-mon[192914]: pgmap v1332: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 73 op/s
Dec 05 01:53:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec 05 01:53:50 compute-0 nova_compute[349548]: 2025-12-05 01:53:50.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:50 compute-0 ceph-mon[192914]: pgmap v1333: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec 05 01:53:51 compute-0 nova_compute[349548]: 2025-12-05 01:53:51.502 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 61 op/s
Dec 05 01:53:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:52 compute-0 ceph-mon[192914]: pgmap v1334: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 61 op/s
Dec 05 01:53:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:53:54 compute-0 ceph-mon[192914]: pgmap v1335: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec 05 01:53:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec 05 01:53:55 compute-0 nova_compute[349548]: 2025-12-05 01:53:55.696 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.183 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.184 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.185 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:53:56 compute-0 nova_compute[349548]: 2025-12-05 01:53:56.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:53:56 compute-0 ceph-mon[192914]: pgmap v1336: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec 05 01:53:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 43 op/s
Dec 05 01:53:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:53:58 compute-0 ceph-mon[192914]: pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 43 op/s
Dec 05 01:53:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec 05 01:53:59 compute-0 podman[420936]: 2025-12-05 01:53:59.703468912 +0000 UTC m=+0.107348436 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 01:53:59 compute-0 podman[420937]: 2025-12-05 01:53:59.728116513 +0000 UTC m=+0.129582049 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:53:59 compute-0 podman[158197]: time="2025-12-05T01:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:53:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:53:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8621 "" "Go-http-client/1.1"
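The two GETs against /v4.9.3/libpod/... are the prometheus-podman-exporter container polling the libpod REST API over /run/podman/podman.sock (the socket its config_data mounts). The same endpoint is reachable from Python via podman-py; a sketch, assuming the podman package is installed and the socket is readable:

    # Sketch of the first GET above (libpod containers/json) via podman-py,
    # against the same socket the exporter's config mounts.
    from podman import PodmanClient

    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)
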
Dec 05 01:54:00 compute-0 nova_compute[349548]: 2025-12-05 01:54:00.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:00 compute-0 ceph-mon[192914]: pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec 05 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
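These exporter errors recur on every scrape: openstack_network_exporter probes for ovn-northd and a local ovsdb-server control socket, but a compute node runs only ovn-controller, so the <daemon>.<pid>.ctl files it globs for never appear (and the dpif-netdev calls fail similarly because no userspace datapath exists here). A sketch of that probe; the run-directory paths are the conventional ones mounted into the exporter container, an assumption:

    # Sketch of the exporter's probe: appctl-style tools glob for
    # <daemon>.<pid>.ctl under the daemon's run directory. ovn-northd never
    # runs on a compute node, so its glob stays empty and the ERROR repeats.
    import glob

    RUNDIRS = {                       # conventional paths; an assumption
        'ovn-northd': '/run/ovn',
        'ovn-controller': '/run/ovn',
        'ovsdb-server': '/run/openvswitch',
    }

    for daemon, rundir in RUNDIRS.items():
        socks = glob.glob('%s/%s.*.ctl' % (rundir, daemon))
        print(daemon, '->', socks or 'no control socket files found')
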
Dec 05 01:54:01 compute-0 nova_compute[349548]: 2025-12-05 01:54:01.508 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec 05 01:54:01 compute-0 podman[420976]: 2025-12-05 01:54:01.747482947 +0000 UTC m=+0.142673027 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 01:54:01 compute-0 podman[420975]: 2025-12-05 01:54:01.751486179 +0000 UTC m=+0.155122855 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Dec 05 01:54:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:02 compute-0 ceph-mon[192914]: pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec 05 01:54:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:04 compute-0 ceph-mon[192914]: pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:05 compute-0 nova_compute[349548]: 2025-12-05 01:54:05.708 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:05 compute-0 podman[421011]: 2025-12-05 01:54:05.730525774 +0000 UTC m=+0.129653682 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, vcs-type=git, architecture=x86_64, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec 05 01:54:06 compute-0 nova_compute[349548]: 2025-12-05 01:54:06.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:06 compute-0 ceph-mon[192914]: pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:08 compute-0 ceph-mon[192914]: pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:10 compute-0 nova_compute[349548]: 2025-12-05 01:54:10.713 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:10 compute-0 ceph-mon[192914]: pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:11 compute-0 nova_compute[349548]: 2025-12-05 01:54:11.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:11 compute-0 ovn_controller[89286]: 2025-12-05T01:54:11Z|00044|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 05 01:54:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:12 compute-0 podman[421030]: 2025-12-05 01:54:12.713363763 +0000 UTC m=+0.108759447 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:54:12 compute-0 podman[421029]: 2025-12-05 01:54:12.743487766 +0000 UTC m=+0.144400005 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 05 01:54:12 compute-0 podman[421032]: 2025-12-05 01:54:12.748541668 +0000 UTC m=+0.128789028 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git)
Dec 05 01:54:12 compute-0 podman[421031]: 2025-12-05 01:54:12.778650481 +0000 UTC m=+0.169548779 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 01:54:13 compute-0 ceph-mon[192914]: pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:15 compute-0 ceph-mon[192914]: pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:15 compute-0 nova_compute[349548]: 2025-12-05 01:54:15.721 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:54:16
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.meta']
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:16 compute-0 nova_compute[349548]: 2025-12-05 01:54:16.516 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:16 compute-0 sudo[421113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:16 compute-0 sudo[421113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:16 compute-0 sudo[421113]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:54:16 compute-0 sudo[421138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:54:16 compute-0 sudo[421138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:16 compute-0 sudo[421138]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:16 compute-0 sudo[421163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:17 compute-0 sudo[421163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:17 compute-0 sudo[421163]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:17 compute-0 ceph-mon[192914]: pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:17 compute-0 sudo[421188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:54:17 compute-0 sudo[421188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:17 compute-0 sudo[421188]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a7177625-f8cf-4768-860a-b948f5d1dcc3 does not exist
Dec 05 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e66943c4-f329-4521-9e57-dd53e87fdedb does not exist
Dec 05 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aad8d181-da1d-4616-8ae7-1301bccb0c0a does not exist
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:54:17 compute-0 ovn_controller[89286]: 2025-12-05T01:54:17Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:a7:22 192.168.0.25
Dec 05 01:54:18 compute-0 ovn_controller[89286]: 2025-12-05T01:54:18Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:a7:22 192.168.0.25
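The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: OVN answers DHCP natively from DHCP_Options rows in the northbound database instead of running a dnsmasq per network. An illustrative sketch of wiring such options up by hand with ovn-nbctl; 'vm-port' is a hypothetical logical port name and the server_id/server_mac/lease_time values are examples, with only the 192.168.0.0/24 subnet taken from the log:

    # Illustrative only: how OVN native DHCP (the pinctrl replies above)
    # is wired up in the northbound DB.
    import subprocess

    def nbctl(*args):
        return subprocess.run(('ovn-nbctl',) + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    uuid = nbctl('create', 'DHCP_Options', 'cidr=192.168.0.0/24')
    nbctl('set', 'DHCP_Options', uuid,
          'options:server_id=192.168.0.1',
          'options:server_mac=fa:16:3e:00:00:01',
          'options:lease_time=43200')
    nbctl('lsp-set-dhcpv4-options', 'vm-port', uuid)  # hypothetical port
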
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:54:18 compute-0 sudo[421242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:18 compute-0 sudo[421242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:18 compute-0 sudo[421242]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:18 compute-0 sudo[421267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:54:18 compute-0 sudo[421267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:18 compute-0 sudo[421267]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:18 compute-0 sudo[421292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:18 compute-0 sudo[421292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:18 compute-0 sudo[421292]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:18 compute-0 sudo[421317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:54:18 compute-0 sudo[421317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
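The sudo line above shows cephadm's OSD deployment path in full: the mgr's cephadm module connects as ceph-admin, runs the copied cephadm binary (its filename embeds the content digest), and that in turn launches a Ceph container executing ceph-volume lvm batch over three pre-created LVs. A condensed sketch of the same invocation; every argument is copied from the log, and the empty config-json on stdin is a placeholder for what the mgr actually sends:

    # Condensed sketch of the cephadm invocation in the sudo line above.
    import subprocess

    FSID = 'cbd280d3-cbd8-528b-ace6-2b3a887cdcee'
    CEPHADM = ('/var/lib/ceph/%s/cephadm.31206ab20142c8051b6384b731ef7ef7'
               'af2407447fac35b7291e90720452ed8d' % FSID)
    IMAGE = ('quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f'
             '1336506074267a4b47c1bd914a00fec0')

    subprocess.run(
        ['sudo', '/bin/python3', CEPHADM,
         '--env', 'CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group',
         '--image', IMAGE, '--timeout', '895',
         'ceph-volume', '--fsid', FSID, '--config-json', '-', '--',
         'lvm', 'batch', '--no-auto',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2', '--yes', '--no-systemd'],
        input=b'{}', check=True)  # real config-json comes from the mgr
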
Dec 05 01:54:19 compute-0 ceph-mon[192914]: pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec 05 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.163265416 +0000 UTC m=+0.079537319 container create 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.118995356 +0000 UTC m=+0.035267299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:19 compute-0 systemd[1]: Started libpod-conmon-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope.
Dec 05 01:54:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.294256314 +0000 UTC m=+0.210528247 container init 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.313064751 +0000 UTC m=+0.229336664 container start 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.319278005 +0000 UTC m=+0.235549908 container attach 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:54:19 compute-0 hardcore_panini[421392]: 167 167
Dec 05 01:54:19 compute-0 systemd[1]: libpod-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope: Deactivated successfully.
Dec 05 01:54:19 compute-0 podman[421397]: 2025-12-05 01:54:19.423613847 +0000 UTC m=+0.066929925 container died 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 01:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a11b8f7943a43ab4cd9fe8a2af9d6651fd462f2142de5dbb246b802d91b64df-merged.mount: Deactivated successfully.
Dec 05 01:54:19 compute-0 podman[421397]: 2025-12-05 01:54:19.510812749 +0000 UTC m=+0.154128747 container remove 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:54:19 compute-0 systemd[1]: libpod-conmon-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope: Deactivated successfully.
Dec 05 01:54:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec 05 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.816663025 +0000 UTC m=+0.090672621 container create cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.775399919 +0000 UTC m=+0.049409595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:19 compute-0 systemd[1]: Started libpod-conmon-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope.
Dec 05 01:54:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.995010929 +0000 UTC m=+0.269020535 container init cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:54:20 compute-0 podman[421419]: 2025-12-05 01:54:20.020264377 +0000 UTC m=+0.294274003 container start cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:54:20 compute-0 podman[421419]: 2025-12-05 01:54:20.033269901 +0000 UTC m=+0.307279507 container attach cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:54:20 compute-0 nova_compute[349548]: 2025-12-05 01:54:20.726 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:21 compute-0 ceph-mon[192914]: pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec 05 01:54:21 compute-0 sharp_leakey[421435]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:54:21 compute-0 sharp_leakey[421435]: --> relative data size: 1.0
Dec 05 01:54:21 compute-0 sharp_leakey[421435]: --> All data devices are unavailable
Dec 05 01:54:21 compute-0 systemd[1]: libpod-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Deactivated successfully.
Dec 05 01:54:21 compute-0 systemd[1]: libpod-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Consumed 1.200s CPU time.
Dec 05 01:54:21 compute-0 podman[421419]: 2025-12-05 01:54:21.302928218 +0000 UTC m=+1.576937824 container died cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac-merged.mount: Deactivated successfully.
Dec 05 01:54:21 compute-0 podman[421419]: 2025-12-05 01:54:21.383069092 +0000 UTC m=+1.657078688 container remove cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:54:21 compute-0 systemd[1]: libpod-conmon-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Deactivated successfully.
Dec 05 01:54:21 compute-0 sudo[421317]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:21 compute-0 nova_compute[349548]: 2025-12-05 01:54:21.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:21 compute-0 sudo[421476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:21 compute-0 sudo[421476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 177 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 29 KiB/s wr, 26 op/s
Dec 05 01:54:21 compute-0 sudo[421476]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:21 compute-0 sudo[421501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:54:21 compute-0 sudo[421501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:21 compute-0 sudo[421501]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:21 compute-0 sudo[421526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:21 compute-0 sudo[421526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:21 compute-0 sudo[421526]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:21 compute-0 sudo[421551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:54:21 compute-0 sudo[421551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.408518421 +0000 UTC m=+0.065773893 container create 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 01:54:22 compute-0 systemd[1]: Started libpod-conmon-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope.
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.384784146 +0000 UTC m=+0.042039638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.538392098 +0000 UTC m=+0.195647620 container init 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.554326994 +0000 UTC m=+0.211582456 container start 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.559817778 +0000 UTC m=+0.217073250 container attach 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 01:54:22 compute-0 sleepy_lewin[421633]: 167 167
Dec 05 01:54:22 compute-0 systemd[1]: libpod-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope: Deactivated successfully.
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.568983785 +0000 UTC m=+0.226239287 container died 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 01:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2f21f1185ad79198825176b58b1ecbfb765d28d20ed06f614d90178c1818a3-merged.mount: Deactivated successfully.
Dec 05 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.624438128 +0000 UTC m=+0.281693610 container remove 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 01:54:22 compute-0 systemd[1]: libpod-conmon-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope: Deactivated successfully.
Dec 05 01:54:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.844224893 +0000 UTC m=+0.052727218 container create 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:54:22 compute-0 systemd[1]: Started libpod-conmon-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope.
Dec 05 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.825103988 +0000 UTC m=+0.033606313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.981302592 +0000 UTC m=+0.189804947 container init 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:54:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:54:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6573 writes, 26K keys, 6573 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6573 writes, 1296 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 694 writes, 1830 keys, 694 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s
                                            Interval WAL: 694 writes, 301 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.999700087 +0000 UTC m=+0.208202412 container start 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.004719158 +0000 UTC m=+0.213221493 container attach 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:54:23 compute-0 nova_compute[349548]: 2025-12-05 01:54:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:23 compute-0 nova_compute[349548]: 2025-12-05 01:54:23.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:23 compute-0 ceph-mon[192914]: pgmap v1349: 321 pgs: 321 active+clean; 177 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 29 KiB/s wr, 26 op/s
Dec 05 01:54:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]: {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     "0": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "devices": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "/dev/loop3"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             ],
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_name": "ceph_lv0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_size": "21470642176",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "name": "ceph_lv0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "tags": {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_name": "ceph",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.crush_device_class": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.encrypted": "0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_id": "0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.vdo": "0"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             },
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "vg_name": "ceph_vg0"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         }
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     ],
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     "1": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "devices": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "/dev/loop4"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             ],
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_name": "ceph_lv1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_size": "21470642176",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "name": "ceph_lv1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "tags": {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_name": "ceph",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.crush_device_class": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.encrypted": "0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_id": "1",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.vdo": "0"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             },
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "vg_name": "ceph_vg1"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         }
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     ],
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     "2": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "devices": [
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "/dev/loop5"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             ],
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_name": "ceph_lv2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_size": "21470642176",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "name": "ceph_lv2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "tags": {
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.cluster_name": "ceph",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.crush_device_class": "",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.encrypted": "0",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osd_id": "2",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:                 "ceph.vdo": "0"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             },
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "type": "block",
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:             "vg_name": "ceph_vg2"
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:         }
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]:     ]
Dec 05 01:54:23 compute-0 nervous_heisenberg[421672]: }
Dec 05 01:54:23 compute-0 systemd[1]: libpod-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope: Deactivated successfully.
Dec 05 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.846706719 +0000 UTC m=+1.055209034 container died 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6-merged.mount: Deactivated successfully.
Dec 05 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.926478733 +0000 UTC m=+1.134981048 container remove 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:54:23 compute-0 systemd[1]: libpod-conmon-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope: Deactivated successfully.
Dec 05 01:54:23 compute-0 sudo[421551]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:54:24 compute-0 sudo[421692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:24 compute-0 sudo[421692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:24 compute-0 sudo[421692]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:24 compute-0 sudo[421717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:54:24 compute-0 sudo[421717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:24 compute-0 sudo[421717]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:24 compute-0 sudo[421742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:24 compute-0 sudo[421742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:24 compute-0 sudo[421742]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:24 compute-0 sudo[421767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:54:24 compute-0 sudo[421767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.619 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.620 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.620 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:54:24 compute-0 podman[421831]: 2025-12-05 01:54:24.946077966 +0000 UTC m=+0.077704707 container create c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:24.920446699 +0000 UTC m=+0.052073470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:25 compute-0 systemd[1]: Started libpod-conmon-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope.
Dec 05 01:54:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.083353311 +0000 UTC m=+0.214980072 container init c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.098760812 +0000 UTC m=+0.230387553 container start c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:54:25 compute-0 ceph-mon[192914]: pgmap v1350: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.103342731 +0000 UTC m=+0.234969482 container attach c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:54:25 compute-0 youthful_elbakyan[421847]: 167 167
Dec 05 01:54:25 compute-0 systemd[1]: libpod-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope: Deactivated successfully.
Dec 05 01:54:25 compute-0 conmon[421847]: conmon c4fd77345ef6dead86e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope/container/memory.events
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.11581456 +0000 UTC m=+0.247441321 container died c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 01:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-388d760a135427145ff715cf7cbc370ad8d461857c0e6deb08c1c2bc1c8eafea-merged.mount: Deactivated successfully.
Dec 05 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.166104628 +0000 UTC m=+0.297731369 container remove c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:54:25 compute-0 systemd[1]: libpod-conmon-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope: Deactivated successfully.
Dec 05 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.439737482 +0000 UTC m=+0.096917015 container create b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.410656177 +0000 UTC m=+0.067835790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:54:25 compute-0 systemd[1]: Started libpod-conmon-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope.
Dec 05 01:54:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.58534252 +0000 UTC m=+0.242522133 container init b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.618379955 +0000 UTC m=+0.275559488 container start b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.623175049 +0000 UTC m=+0.280354662 container attach b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.786 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.815 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.815 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
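The network_info payload that _heal_instance_info_cache logs above is plain JSON once lifted out of the line. A minimal sketch for pulling the fixed and floating addresses out of it, assuming the list was saved to network_info.json (a hypothetical file; the structure follows the log line):

```python
import json

# The cached VIF list as logged by update_instance_cache_with_nw_info,
# saved to network_info.json (hypothetical file).
with open("network_info.json") as f:
    vifs = json.load(f)

for vif in vifs:
    print(vif["id"], vif["address"], "active" if vif["active"] else "down")
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("  fixed:", ip["address"], "in", subnet["cidr"])
            for fip in ip.get("floating_ips", []):
                print("  floating:", fip["address"])
```

For the instance above this yields 192.168.0.23 on 192.168.0.0/24 with floating IP 192.168.122.213.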
Dec 05 01:54:26 compute-0 nova_compute[349548]: 2025-12-05 01:54:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:26 compute-0 ceph-mon[192914]: pgmap v1351: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:54:26 compute-0 nova_compute[349548]: 2025-12-05 01:54:26.520 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016563471666015497 of space, bias 1.0, pg target 0.4969041499804649 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
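Each pg_autoscaler line above follows a fixed shape, so a run like this can be lifted into structured data, e.g. to spot pools whose quantized target diverges from the current pg_num (cephfs.cephfs.meta shows 16 vs 32 above). A sketch under the assumption that the lines are read from the journal; the 3.0 threshold is the autoscaler's documented default, below which it leaves pg_num alone:

```python
import re

# Shape of the pg_autoscaler lines logged above.
PAT = re.compile(
    r"Pool '(?P<pool>[^']+)' root_id (?P<root>-?\d+) "
    r"using (?P<usage>[0-9.e+-]+) of space, bias (?P<bias>[0-9.]+), "
    r"pg target (?P<target>[0-9.e+-]+) quantized to (?P<q>\d+) "
    r"\(current (?P<cur>\d+)\)"
)

line = ("Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 "
        "of space, bias 4.0, pg target 0.0006104707950771635 "
        "quantized to 16 (current 32)")
m = PAT.search(line)
if m:
    q, cur = int(m["q"]), int(m["cur"])
    # Default autoscaler threshold is 3.0: no resize until the larger of
    # (q, cur) is at least 3x the smaller.
    ratio = max(q, cur) / min(q, cur)
    print(m["pool"], f"target={q} current={cur} ratio={ratio:.1f}")
```

For the meta pool the ratio is 2.0, below the threshold, which is consistent with the pool staying at 32 PGs in the log.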
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]: {
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_id": 0,
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "type": "bluestore"
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     },
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_id": 1,
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "type": "bluestore"
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     },
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_id": 2,
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:         "type": "bluestore"
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]:     }
Dec 05 01:54:26 compute-0 ecstatic_satoshi[421890]: }
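The ecstatic_satoshi container above is cephadm running a one-shot command inside the Ceph image, and the JSON it prints maps OSD UUIDs to their backing devices; the shape is consistent with `ceph-volume raw list` output. A minimal sketch parsing it, assuming the block was saved to osds.json (hypothetical file):

```python
import json

# The JSON block printed by the one-shot container, saved to osds.json.
with open("osds.json") as f:
    osds = json.load(f)

for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    assert info["osd_uuid"] == osd_uuid  # keys repeat the per-OSD uuid
    print(f"osd.{info['osd_id']}: {info['device']} "
          f"({info['type']}, cluster {info['ceph_fsid']})")
```

The mon_command lines that follow (config-key set mgr/cephadm/host.compute-0.devices.0) are consistent with cephadm caching this device inventory in the monitor's config-key store.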
Dec 05 01:54:26 compute-0 systemd[1]: libpod-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Deactivated successfully.
Dec 05 01:54:26 compute-0 systemd[1]: libpod-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Consumed 1.212s CPU time.
Dec 05 01:54:26 compute-0 podman[421923]: 2025-12-05 01:54:26.920981435 +0000 UTC m=+0.060952748 container died b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 01:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79-merged.mount: Deactivated successfully.
Dec 05 01:54:27 compute-0 podman[421923]: 2025-12-05 01:54:27.007380855 +0000 UTC m=+0.147352158 container remove b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 01:54:27 compute-0 systemd[1]: libpod-conmon-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Deactivated successfully.
Dec 05 01:54:27 compute-0 sudo[421767]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:27 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 77dc7518-5cf2-4ff4-8051-1567cf658588 does not exist
Dec 05 01:54:27 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8771fc56-4f54-4ab2-801c-9fb23e523ee8 does not exist
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.100 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.101 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:54:27 compute-0 sudo[421938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:54:27 compute-0 sudo[421938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:27 compute-0 sudo[421938]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:27 compute-0 sudo[421966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:54:27 compute-0 sudo[421966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:54:27 compute-0 sudo[421966]: pam_unix(sudo:session): session closed for user root
Dec 05 01:54:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798772681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.583 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
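nova_compute derives Ceph-backed disk capacity by shelling out to the exact command logged above. The same probe by hand, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable on this host; the JSON key names are assumptions based on recent Ceph releases and should be verified against your version:

```python
import json
import subprocess

# Same command nova_compute runs above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
df = json.loads(out)

# "total_avail_bytes" and "max_avail" as in recent Ceph releases (assumed).
print("cluster avail bytes:", df["stats"]["total_avail_bytes"])
for pool in df["pools"]:
    if pool["name"] == "vms":  # nova's RBD pool in this deployment (assumed)
        print("vms max_avail bytes:", pool["stats"]["max_avail"])
```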
Dec 05 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.738 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.745 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.745 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.746 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:54:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:54:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3798772681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.306 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.308 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3505MB free_disk=59.888919830322266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.309 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.657 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.659 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.738 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:54:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:54:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.2 total, 600.0 interval
                                            Cumulative writes: 8074 writes, 32K keys, 8074 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8074 writes, 1655 syncs, 4.88 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 887 writes, 3287 keys, 887 commit groups, 1.0 writes per commit group, ingest: 3.45 MB, 0.01 MB/s
                                            Interval WAL: 887 writes, 328 syncs, 2.70 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
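The "writes per sync" figure in the WAL lines of the RocksDB dump is simply writes divided by syncs, which the logged numbers confirm:

```python
# writes / syncs reproduces the printed 'writes per sync' values.
print(round(8074 / 1655, 2))  # 4.88  (Cumulative WAL above)
print(round(887 / 328, 2))    # 2.7   (Interval WAL above, printed as 2.70)
print(round(6650 / 1298, 2))  # 5.12  (the second OSD's dump further below)
```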
Dec 05 01:54:29 compute-0 ceph-mon[192914]: pgmap v1352: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec 05 01:54:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:54:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807330372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.259 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.272 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.295 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.329 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.329 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
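The inventory data logged just above (total, reserved, and allocation_ratio per resource class) turns into placement's schedulable capacity as (total - reserved) * allocation_ratio. With this host's figures:

```python
# Effective capacity per resource class: (total - reserved) * allocation_ratio.
# Figures copied from the inventory line logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "schedulable:", cap)
# -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
```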
Dec 05 01:54:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec 05 01:54:29 compute-0 podman[158197]: time="2025-12-05T01:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:54:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:54:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8628 "" "Go-http-client/1.1"
Dec 05 01:54:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2807330372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:54:30 compute-0 podman[422034]: 2025-12-05 01:54:30.699423583 +0000 UTC m=+0.101725180 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 01:54:30 compute-0 podman[422033]: 2025-12-05 01:54:30.721876532 +0000 UTC m=+0.129933340 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
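The health_status=healthy events above come from podman's healthcheck timers executing each container's configured test. The same checks can be driven by hand; a sketch using `podman healthcheck run`, which exits 0 when the test passes (container names taken from the log):

```python
import subprocess

# Manually run the configured healthcheck for the containers seen above.
for name in ("podman_exporter", "ovn_metadata_agent"):
    r = subprocess.run(["podman", "healthcheck", "run", name])
    status = "healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})"
    print(name, status)
```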
Dec 05 01:54:30 compute-0 nova_compute[349548]: 2025-12-05 01:54:30.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:31 compute-0 ceph-mon[192914]: pgmap v1353: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec 05 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:54:31 compute-0 nova_compute[349548]: 2025-12-05 01:54:31.522 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec 05 01:54:32 compute-0 ceph-mon[192914]: pgmap v1354: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec 05 01:54:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:32 compute-0 podman[422076]: 2025-12-05 01:54:32.708743295 +0000 UTC m=+0.118378587 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 01:54:32 compute-0 podman[422077]: 2025-12-05 01:54:32.745816683 +0000 UTC m=+0.134563570 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:54:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec 05 01:54:34 compute-0 ceph-mon[192914]: pgmap v1355: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec 05 01:54:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:54:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 01:54:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 2400.1 total, 600.0 interval
                                            Cumulative writes: 6650 writes, 27K keys, 6650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 6650 writes, 1298 syncs, 5.12 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 733 writes, 2726 keys, 733 commit groups, 1.0 writes per commit group, ingest: 3.29 MB, 0.01 MB/s
                                            Interval WAL: 733 writes, 277 syncs, 2.65 writes per sync, written: 0.00 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 01:54:35 compute-0 nova_compute[349548]: 2025-12-05 01:54:35.741 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:36 compute-0 nova_compute[349548]: 2025-12-05 01:54:36.524 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:36 compute-0 ceph-mon[192914]: pgmap v1356: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:54:36 compute-0 podman[422113]: 2025-12-05 01:54:36.717640906 +0000 UTC m=+0.119430276 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec 05 01:54:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:54:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 01:54:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.317 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.318 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.328 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.330 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 01:54:38 compute-0 ceph-mon[192914]: pgmap v1357: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.838 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 05 Dec 2025 01:54:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-07a1c91e-44a9-4e80-ba31-883598b9668d x-openstack-request-id: req-07a1c91e-44a9-4e80-ba31-883598b9668d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.839 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5", "name": "vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:53:30Z", "updated": "2025-12-05T01:53:42Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.25", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:a7:22"}, {"version": 4, "addr": "192.168.122.236", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:a7:22"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:53:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.839 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 used request id req-07a1c91e-44a9-4e80-ba31-883598b9668d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.841 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.846 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.849 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:54:38.850553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:54:38.853059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.881 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.882 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.882 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.915 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.916 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.916 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.955 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.956 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.957 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.959 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:54:38.959282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:54:38.961776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>]
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:54:38.963447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.025 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.026 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.026 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.114 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.115 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.115 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.178 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.178 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.179 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:54:39.194300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.194 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:54:39.203155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:54:39.205854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41705472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:54:39.211916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.216 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:54:39.216039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.237 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.258 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.277 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 6939125600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:54:39.279653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9113944897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:54:39.282998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
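The block above is one complete iteration of the cycle this agent repeats for every meter: discovery, a coordination check against the (empty) hashring, a heartbeat update, per-resource sample conversion in _stats_to_sample, and a closing "Finished polling" line. A minimal, self-contained Python sketch of that control flow, using simplified stand-in names (the real logic is _internal_pollster_run in ceilometer/polling/manager.py):

    import datetime

    def run_pollster(name, discover, get_samples, heartbeats, group=None):
        # "Executing discovery process ... discovery method [local_instances]"
        resources = discover("local_instances")
        # "Checking if we need coordination ... group name [None]":
        # with no coordination group there is no hashring filtering.
        if group is None:
            pass
        # "Pollster heartbeat update: <name>"
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        # one sample per resource/device, logged by _stats_to_sample
        return list(get_samples(resources))  # then "Finished polling pollster"

    # Illustrative use with stub callables standing in for the agent:
    hb = {}
    samples = run_pollster("disk.device.write.requests",
                           discover=lambda method: ["instance-1"],
                           get_samples=lambda res: [(r, 224) for r in res],
                           heartbeats=hb)
    print(samples, hb)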
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:54:39.285834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.289 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 / tap4341bf52-6b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.289 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.293 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
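The .delta meters are computed against the previous cached counter reading per (instance, vNIC); a first reading has nothing to subtract, which is what the "No delta meter predecessor" line for 7cc97c2c / tap4341bf52-6b records, and the delta is then reported as 0 (see that instance's network.incoming.bytes.delta sample further down). A hedged sketch of that bookkeeping, with a hypothetical cache and an illustrative second reading:

    _prev = {}  # hypothetical cache keyed by (instance_id, vnic)

    def delta(instance_id, vnic, value):
        key = (instance_id, vnic)
        prev = _prev.get(key)
        _prev[key] = value
        if prev is None:
            return 0  # "No delta meter predecessor": first sample reports 0
        return max(value - prev, 0)  # cumulative counters should not decrease

    print(delta("7cc97c2c", "tap4341bf52-6b", 1486))  # first reading -> 0
    print(delta("7cc97c2c", "tap4341bf52-6b", 1570))  # illustrative next cycle -> 84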
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:54:39.297960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:54:39.299367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
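Every disk.device.* meter above yields one sample per attached block device, which is why each instance shows three values here (two 1 GiB allocations plus a much smaller device, likely a config drive). A hedged grouping sketch; the device names are hypothetical since the debug lines omit them:

    samples = [  # (instance, device, volume) with values from the log above
        ("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5", "vda", 1073741824),
        ("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5", "vdb", 1073741824),
        ("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5", "vdc", 583680),
    ]
    per_instance = {}
    for instance, device, volume in samples:
        per_instance.setdefault(instance, {})[device] = volume
    print(per_instance)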
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:54:39.302284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:54:39.303585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4698 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:54:39.305330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:54:39.306732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>]
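This is the one ERROR in the cycle: LibvirtInspector exposes no instantaneous rate statistics, so OutgoingBytesRatePollster raises PollsterPermanentError and the manager blacklists the affected resources for this source rather than retrying forever. A hedged, self-contained sketch of that path (simplified names; the real classes live under ceilometer.polling):

    class PollsterPermanentError(Exception):
        # carries the resources that can never be polled successfully
        def __init__(self, resources):
            self.fail_res_list = resources

    def poll_once(get_samples, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        if not todo:
            return []  # everything already blacklisted: silently skipped
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling ... anymore!"
            blacklist.extend(err.fail_res_list)
            return []

    blacklist = []
    def never_works(res):
        raise PollsterPermanentError(res)  # inspector has no rate data
    poll_once(never_works, ["vm-a"], blacklist)         # logs the error once
    print(poll_once(never_works, ["vm-a"], blacklist))  # skipped -> []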
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.64453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:54:39.307616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.15625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:54:39.309314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:54:39.310783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:54:39.312392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 33920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 40500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 277840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
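The cpu meter above is cumulative guest CPU time in nanoseconds (e.g. 277840000000 ns, about 278 s, for b82c3f0e), not a percentage. Deriving a utilisation figure takes two samples of the same instance; a hedged example with an illustrative second reading and vCPU count:

    def cpu_util_pct(cpu_ns_t0, cpu_ns_t1, wall_seconds, vcpus):
        # fraction of available vCPU-time consumed between two polls
        return (cpu_ns_t1 - cpu_ns_t0) / (wall_seconds * 1e9 * vcpus) * 100.0

    # 7cc97c2c read 33920000000 ns above; suppose 36920000000 ns one
    # 300 s polling interval later on a single vCPU:
    print(cpu_util_pct(33920000000, 36920000000, 300, 1))  # -> 1.0 (%)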
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:54:39.314303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:54:39.316068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:54:39.317687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
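The burst of "Finished processing pollster" lines closes the polling task: every meter that started this cycle, including the blacklisted network.outgoing.bytes.rate, is accounted for. A hedged snippet for cross-checking that from a saved journal excerpt (the filename is hypothetical):

    import re

    started, finished = set(), set()
    with open("compute-0-journal.txt") as fh:  # hypothetical saved excerpt
        for line in fh:
            if m := re.search(r"Polling pollster (\S+) in the context", line):
                started.add(m.group(1))
            if m := re.search(r"Finished processing pollster \[([^\]]+)\]", line):
                finished.add(m.group(1))
    print("started but never finished:", sorted(started - finished))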
Dec 05 01:54:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:40 compute-0 ceph-mon[192914]: pgmap v1358: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:40 compute-0 nova_compute[349548]: 2025-12-05 01:54:40.745 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:41 compute-0 nova_compute[349548]: 2025-12-05 01:54:41.526 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:42 compute-0 ceph-mon[192914]: pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:43 compute-0 podman[422135]: 2025-12-05 01:54:43.687842241 +0000 UTC m=+0.096984768 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:54:43 compute-0 podman[422138]: 2025-12-05 01:54:43.698806998 +0000 UTC m=+0.094036315 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 01:54:43 compute-0 podman[422136]: 2025-12-05 01:54:43.715130185 +0000 UTC m=+0.117596985 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:54:43 compute-0 podman[422137]: 2025-12-05 01:54:43.748563521 +0000 UTC m=+0.144991031 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2)
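
The four podman health_status events above are the periodic healthchecks podman runs against the long-lived EDPM containers; each reports health_status=healthy with a zero failing streak, and the config_data blob echoes the healthcheck test and mount each container was created with. The same verdict can be forced on demand; a sketch, assuming the container names from the events are present on this host:

    import subprocess

    # `podman healthcheck run` exits 0 when the container's check passes.
    for name in ("multipathd", "openstack_network_exporter",
                 "node_exporter", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
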
Dec 05 01:54:44 compute-0 ceph-mon[192914]: pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:54:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:54:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:54:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:54:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:54:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
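
The audit lines show client.openstack dispatching two read-only mon commands, `df` and `osd pool get-quota` on the volumes pool, which has the shape of a periodic capacity poll by an RBD-backed OpenStack service (the caller is identified only by address here, so that attribution is an inference). A sketch reproducing the same pair of commands with the CLI, reusing the id/conf arguments that appear verbatim in the nova `ceph df` invocation later in this log:

    import json
    import subprocess

    def mon_cmd(*args):
        # Same mon commands the audit channel shows being dispatched.
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format=json"])
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(df["stats"], quota)
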
Dec 05 01:54:45 compute-0 nova_compute[349548]: 2025-12-05 01:54:45.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:54:46 compute-0 nova_compute[349548]: 2025-12-05 01:54:46.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:46 compute-0 ceph-mon[192914]: pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:48 compute-0 ceph-mon[192914]: pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:50 compute-0 ceph-mon[192914]: pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:54:50 compute-0 nova_compute[349548]: 2025-12-05 01:54:50.754 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:51 compute-0 nova_compute[349548]: 2025-12-05 01:54:51.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 05 01:54:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:52 compute-0 ceph-mon[192914]: pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 05 01:54:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:54 compute-0 ceph-mon[192914]: pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:55 compute-0 nova_compute[349548]: 2025-12-05 01:54:55.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.184 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.185 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:54:56 compute-0 nova_compute[349548]: 2025-12-05 01:54:56.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:54:56 compute-0 ceph-mon[192914]: pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:54:58 compute-0 ceph-mon[192914]: pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:54:59 compute-0 podman[158197]: time="2025-12-05T01:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:54:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:54:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
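
The two access-log entries against the libpod socket map one-to-one onto stock podman commands: `GET /libpod/containers/json?all=true` is `podman ps --all`, and `GET /libpod/containers/stats?stream=false` is `podman stats --no-stream`. The Go-http-client user agent is consistent with the prometheus-podman-exporter seen later in this log, though the log does not name the caller. The CLI equivalents, for comparison against the REST payloads:

    import subprocess

    # CLI equivalents of the two libpod REST calls in the access log.
    print(subprocess.check_output(
        ["podman", "ps", "--all", "--format", "json"], text=True)[:200])
    print(subprocess.check_output(
        ["podman", "stats", "--no-stream", "--format", "json"], text=True)[:200])
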
Dec 05 01:55:00 compute-0 nova_compute[349548]: 2025-12-05 01:55:00.765 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:00 compute-0 ceph-mon[192914]: pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
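
These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server run on the control plane, so no control sockets exist under the /run/ovn and /run/openvswitch paths the container mounts, and the dpif-netdev calls fail because this host uses the kernel datapath (datapath_type=system in the port binding later in this log) rather than a userspace PMD one. A sketch of the socket probe the exporter is effectively making; the glob patterns are assumptions based on standard OVS/OVN socket layouts, not taken from the log:

    import glob

    # Look for the control sockets the exporter's appctl calls need.
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket found")
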
Dec 05 01:55:01 compute-0 nova_compute[349548]: 2025-12-05 01:55:01.540 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:55:01 compute-0 podman[422221]: 2025-12-05 01:55:01.674311172 +0000 UTC m=+0.088679845 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:55:01 compute-0 podman[422220]: 2025-12-05 01:55:01.698569821 +0000 UTC m=+0.114154978 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 01:55:02 compute-0 anacron[91608]: Job `cron.monthly' started
Dec 05 01:55:02 compute-0 anacron[91608]: Job `cron.monthly' terminated
Dec 05 01:55:02 compute-0 anacron[91608]: Normal exit (3 jobs run)
Dec 05 01:55:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:02 compute-0 ceph-mon[192914]: pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec 05 01:55:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Dec 05 01:55:03 compute-0 podman[422263]: 2025-12-05 01:55:03.726101574 +0000 UTC m=+0.125394463 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec 05 01:55:03 compute-0 podman[422262]: 2025-12-05 01:55:03.757310258 +0000 UTC m=+0.163837519 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec 05 01:55:04 compute-0 ceph-mon[192914]: pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Dec 05 01:55:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:05 compute-0 nova_compute[349548]: 2025-12-05 01:55:05.771 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:06 compute-0 nova_compute[349548]: 2025-12-05 01:55:06.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:06 compute-0 ceph-mon[192914]: pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:07 compute-0 podman[422298]: 2025-12-05 01:55:07.729691406 +0000 UTC m=+0.133859299 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vcs-type=git, container_name=kepler, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_id=edpm)
Dec 05 01:55:08 compute-0 ceph-mon[192914]: pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:10 compute-0 nova_compute[349548]: 2025-12-05 01:55:10.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:10 compute-0 ceph-mon[192914]: pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:11 compute-0 nova_compute[349548]: 2025-12-05 01:55:11.545 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:12 compute-0 ceph-mon[192914]: pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:14 compute-0 podman[422318]: 2025-12-05 01:55:14.713617236 +0000 UTC m=+0.123316475 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 01:55:14 compute-0 podman[422319]: 2025-12-05 01:55:14.745864529 +0000 UTC m=+0.142994846 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:55:14 compute-0 podman[422326]: 2025-12-05 01:55:14.753733729 +0000 UTC m=+0.131691739 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Dec 05 01:55:14 compute-0 podman[422320]: 2025-12-05 01:55:14.803307966 +0000 UTC m=+0.175275158 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 01:55:14 compute-0 ceph-mon[192914]: pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:15 compute-0 nova_compute[349548]: 2025-12-05 01:55:15.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:55:16
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
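
The balancer block records one upmap optimization pass over the cluster's eleven pools that prepared 0 of an allowed 10 changes: with every PG already active+clean and the max-misplaced budget at 5%, there is nothing to move. The same verdict is available on demand with a stock command:

    import subprocess

    # Current balancer mode and whether a plan is pending.
    print(subprocess.check_output(["ceph", "balancer", "status"], text=True))
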
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:16 compute-0 nova_compute[349548]: 2025-12-05 01:55:16.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
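
The rbd_support handlers are reloading their trash-purge and mirror-snapshot schedules; each of vms/volumes/backups/images appears twice because both handlers walk the pool list independently, and the empty start_after= indicates a full reload. No schedule contents appear in the log; if any are configured they can be listed with the rbd CLI, as in this sketch:

    import subprocess

    # List trash-purge and mirror-snapshot schedules across all pools.
    for sub in (["trash", "purge", "schedule", "ls", "--recursive"],
                ["mirror", "snapshot", "schedule", "ls", "--recursive"]):
        res = subprocess.run(["rbd", *sub], capture_output=True, text=True)
        print(" ".join(sub), "->", res.stdout.strip() or "(none)")
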
Dec 05 01:55:16 compute-0 ceph-mon[192914]: pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:18 compute-0 ceph-mon[192914]: pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:20 compute-0 nova_compute[349548]: 2025-12-05 01:55:20.787 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:20 compute-0 ceph-mon[192914]: pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.553 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.096 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
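
The _reclaim_queued_deletes skip means deferred delete is off: with reclaim_instance_interval at its default of 0, nova deletes instances immediately instead of soft-deleting them for later reclamation. Enabling it is a single nova.conf option; the value below is illustrative, not taken from this deployment:

    [DEFAULT]
    # >0 enables SOFT_DELETED: deleted instances are kept and reclaimed
    # by this periodic task after this many seconds; 0 deletes immediately.
    reclaim_instance_interval = 3600
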
Dec 05 01:55:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:22 compute-0 ceph-mon[192914]: pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 01:55:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:24 compute-0 nova_compute[349548]: 2025-12-05 01:55:24.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:25 compute-0 ceph-mon[192914]: pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:55:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.713 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.714 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.714 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.716 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.791 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:26 compute-0 nova_compute[349548]: 2025-12-05 01:55:26.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016577461621736017 of space, bias 1.0, pg target 0.4973238486520805 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
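
The autoscaler lines carry enough data to reconstruct the arithmetic: each pool's raw PG target is its fraction of the 64411926528-byte root, times its bias, times a cluster-wide PG budget, and every line above fits a budget of 300 — plausibly mon_target_pg_per_osd (default 100) times 3 OSDs, though that decomposition is an inference from the numbers, not stated in the log. The raw target is then quantized to a power of two, with damping that leaves pg_num alone for small deltas (hence targets near 0 still "quantized to 32 (current 32)"). Checking a few pools against the logged figures:

    # Reproduce the autoscaler's raw pg targets from the logged ratios.
    PG_BUDGET = 300  # fits every line; likely 100 PGs/OSD * 3 OSDs (inferred)
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("vms",                0.0016577461621736017,  1.0),
        ("images",             0.00025334537995702286, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
    ]
    for name, ratio, bias in pools:
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # .mgr -> 0.0021557249951162337, vms -> 0.4973238486520805, ...
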
Dec 05 01:55:27 compute-0 ceph-mon[192914]: pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:27 compute-0 sudo[422407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:27 compute-0 sudo[422407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:27 compute-0 sudo[422407]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:27 compute-0 sudo[422432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:55:27 compute-0 sudo[422432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:27 compute-0 sudo[422432]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.730 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.750 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.752 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.753 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.756 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.757 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:27 compute-0 sudo[422457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:27 compute-0 sudo[422457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:27 compute-0 sudo[422457]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.794 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.797 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.798 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.799 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:27 compute-0 sudo[422482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:55:27 compute-0 sudo[422482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467279240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.283 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.403 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.403 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.404 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.410 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.410 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.411 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:55:28 compute-0 sudo[422482]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c04683a5-4dec-4407-a718-63cfc7259477 does not exist
Dec 05 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cdec43e4-b4ed-4759-bd9e-5cfb2cdb0e63 does not exist
Dec 05 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ec700d17-11d5-415a-b013-97c52eb733bd does not exist
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:55:28 compute-0 sudo[422559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:28 compute-0 sudo[422559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:28 compute-0 sudo[422559]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:28 compute-0 sudo[422584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:55:28 compute-0 sudo[422584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:28 compute-0 sudo[422584]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.938 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.939 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.88883590698242GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:28 compute-0 sudo[422609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:28 compute-0 sudo[422609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:28 compute-0 sudo[422609]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:29 compute-0 ceph-mon[192914]: pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1467279240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:55:29 compute-0 sudo[422634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:55:29 compute-0 sudo[422634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.189 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.190 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.434 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.6272694 +0000 UTC m=+0.073073688 container create 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.601783146 +0000 UTC m=+0.047587444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:29 compute-0 systemd[1]: Started libpod-conmon-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope.
Dec 05 01:55:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:29 compute-0 podman[158197]: time="2025-12-05T01:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.756396756 +0000 UTC m=+0.202201154 container init 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.782304462 +0000 UTC m=+0.228108770 container start 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.787165958 +0000 UTC m=+0.232970296 container attach 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:55:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45191 "" "Go-http-client/1.1"
Dec 05 01:55:29 compute-0 funny_tesla[422734]: 167 167
Dec 05 01:55:29 compute-0 systemd[1]: libpod-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope: Deactivated successfully.
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.804127513 +0000 UTC m=+0.249931811 container died 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b98137664089a6d1d9c2124d93d50a22566e4a0dcc31dc818ebe3f26d918d1ea-merged.mount: Deactivated successfully.
Dec 05 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.8618808 +0000 UTC m=+0.307685108 container remove 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:55:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec 05 01:55:29 compute-0 systemd[1]: libpod-conmon-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope: Deactivated successfully.
Dec 05 01:55:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:55:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100140819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.937 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.949 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:29.985 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:55:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:29.986 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.999 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.000 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.001 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2100140819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.153530128 +0000 UTC m=+0.089442266 container create 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.122101298 +0000 UTC m=+0.058013526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:30 compute-0 systemd[1]: Started libpod-conmon-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope.
Dec 05 01:55:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.281332627 +0000 UTC m=+0.217244785 container init 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.294693102 +0000 UTC m=+0.230605240 container start 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.298736355 +0000 UTC m=+0.234648493 container attach 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:31 compute-0 ceph-mon[192914]: pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:55:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:55:31 compute-0 laughing_grothendieck[422777]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:55:31 compute-0 laughing_grothendieck[422777]: --> relative data size: 1.0
Dec 05 01:55:31 compute-0 laughing_grothendieck[422777]: --> All data devices are unavailable
Dec 05 01:55:31 compute-0 systemd[1]: libpod-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Deactivated successfully.
Dec 05 01:55:31 compute-0 podman[422760]: 2025-12-05 01:55:31.510312916 +0000 UTC m=+1.446225094 container died 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:55:31 compute-0 systemd[1]: libpod-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Consumed 1.157s CPU time.
Dec 05 01:55:31 compute-0 nova_compute[349548]: 2025-12-05 01:55:31.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0-merged.mount: Deactivated successfully.
Dec 05 01:55:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:31 compute-0 podman[422760]: 2025-12-05 01:55:31.622779456 +0000 UTC m=+1.558691604 container remove 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:55:31 compute-0 systemd[1]: libpod-conmon-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Deactivated successfully.
Dec 05 01:55:31 compute-0 sudo[422634]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:31 compute-0 sudo[422817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:31 compute-0 sudo[422817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:31 compute-0 sudo[422817]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:31 compute-0 sudo[422844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:55:31 compute-0 sudo[422844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:31 compute-0 sudo[422844]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:31 compute-0 podman[422841]: 2025-12-05 01:55:31.927091598 +0000 UTC m=+0.107601464 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:55:31 compute-0 podman[422842]: 2025-12-05 01:55:31.940246087 +0000 UTC m=+0.121842164 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:55:31 compute-0 sudo[422907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:31 compute-0 sudo[422907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:31 compute-0 sudo[422907]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:32 compute-0 sudo[422932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:55:32 compute-0 sudo[422932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:32 compute-0 nova_compute[349548]: 2025-12-05 01:55:32.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.57417097 +0000 UTC m=+0.058657923 container create b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.548147692 +0000 UTC m=+0.032634655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:32 compute-0 systemd[1]: Started libpod-conmon-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope.
Dec 05 01:55:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.721051) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732721110, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1389, "num_deletes": 505, "total_data_size": 1659088, "memory_usage": 1697256, "flush_reason": "Manual Compaction"}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.723607364 +0000 UTC m=+0.208094337 container init b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732731093, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 985475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27571, "largest_seqno": 28959, "table_properties": {"data_size": 980561, "index_size": 1798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15287, "raw_average_key_size": 19, "raw_value_size": 968081, "raw_average_value_size": 1217, "num_data_blocks": 82, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899620, "oldest_key_time": 1764899620, "file_creation_time": 1764899732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10121 microseconds, and 5010 cpu microseconds.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.731173) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 985475 bytes OK
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.731189) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734105) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734120) EVENT_LOG_v1 {"time_micros": 1764899732734116, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1651836, prev total WAL file size 1651836, number of live WAL files 2.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.740990591 +0000 UTC m=+0.225477544 container start b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.735617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(962KB)], [62(8704KB)]
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732735671, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9899049, "oldest_snapshot_seqno": -1}
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.75024693 +0000 UTC m=+0.234733933 container attach b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:55:32 compute-0 kind_murdock[423010]: 167 167
Dec 05 01:55:32 compute-0 systemd[1]: libpod-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope: Deactivated successfully.
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.755393275 +0000 UTC m=+0.239880248 container died b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 4843 keys, 7174974 bytes, temperature: kUnknown
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732806703, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7174974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7143702, "index_size": 18042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122173, "raw_average_key_size": 25, "raw_value_size": 7057081, "raw_average_value_size": 1457, "num_data_blocks": 748, "num_entries": 4843, "num_filter_entries": 4843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.807318) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7174974 bytes
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.810183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.5 rd, 100.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(17.3) write-amplify(7.3) OK, records in: 5816, records dropped: 973 output_compression: NoCompression
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.810210) EVENT_LOG_v1 {"time_micros": 1764899732810197, "job": 34, "event": "compaction_finished", "compaction_time_micros": 71486, "compaction_time_cpu_micros": 23416, "output_level": 6, "num_output_files": 1, "total_output_size": 7174974, "num_input_records": 5816, "num_output_records": 4843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732812116, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 05 01:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c866258acbf1c15df57c38396f4fe3cd81cdb63024ee95b238626912c24d59-merged.mount: Deactivated successfully.
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732815360, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.735415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.838266856 +0000 UTC m=+0.322753809 container remove b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 05 01:55:32 compute-0 systemd[1]: libpod-conmon-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope: Deactivated successfully.
Dec 05 01:55:33 compute-0 ceph-mon[192914]: pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.108198845 +0000 UTC m=+0.080280549 container create 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.06481945 +0000 UTC m=+0.036901194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:33 compute-0 systemd[1]: Started libpod-conmon-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope.
Dec 05 01:55:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.261429367 +0000 UTC m=+0.233511101 container init 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.278624068 +0000 UTC m=+0.250705772 container start 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 05 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.284914484 +0000 UTC m=+0.256996188 container attach 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:55:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:34 compute-0 practical_solomon[423048]: {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     "0": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "devices": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "/dev/loop3"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             ],
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_name": "ceph_lv0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_size": "21470642176",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "name": "ceph_lv0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "tags": {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_name": "ceph",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.crush_device_class": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.encrypted": "0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_id": "0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.vdo": "0"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             },
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "vg_name": "ceph_vg0"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         }
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     ],
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     "1": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "devices": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "/dev/loop4"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             ],
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_name": "ceph_lv1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_size": "21470642176",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "name": "ceph_lv1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "tags": {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_name": "ceph",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.crush_device_class": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.encrypted": "0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_id": "1",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.vdo": "0"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             },
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "vg_name": "ceph_vg1"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         }
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     ],
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     "2": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "devices": [
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "/dev/loop5"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             ],
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_name": "ceph_lv2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_size": "21470642176",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "name": "ceph_lv2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "tags": {
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.cluster_name": "ceph",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.crush_device_class": "",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.encrypted": "0",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osd_id": "2",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:                 "ceph.vdo": "0"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             },
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "type": "block",
Dec 05 01:55:34 compute-0 practical_solomon[423048]:             "vg_name": "ceph_vg2"
Dec 05 01:55:34 compute-0 practical_solomon[423048]:         }
Dec 05 01:55:34 compute-0 practical_solomon[423048]:     ]
Dec 05 01:55:34 compute-0 practical_solomon[423048]: }
Dec 05 01:55:34 compute-0 systemd[1]: libpod-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope: Deactivated successfully.
Dec 05 01:55:34 compute-0 conmon[423048]: conmon 3030b76e5f1507734042 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope/container/memory.events
Dec 05 01:55:34 compute-0 podman[423033]: 2025-12-05 01:55:34.204763405 +0000 UTC m=+1.176845109 container died 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec 05 01:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a-merged.mount: Deactivated successfully.
Dec 05 01:55:34 compute-0 podman[423033]: 2025-12-05 01:55:34.2902592 +0000 UTC m=+1.262340904 container remove 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:55:34 compute-0 systemd[1]: libpod-conmon-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope: Deactivated successfully.
Dec 05 01:55:34 compute-0 sudo[422932]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:34 compute-0 podman[423066]: 2025-12-05 01:55:34.389178 +0000 UTC m=+0.138149660 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:55:34 compute-0 podman[423057]: 2025-12-05 01:55:34.393378568 +0000 UTC m=+0.137097391 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:55:34 compute-0 sudo[423103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:34 compute-0 sudo[423103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:34 compute-0 sudo[423103]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:34 compute-0 sudo[423130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:55:34 compute-0 sudo[423130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:34 compute-0 sudo[423130]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:34 compute-0 sudo[423155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:34 compute-0 sudo[423155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:34 compute-0 sudo[423155]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:34 compute-0 sudo[423180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:55:34 compute-0 sudo[423180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:35 compute-0 ceph-mon[192914]: pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.317797217 +0000 UTC m=+0.074353083 container create a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.283441425 +0000 UTC m=+0.039997371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:35 compute-0 systemd[1]: Started libpod-conmon-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope.
Dec 05 01:55:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.483144658 +0000 UTC m=+0.239700544 container init a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.496951384 +0000 UTC m=+0.253507260 container start a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.502466059 +0000 UTC m=+0.259021955 container attach a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 01:55:35 compute-0 loving_chebyshev[423261]: 167 167
Dec 05 01:55:35 compute-0 systemd[1]: libpod-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope: Deactivated successfully.
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.506308676 +0000 UTC m=+0.262864542 container died a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-301a7b3eb0f995923555a74a7215040f3bf0646079a7b3cadc6d13af1160d8ee-merged.mount: Deactivated successfully.
Dec 05 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.556739639 +0000 UTC m=+0.313295495 container remove a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 05 01:55:35 compute-0 systemd[1]: libpod-conmon-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope: Deactivated successfully.
Dec 05 01:55:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.795170916 +0000 UTC m=+0.057689306 container create 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 01:55:35 compute-0 nova_compute[349548]: 2025-12-05 01:55:35.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.766497623 +0000 UTC m=+0.029016023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:55:35 compute-0 systemd[1]: Started libpod-conmon-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope.
Dec 05 01:55:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.95096941 +0000 UTC m=+0.213487840 container init 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.966719701 +0000 UTC m=+0.229238081 container start 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.971464083 +0000 UTC m=+0.233982493 container attach 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.164 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.166 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.190 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.301 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.303 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.313 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.314 349552 INFO nova.compute.claims [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Claim successful on node compute-0.ctlplane.example.com
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.495 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.562 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:37 compute-0 charming_moore[423299]: {
Dec 05 01:55:37 compute-0 charming_moore[423299]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_id": 0,
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "type": "bluestore"
Dec 05 01:55:37 compute-0 charming_moore[423299]:     },
Dec 05 01:55:37 compute-0 charming_moore[423299]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_id": 1,
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "type": "bluestore"
Dec 05 01:55:37 compute-0 charming_moore[423299]:     },
Dec 05 01:55:37 compute-0 charming_moore[423299]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_id": 2,
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:55:37 compute-0 charming_moore[423299]:         "type": "bluestore"
Dec 05 01:55:37 compute-0 charming_moore[423299]:     }
Dec 05 01:55:37 compute-0 charming_moore[423299]: }
Dec 05 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887424559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:37 compute-0 systemd[1]: libpod-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Deactivated successfully.
Dec 05 01:55:37 compute-0 podman[423283]: 2025-12-05 01:55:37.109430742 +0000 UTC m=+1.371949132 container died 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 01:55:37 compute-0 systemd[1]: libpod-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Consumed 1.117s CPU time.
Dec 05 01:55:37 compute-0 ceph-mon[192914]: pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1887424559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.125 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a-merged.mount: Deactivated successfully.
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.156 349552 DEBUG nova.compute.provider_tree [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.182 349552 DEBUG nova.scheduler.client.report [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:55:37 compute-0 podman[423283]: 2025-12-05 01:55:37.19003287 +0000 UTC m=+1.452551250 container remove 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.214 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.216 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 01:55:37 compute-0 systemd[1]: libpod-conmon-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Deactivated successfully.
Dec 05 01:55:37 compute-0 sudo[423180]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3196e5ff-30a2-45fb-8aa2-738d35f1adf1 does not exist
Dec 05 01:55:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 600b7757-4334-43ab-97c9-87dd0f29cb56 does not exist
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.288 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.289 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.311 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.359 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 01:55:37 compute-0 sudo[423364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:55:37 compute-0 sudo[423364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:37 compute-0 sudo[423364]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.461 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.466 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.467 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating image(s)
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.512 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:37 compute-0 sudo[423389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:55:37 compute-0 sudo[423389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:55:37 compute-0 sudo[423389]: pam_unix(sudo:session): session closed for user root
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.572 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.623 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.634 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.723 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.724 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.725 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.726 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.759 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.767 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.176 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:55:38 compute-0 ceph-mon[192914]: pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.322 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.578 349552 DEBUG nova.objects.instance [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.645 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:38 compute-0 podman[423580]: 2025-12-05 01:55:38.715059189 +0000 UTC m=+0.123947702 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.715 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.727 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.786 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.787 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.788 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.788 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.830 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.840 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:38.989 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.291 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.496 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.497 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Ensure instance console log exists: /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.497 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.498 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.498 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.811 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Successfully updated port: 2799035c-b9e1-4c24-b031-9824b684480c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.928 349552 DEBUG nova.compute.manager [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-changed-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.929 349552 DEBUG nova.compute.manager [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing instance network info cache due to event network-changed-2799035c-b9e1-4c24-b031-9824b684480c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.929 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.973 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 01:55:40 compute-0 ceph-mon[192914]: pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:55:40 compute-0 nova_compute[349548]: 2025-12-05 01:55:40.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.172 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.205 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.206 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance network_info: |[{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.206 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.207 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.213 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start _get_guest_xml network_info=[{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.229 349552 WARNING nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.252 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.254 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.263 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.263 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.264 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.264 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.268 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.271 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.565 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 210 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 561 KiB/s wr, 30 op/s
Dec 05 01:55:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:55:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003878946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.756 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.757 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:41 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:55:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263597187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.237 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.285 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.292 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:42 compute-0 ceph-mon[192914]: pgmap v1389: 321 pgs: 321 active+clean; 210 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 561 KiB/s wr, 30 op/s
Dec 05 01:55:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1003878946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4263597187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 01:55:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715641047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.784 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.787 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:55:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
Dec 05 01:55:42 compute-0 nova_compute[349548]: YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncy
Dec 05 01:55:42 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=3611d2ae-da33-4e55-aec7-0bec88d3b4e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.788 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.790 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.793 349552 DEBUG nova.objects.instance [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.824 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] End _get_guest_xml xml=<domain type="kvm">
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <uuid>3611d2ae-da33-4e55-aec7-0bec88d3b4e0</uuid>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <name>instance-00000004</name>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <memory>524288</memory>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <metadata>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:name>vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq</nova:name>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 01:55:41</nova:creationTime>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:flavor name="m1.small">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:memory>512</nova:memory>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:ephemeral>1</nova:ephemeral>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <nova:port uuid="2799035c-b9e1-4c24-b031-9824b684480c">
Dec 05 01:55:42 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="192.168.0.169" ipVersion="4"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </metadata>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <system>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="serial">3611d2ae-da33-4e55-aec7-0bec88d3b4e0</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="uuid">3611d2ae-da33-4e55-aec7-0bec88d3b4e0</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </system>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <os>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </os>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <features>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <apic/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </features>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </clock>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </cpu>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   <devices>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </source>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </source>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <target dev="vdb" bus="virtio"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </source>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 01:55:42 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       </auth>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </disk>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:10:64:51"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <target dev="tap2799035c-b9"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </interface>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/console.log" append="off"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </serial>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <video>
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </video>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </rng>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 01:55:42 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 01:55:42 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 01:55:42 compute-0 nova_compute[349548]:   </devices>
Dec 05 01:55:42 compute-0 nova_compute[349548]: </domain>
Dec 05 01:55:42 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
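[note] The XML above is what Nova hands to libvirt. Once the domain is defined, the same document can be read back with the libvirt Python bindings; a minimal sketch, assuming local access to the system libvirtd URI:

    import libvirt

    # Read-only connection; 'qemu:///system' is the usual local URI.
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000004')
    print(dom.XMLDesc())  # the same <domain> document logged above
    conn.close()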
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.824 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Preparing to wait for external event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
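[note] The acquire/release pair above is oslo.concurrency's lockutils at work. A minimal sketch of the same in-process lock, with the lock name copied from the log and a placeholder body:

    from oslo_concurrency import lockutils

    # Same named in-process lock that Nova's synchronized wrapper builds on.
    with lockutils.lock('3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events'):
        pass  # _create_or_get_event() body runs under the lock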
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:55:37Z,user_data='Content-Type: multipart/mixed; boundary="===============8806068633201501371=="
MIME-Version: 1.0

--===============8806068633201501371==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}
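# ('all' routes the output of every cloud-init stage; 'tee -a' echoes it
# to the console while appending it to the file named above)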

--===============8806068633201501371==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected through nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============8806068633201501371==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
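    # cloud-init calls this handler once with ctype "__begin__", then once
    # per MIME part whose type matches list_types(), then once with "__end__".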
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============8806068633201501371==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============8806068633201501371==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
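        # communicate() returns a (stdout, stderr) tuple of byte strings.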
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so its timestamp records when provisioning finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============8806068633201501371==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============8806068633201501371==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============8806068633201501371==--
',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=3611d2ae-da33-4e55-aec7-0bec88d3b4e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
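[note] The user_data value embedded in the Instance record above is a plain MIME multipart, exactly what cloud-init consumes. A minimal sketch of assembling such a payload with the standard library (one placeholder cloud-config part; the part type and filename mirror the log):

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    msg = MIMEMultipart()
    part = MIMEText("output: {all: '| tee -a /var/log/cloud-init-output.log'}",
                    'cloud-config')  # serializes as Content-Type: text/cloud-config
    part.add_header('Content-Disposition', 'attachment', filename='cloud-config')
    msg.attach(part)
    print(msg.as_string())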
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.827 349552 DEBUG os_vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.829 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.829 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.837 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.837 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2799035c-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.838 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2799035c-b9, col_values=(('external_ids', {'iface-id': '2799035c-b9e1-4c24-b031-9824b684480c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:64:51', 'vm-uuid': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
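[note] The two commands in txn n=1 above are ovsdbapp commands issued on behalf of os-vif. A minimal sketch of the same transaction against the local OVSDB; the socket endpoint is an assumption (os-vif normally takes it from configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    port = 'tap2799035c-b9'
    with api.transaction(check_error=True) as txn:
        # idx=0: AddPortCommand
        txn.add(api.add_port('br-int', port, may_exist=True))
        # idx=1: DbSetCommand on the matching Interface row
        txn.add(api.db_set('Interface', port, ('external_ids', {
            'iface-id': '2799035c-b9e1-4c24-b031-9824b684480c',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:10:64:51',
            'vm-uuid': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0'})))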
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.841 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:42 compute-0 NetworkManager[49092]: <info>  [1764899742.8435] manager: (tap2799035c-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.844 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.853 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.854 349552 INFO os_vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9')
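[note] The plug call that produced this message takes an os-vif VIF object plus a small InstanceInfo. A minimal sketch of the same call, with field values copied from the VIFOpenVSwitch repr above (a sketch, not Nova's actual code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    my_vif = vif.VIFOpenVSwitch(
        id='2799035c-b9e1-4c24-b031-9824b684480c',
        address='fa:16:3e:10:64:51',
        vif_name='tap2799035c-b9',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='2799035c-b9e1-4c24-b031-9824b684480c'),
        network=network.Network(id='49f7d2f1-f1ff-4dcc-94db-d088dc8d3183'))
    inst = instance_info.InstanceInfo(
        uuid='3611d2ae-da33-4e55-aec7-0bec88d3b4e0',
        name='instance-00000004')

    os_vif.plug(my_vif, inst)  # the call logged at os_vif/__init__.py:76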
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.914 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.915 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.915 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.916 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:10:64:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.916 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Using config drive
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.962 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:42 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:55:42.787 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-11 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.985 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated VIF entry in instance network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.987 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.022 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.315 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating config drive at /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.328 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0gn_s2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.475 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0gn_s2h" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
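[note] The config drive is a plain ISO 9660 image labelled config-2. The processutils call above can be reproduced directly; a minimal sketch with shortened paths (the /tmp staging directory name is the one from the log):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2',      # the volume label cloud-init looks for
        '/tmp/tmpb0gn_s2h')    # staged metadata tree (path from the log)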
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.519 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.527 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:55:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 234 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:55:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1715641047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.782 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
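[note] The rbd import CLI call above has a direct equivalent in the python-rbd bindings. A minimal sketch, assuming the same client.openstack credentials and vms pool; the destination image name here is a hypothetical placeholder:

    import rados
    import rbd

    # Same identity the CLI used: --id openstack --conf /etc/ceph/ceph.conf
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # destination pool from the log
    try:
        data = open('disk.config', 'rb').read()
        rbd.RBD().create(ioctx, 'example_disk.config', len(data))
        with rbd.Image(ioctx, 'example_disk.config') as image:
            image.write(data, 0)
    finally:
        ioctx.close()
        cluster.shutdown()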
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.783 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deleting local config drive /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config because it was imported into RBD.
Dec 05 01:55:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 01:55:43 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 01:55:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 01:55:43 compute-0 kernel: tap2799035c-b9: entered promiscuous mode
Dec 05 01:55:43 compute-0 NetworkManager[49092]: <info>  [1764899743.9312] manager: (tap2799035c-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.936 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00045|binding|INFO|Claiming lport 2799035c-b9e1-4c24-b031-9824b684480c for this chassis.
Dec 05 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00046|binding|INFO|2799035c-b9e1-4c24-b031-9824b684480c: Claiming fa:16:3e:10:64:51 192.168.0.169
Dec 05 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.947 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:64:51 192.168.0.169'], port_security=['fa:16:3e:10:64:51 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2799035c-b9e1-4c24-b031-9824b684480c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.948 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2799035c-b9e1-4c24-b031-9824b684480c in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis
Dec 05 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.949 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.966 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c4521d-3edd-476d-9615-6e046ecc924e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00047|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c ovn-installed in OVS
Dec 05 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00048|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c up in Southbound
Dec 05 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:43 compute-0 systemd-machined[138700]: New machine qemu-4-instance-00000004.
Dec 05 01:55:43 compute-0 systemd-udevd[423912]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.001 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[c17b6095-3e1b-4b05-87e1-f8694653e056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.005 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a56c6-2bd2-4ddf-aa33-3a47ca72f5f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:44 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec 05 01:55:44 compute-0 NetworkManager[49092]: <info>  [1764899744.0145] device (tap2799035c-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 01:55:44 compute-0 NetworkManager[49092]: <info>  [1764899744.0154] device (tap2799035c-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.051 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ab91b78e-12ae-43b7-a08c-ffbac88847a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.079 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaf7c27-f557-4fab-ae51-94d1ad3d9f5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 15952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423917, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.104 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[df49c28f-ad28-451c-9cfc-79b6ce7e61ab]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423923, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423923, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.106 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.108 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:55:44 compute-0 ceph-mon[192914]: pgmap v1390: 321 pgs: 321 active+clean; 234 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.871 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899744.8700614, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.872 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Started (Lifecycle Event)
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.895 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.904 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899744.8702343, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.904 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Paused (Lifecycle Event)
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.923 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.929 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.954 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] During sync_power_state the instance has a pending task (spawning). Skip.
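[note] "DB power_state: 0" and "VM power_state: 3" above are Nova's NOSTATE and PAUSED constants; the driver derives them from libvirt's domain state. A minimal sketch of that mapping, abbreviated to the states seen in this log:

    import libvirt

    # Abbreviated libvirt-state -> nova power_state mapping; the values
    # match nova.compute.power_state (NOSTATE=0, RUNNING=1, PAUSED=3).
    LIBVIRT_TO_NOVA = {
        libvirt.VIR_DOMAIN_NOSTATE: 0,
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
    }

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000004')
    state, _reason = dom.state()
    print(LIBVIRT_TO_NOVA.get(state, 0))  # would print 3 while paused
    conn.close()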
Dec 05 01:55:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:55:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:55:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:55:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:55:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 05 01:55:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:55:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:55:45 compute-0 podman[423985]: 2025-12-05 01:55:45.711011155 +0000 UTC m=+0.112222664 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:55:45 compute-0 podman[423988]: 2025-12-05 01:55:45.730961863 +0000 UTC m=+0.117163972 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 05 01:55:45 compute-0 podman[423986]: 2025-12-05 01:55:45.745372697 +0000 UTC m=+0.144222800 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:55:45 compute-0 podman[423987]: 2025-12-05 01:55:45.768716691 +0000 UTC m=+0.166147314 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
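
The four podman health_status records above share one shape: a timestamp, a 64-hex container ID, and a parenthesized key=value list that includes name, health_status and health_failing_streak. A minimal parsing sketch (field names are taken from the lines above; the helper itself is illustrative):

    import re
    from typing import Optional

    # Matches the podman health_status journal lines seen above.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<fields>.*)\)$"
    )

    def parse_health_line(line: str) -> Optional[dict]:
        """Pull container id, name, health_status and failing streak out of a line."""
        m = HEALTH_RE.search(line)
        if not m:
            return None
        out = {"cid": m.group("cid")}
        # config_data contains nested commas, so scan for each scalar key
        # rather than splitting the whole field list on ", ".
        for key in ("name", "health_status", "health_failing_streak"):
            km = re.search(rf"[,(] ?{key}=([^,)]+)", "(" + m.group("fields"))
            if km:
                out[key] = km.group(1)
        return out

For the first line above this yields name=multipathd, health_status=healthy, health_failing_streak=0.
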
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.973 349552 DEBUG nova.compute.manager [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.973 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG nova.compute.manager [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Processing event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.975 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
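
The Acquiring/acquired/released triplet at 01:55:45.973-974 is the standard DEBUG trace oslo.concurrency emits around a named lock; the "inner" in the lockutils.py:404/409/423 paths is the wrapper that the synchronized decorator installs. A sketch of the same pattern, using the per-instance events key from the lines above (illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    # Decorator form; its wrapper ("inner") logs the Acquiring/acquired/
    # released triplet seen in the journal.
    @lockutils.synchronized("3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events")
    def pop_event():
        pass  # critical section: pop the waiter for this instance's events

    # Context-manager form of the same primitive:
    with lockutils.lock("3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events"):
        pass

The waited/held figures (0.000s here) show the lock is uncontended and held only long enough to mutate the event dictionary.
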
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.982 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899745.98215, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.983 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Resumed (Lifecycle Event)
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.986 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.994 349552 INFO nova.virt.libvirt.driver [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance spawned successfully.
Dec 05 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.996 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.008 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.019 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.030 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.030 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.031 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.032 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.032 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.033 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.041 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.086 349552 INFO nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 8.62 seconds to spawn the instance on the hypervisor.
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.086 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.143 349552 INFO nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 9.87 seconds to build instance.
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.160 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
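
The three durations above are self-consistent: 8.62 s to spawn on the hypervisor, 9.87 s for the whole build, and the build lock held for 9.994 s, slightly longer than the build itself. A sketch that recomputes such intervals from the timestamps nova embeds after the syslog prefix (the start line below is a reconstruction, shown only to make the subtraction concrete):

    import re
    from datetime import datetime

    TS_RE = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+")

    def nova_ts(line: str) -> datetime:
        """Parse the timestamp nova prints in each log line."""
        return datetime.strptime(TS_RE.search(line).group(0),
                                 "%Y-%m-%d %H:%M:%S.%f")

    # End line is from the journal above; start is hypothetical (9.87 s earlier).
    end = "2025-12-05 01:55:46.143 349552 INFO ... Took 9.87 seconds to build instance."
    start = "2025-12-05 01:55:36.273 349552 DEBUG ... build starts (hypothetical)"
    print((nova_ts(end) - nova_ts(start)).total_seconds())  # 9.87
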
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.568 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 01:55:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 01:55:46 compute-0 ceph-mon[192914]: pgmap v1391: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 05 01:55:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
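
ceph-mgr publishes a pgmap summary roughly every two seconds and ceph-mon re-logs it, so these lines recur throughout the journal with the same fields. A sketch that turns one line into structured values (regex written against the exact layout above):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>.+?); "
        r"(?P<data>.+?) data, (?P<used>.+?) used, "
        r"(?P<avail>.+?) / (?P<total>.+?) avail"
    )

    line = ("pgmap v1391: 321 pgs: 321 active+clean; 234 MiB data, "
            "318 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s")
    m = PGMAP_RE.search(line)
    print(m.group("pgs"), m.group("states"), m.group("used"))
    # 321  321 active+clean  318 MiB

All 321 PGs stay active+clean throughout this window, so the cluster is healthy; only the rd/wr throughput tail varies.
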
Dec 05 01:55:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:47 compute-0 nova_compute[349548]: 2025-12-05 01:55:47.841 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.058 349552 DEBUG nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.058 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.060 349552 WARNING nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received unexpected event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with vm_state active and task_state None.
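
The WARNING above is a second delivery of the same network-vif-plugged event: the first copy (01:55:45.973) was consumed by the waiter inside the build, so when Neutron re-sent it at 01:55:48 there was no waiter left and the instance was already active with no task state. During a normal spawn this is benign. A log-scan sketch that surfaces such warnings (regex written against the line above):

    import re

    UNEXPECTED_RE = re.compile(
        r"\[instance: (?P<uuid>[0-9a-f-]{36})\] Received unexpected event "
        r"(?P<event>\S+) for instance with vm_state (?P<vm_state>\S+) "
        r"and task_state (?P<task_state>\S+)\."
    )

    def unexpected_events(lines):
        """Yield (instance uuid, event, vm_state) for each unexpected event."""
        for line in lines:
            m = UNEXPECTED_RE.search(line)
            if m:
                yield m.group("uuid"), m.group("event"), m.group("vm_state")
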
Dec 05 01:55:48 compute-0 ceph-mon[192914]: pgmap v1392: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Dec 05 01:55:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Dec 05 01:55:50 compute-0 ceph-mon[192914]: pgmap v1393: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Dec 05 01:55:51 compute-0 nova_compute[349548]: 2025-12-05 01:55:51.572 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Dec 05 01:55:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:52 compute-0 nova_compute[349548]: 2025-12-05 01:55:52.845 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:52 compute-0 ceph-mon[192914]: pgmap v1394: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Dec 05 01:55:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 850 KiB/s wr, 67 op/s
Dec 05 01:55:54 compute-0 ceph-mon[192914]: pgmap v1395: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 850 KiB/s wr, 67 op/s
Dec 05 01:55:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 60 op/s
Dec 05 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.186 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:55:56 compute-0 sshd-session[424092]: Connection reset by authenticating user root 45.135.232.92 port 32176 [preauth]
Dec 05 01:55:56 compute-0 nova_compute[349548]: 2025-12-05 01:55:56.577 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:56 compute-0 ceph-mon[192914]: pgmap v1396: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 60 op/s
Dec 05 01:55:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 55 op/s
Dec 05 01:55:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:55:57 compute-0 nova_compute[349548]: 2025-12-05 01:55:57.850 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:55:58 compute-0 sshd-session[424094]: Invalid user admin from 45.135.232.92 port 23804
Dec 05 01:55:58 compute-0 sshd-session[424094]: Connection reset by invalid user admin 45.135.232.92 port 23804 [preauth]
Dec 05 01:55:59 compute-0 ceph-mon[192914]: pgmap v1397: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 55 op/s
Dec 05 01:55:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 05 01:55:59 compute-0 podman[158197]: time="2025-12-05T01:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:55:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:55:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Dec 05 01:56:00 compute-0 sshd-session[424096]: Connection reset by authenticating user root 45.135.232.92 port 23812 [preauth]
Dec 05 01:56:01 compute-0 ceph-mon[192914]: pgmap v1398: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 05 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
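
These exporter errors recur on each scrape: openstack-network-exporter probes ovn-northd and ovsdb-server through their control sockets, but on a compute node running only ovn-controller and ovs-vswitchd those sockets do not exist, so the failures are expected noise rather than a fault. A sketch of the underlying check (the rundir paths are conventional OVS/OVN locations, an assumption, not taken from this log):

    import glob
    import os

    # Conventional OVS/OVN rundirs (assumed).
    RUNDIRS = ("/var/run/openvswitch", "/var/run/ovn")

    def has_ctl(daemon: str) -> bool:
        """True if a <daemon>.<pid>.ctl control socket exists in a rundir."""
        return any(glob.glob(os.path.join(d, f"{daemon}.*.ctl")) for d in RUNDIRS)

    print(has_ctl("ovn-northd"))  # False on a compute node -> the errors above
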
Dec 05 01:56:01 compute-0 nova_compute[349548]: 2025-12-05 01:56:01.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.077 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.116 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.116 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.118 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.120 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.122 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.122 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.227 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.229 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.238 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.241 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
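
Everything from "Running periodic task ComputeManager._sync_power_states" down to the last lock release above is one pass of a periodic task: it lists the four instance UUIDs on this host, then serializes each power-state check behind a per-UUID lock. The driving mechanism is oslo.service's periodic task decorator; a minimal sketch (the spacing value is illustrative, not nova's configured interval):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)  # illustrative interval
        def _sync_power_states(self, context):
            # One pass: enumerate local instances, then sync each behind a
            # per-UUID lock, matching the Triggering sync / Acquiring lock
            # lines above.
            pass

Note the 3611d2ae... instance built seconds earlier is already included in the sweep, and each per-UUID lock is held for only ~0.1 s.
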
Dec 05 01:56:02 compute-0 sshd-session[424098]: Connection reset by authenticating user root 45.135.232.92 port 23834 [preauth]
Dec 05 01:56:02 compute-0 podman[424101]: 2025-12-05 01:56:02.703051166 +0000 UTC m=+0.116586656 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:56:02 compute-0 podman[424100]: 2025-12-05 01:56:02.710024711 +0000 UTC m=+0.117864991 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 01:56:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.852 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:03 compute-0 ceph-mon[192914]: pgmap v1399: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec 05 01:56:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 29 op/s
Dec 05 01:56:04 compute-0 ceph-mon[192914]: pgmap v1400: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 29 op/s
Dec 05 01:56:04 compute-0 podman[424142]: 2025-12-05 01:56:04.705778564 +0000 UTC m=+0.111081902 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 01:56:04 compute-0 podman[424141]: 2025-12-05 01:56:04.729993602 +0000 UTC m=+0.134364874 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 05 01:56:05 compute-0 sshd-session[424120]: Invalid user admin from 45.135.232.92 port 23836
Dec 05 01:56:05 compute-0 sshd-session[424120]: Connection reset by invalid user admin 45.135.232.92 port 23836 [preauth]
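
The sshd-session lines between 01:55:56 and 01:56:05 are a password-guessing probe from a single address cycling the root and admin users. A sketch that tallies such attempts per source IP and user; the regex covers the three message shapes above, and each journal line counts once:

    import collections
    import re

    ATTEMPT_RE = re.compile(
        r"(?:authenticating user|[Ii]nvalid user) (?P<user>\S+)\D+"
        r"(?P<ip>\d{1,3}(?:\.\d{1,3}){3}) port \d+"
    )

    def tally(lines):
        """Count auth-probe journal lines keyed by (source ip, username)."""
        hits = collections.Counter()
        for line in lines:
            m = ATTEMPT_RE.search(line)
            if m:
                hits[(m.group("ip"), m.group("user"))] += 1
        return hits  # e.g. {("45.135.232.92", "root"): 3, ...}
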
Dec 05 01:56:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 01:56:06 compute-0 nova_compute[349548]: 2025-12-05 01:56:06.582 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:06 compute-0 ceph-mon[192914]: pgmap v1401: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec 05 01:56:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:56:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:07 compute-0 nova_compute[349548]: 2025-12-05 01:56:07.857 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:08 compute-0 ceph-mon[192914]: pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:56:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:56:09 compute-0 podman[424178]: 2025-12-05 01:56:09.736636796 +0000 UTC m=+0.131308229 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=)
Dec 05 01:56:10 compute-0 ceph-mon[192914]: pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 05 01:56:11 compute-0 nova_compute[349548]: 2025-12-05 01:56:11.586 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:56:12 compute-0 ceph-mon[192914]: pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:56:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:12 compute-0 nova_compute[349548]: 2025-12-05 01:56:12.861 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:56:14 compute-0 ovn_controller[89286]: 2025-12-05T01:56:14Z|00049|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec 05 01:56:14 compute-0 ceph-mon[192914]: pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 01:56:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:56:16
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', 'vms']
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:16 compute-0 nova_compute[349548]: 2025-12-05 01:56:16.589 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:16 compute-0 podman[424199]: 2025-12-05 01:56:16.70075982 +0000 UTC m=+0.106957136 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:56:16 compute-0 podman[424200]: 2025-12-05 01:56:16.713531428 +0000 UTC m=+0.113463099 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:56:16 compute-0 podman[424212]: 2025-12-05 01:56:16.733103326 +0000 UTC m=+0.093278374 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 01:56:16 compute-0 podman[424206]: 2025-12-05 01:56:16.74504528 +0000 UTC m=+0.135930788 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:56:16 compute-0 ceph-mon[192914]: pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Dec 05 01:56:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 341 B/s wr, 47 op/s
Dec 05 01:56:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:17 compute-0 nova_compute[349548]: 2025-12-05 01:56:17.865 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:19 compute-0 ceph-mon[192914]: pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 341 B/s wr, 47 op/s
Dec 05 01:56:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec 05 01:56:21 compute-0 ceph-mon[192914]: pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec 05 01:56:21 compute-0 nova_compute[349548]: 2025-12-05 01:56:21.592 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:21 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 01:56:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec 05 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
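[editor's note] The pair of nova lines above shows the _reclaim_queued_deletes periodic task starting and immediately bailing out. The guard it logs corresponds to the reclaim_instance_interval option in nova.conf, which defaults to 0; at 0 there is no deferred-delete window, so there is nothing queued to reclaim. A toy restatement of the logged check, not nova's actual code:

    # Toy restatement of the guard logged by ComputeManager._reclaim_queued_deletes.
    # reclaim_instance_interval is a [DEFAULT] nova.conf option; it defaults to 0.
    reclaim_instance_interval = 0
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
    else:
        # would look up SOFT_DELETED instances older than the interval and delete them
        pass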
Dec 05 01:56:22 compute-0 ovn_controller[89286]: 2025-12-05T01:56:22Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:64:51 192.168.0.169
Dec 05 01:56:22 compute-0 ovn_controller[89286]: 2025-12-05T01:56:22Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:64:51 192.168.0.169
Dec 05 01:56:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.869 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:23 compute-0 ceph-mon[192914]: pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec 05 01:56:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 243 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 934 KiB/s wr, 15 op/s
Dec 05 01:56:24 compute-0 nova_compute[349548]: 2025-12-05 01:56:24.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:25 compute-0 ceph-mon[192914]: pgmap v1410: 321 pgs: 321 active+clean; 243 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 934 KiB/s wr, 15 op/s
Dec 05 01:56:25 compute-0 nova_compute[349548]: 2025-12-05 01:56:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 254 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.328 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.329 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.596 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021977584529856093 of space, bias 1.0, pg target 0.6593275358956828 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
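[editor's note] The pg_autoscaler block above logs, per pool, its share of the 64411926528-byte raw capacity, a per-pool bias, and the resulting PG target. The logged targets are all consistent with target = usage_ratio * bias * 300; the 300 would follow from, e.g., mon_target_pg_per_osd=100 across 3 OSDs (that split is an assumption, only the product is visible in the log). The "quantized" figure then rounds to a power of two and applies per-pool floors such as pg_num_min and the current pg_num, which is why tiny targets still show 32. A quick check of the arithmetic against the logged values:

    # Reproduce the pg targets logged by pg_autoscaler (ratios copied from the log).
    TOTAL_TARGET_PGS = 300  # inferred from the logged numbers; see note above
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0021977584529856093, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for pool, (usage_ratio, bias) in pools.items():
        print(pool, usage_ratio * bias * TOTAL_TARGET_PGS)
    # .mgr -> 0.0021557249951162337 and vms -> 0.6593275358956828,
    # matching the "pg target" figures logged above exactly.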
Dec 05 01:56:27 compute-0 ceph-mon[192914]: pgmap v1411: 321 pgs: 321 active+clean; 254 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Dec 05 01:56:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.873 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.923 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.943 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.944 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.945 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:28 compute-0 nova_compute[349548]: 2025-12-05 01:56:28.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:29 compute-0 ceph-mon[192914]: pgmap v1412: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.096 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.098 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:56:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:56:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735773691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.573 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
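[editor's note] The resource tracker gathers Ceph capacity by shelling out to the ceph CLI, as the processutils lines above show; the mon's audit line confirms the df command being dispatched for client.openstack. A minimal standalone version of the same call, using the exact command line from the log (requires the openstack keyring and /etc/ceph/ceph.conf on the host):

    import json, subprocess

    # Same command the resource tracker runs above; `ceph df --format=json`
    # returns a "stats" object with cluster-wide byte counters.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])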
Dec 05 01:56:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.746 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.747 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.748 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 podman[158197]: time="2025-12-05T01:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:56:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.761 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.768 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.769 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.770 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.779 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:56:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
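[editor's note] The two podman[158197] access-log lines above are the libpod REST API answering requests over the podman socket; this is how the podman_exporter container, whose config_data earlier mounts /run/podman/podman.sock, scrapes container inventory and stats. A minimal sketch of the same containers/json call over the unix socket (socket path taken from that mount; HTTP/1.0 is used so the reply is not chunked):

    import json, socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                 b"Host: d\r\n\r\n")
    resp = b""
    while chunk := sock.recv(65536):
        resp += chunk
    sock.close()
    headers, _, body = resp.partition(b"\r\n\r\n")
    containers = json.loads(body)
    print(len(containers), "containers")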
Dec 05 01:56:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/735773691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.304 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.307 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3305MB free_disk=59.855751037597656GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.309 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.405 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.406 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.407 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.527 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:56:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:56:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431019213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.027 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.050 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
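[editor's note] The inventory above is what placement schedules against: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Applied to the logged figures, a quick check:

    # Capacity as placement computes it from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2 -- consistent with the
    # "Total usable vcpus: 8, total allocated vcpus: 4" view above, where four
    # 1-vCPU instances fit comfortably under the 4.0 overcommit.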
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.079 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.081 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:56:31 compute-0 ceph-mon[192914]: pgmap v1413: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3431019213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
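[editor's note] The exporter errors above all reduce to missing appctl control sockets: ovn-northd does not run on a compute node, and the ovsdb-server/datapath probes point at sockets that do not exist here. A quick way to list which control sockets are actually present, using the host-side runtime dirs from the volume mounts logged earlier (/var/run/openvswitch and /var/lib/openvswitch/ovn):

    import glob

    # appctl control sockets are *.ctl files named <daemon>.<pid>.ctl; the
    # absence of an ovn-northd socket here is expected on a compute-only node.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")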
Dec 05 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:32 compute-0 nova_compute[349548]: 2025-12-05 01:56:32.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:56:32 compute-0 ceph-mon[192914]: pgmap v1414: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:32 compute-0 nova_compute[349548]: 2025-12-05 01:56:32.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:33 compute-0 podman[424329]: 2025-12-05 01:56:33.740632671 +0000 UTC m=+0.140742423 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 01:56:33 compute-0 podman[424330]: 2025-12-05 01:56:33.74884571 +0000 UTC m=+0.140248768 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 01:56:34 compute-0 ceph-mon[192914]: pgmap v1415: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec 05 01:56:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 585 KiB/s wr, 42 op/s
Dec 05 01:56:35 compute-0 podman[424370]: 2025-12-05 01:56:35.67739049 +0000 UTC m=+0.086887264 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 01:56:35 compute-0 podman[424371]: 2025-12-05 01:56:35.710834836 +0000 UTC m=+0.108962192 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 01:56:36 compute-0 nova_compute[349548]: 2025-12-05 01:56:36.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:37 compute-0 ceph-mon[192914]: pgmap v1416: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 585 KiB/s wr, 42 op/s
Dec 05 01:56:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 88 KiB/s wr, 22 op/s
Dec 05 01:56:37 compute-0 sudo[424409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:37 compute-0 sudo[424409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:37 compute-0 sudo[424409]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:37 compute-0 sudo[424434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:56:37 compute-0 sudo[424434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:37 compute-0 sudo[424434]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:37 compute-0 nova_compute[349548]: 2025-12-05 01:56:37.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:37 compute-0 sudo[424459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:37 compute-0 sudo[424459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:37 compute-0 sudo[424459]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:38 compute-0 sudo[424484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 01:56:38 compute-0 sudo[424484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:38 compute-0 ceph-mon[192914]: pgmap v1417: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 88 KiB/s wr, 22 op/s
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.325 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.328 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.330 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.332 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.333 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 01:56:38 compute-0 sudo[424484]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:56:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:56:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:38 compute-0 sudo[424529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:38 compute-0 sudo[424529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:38 compute-0 sudo[424529]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:38 compute-0 sudo[424554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:56:38 compute-0 sudo[424554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:38 compute-0 sudo[424554]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:38 compute-0 sudo[424579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:38 compute-0 sudo[424579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:38 compute-0 sudo[424579]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:38 compute-0 sudo[424604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:56:38 compute-0 sudo[424604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:39 compute-0 sudo[424604]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:56:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e4fe887e-5743-4df5-8506-014503a75178 does not exist
Dec 05 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 586d32f4-310e-4074-a718-e859376d22bc does not exist
Dec 05 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1ce057b9-988d-4d7c-a59b-c2e26647e00d does not exist
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.929 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 05 Dec 2025 01:56:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d3424ee8-b311-4519-9f51-5448cf4cd270 x-openstack-request-id: req-d3424ee8-b311-4519-9f51-5448cf4cd270 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.930 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3611d2ae-da33-4e55-aec7-0bec88d3b4e0", "name": "vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:55:34Z", "updated": "2025-12-05T01:55:46Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.169", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:64:51"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:64:51"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:55:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.930 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0 used request id req-d3424ee8-b311-4519-9f51-5448cf4cd270 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.933 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
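[editor's note] The REQ/RESP pair above shows the agent fetching server metadata from the Nova API before building the instance record. A minimal sketch of reproducing that GET with python-novaclient follows; only the server UUID and microversion (2.1) come from the REQ line itself, while the Keystone URL and credentials are placeholders, not values from this log.

    # Hedged sketch: replay the logged GET /v2.1/servers/<uuid> call.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed endpoint
        username='ceilometer', password='REDACTED',                  # assumed credentials
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    # Matches the X-OpenStack-Nova-API-Version: 2.1 header in the REQ line.
    nova = client.Client('2.1', session=sess)
    server = nova.servers.get('3611d2ae-da33-4e55-aec7-0bec88d3b4e0')
    print(server.name, server.metadata)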
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:56:39.935319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:56:39.940786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.979 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.980 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.981 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.021 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.023 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.061 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.062 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.106 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.106 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.107 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:56:40.109550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>]
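[editor's note] The ERROR above is the polling manager permanently excluding these resources for network.incoming.bytes.rate after the libvirt inspector reported no data and the pollster raised PollsterPermanentError. A minimal sketch of that pattern, assuming the ceilometer.polling.plugin_base API as published upstream (class and method names here are my reading of it, not taken from this log):

    # Hedged sketch of the PollsterPermanentError blacklisting pattern.
    from ceilometer.polling import plugin_base

    class RateSketchPollster(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # Pretend the inspector cannot provide rate data for anything.
            failed = list(resources)
            if failed:
                # The manager catches this and stops polling the listed
                # resources with this pollster, as logged above.
                raise plugin_base.PollsterPermanentError(failed)
            return []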
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:56:40.112683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:56:40.113586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.259 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.260 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.261 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 sudo[424658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:40 compute-0 sudo[424658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:40 compute-0 sudo[424658]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.344 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.344 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.345 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:56:40 compute-0 sudo[424684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:56:40 compute-0 sudo[424684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:40 compute-0 sudo[424684]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:40 compute-0 podman[424682]: 2025-12-05 01:56:40.400647517 +0000 UTC m=+0.108484299 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, name=ubi9, managed_by=edpm_ansible)
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.433 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.434 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.434 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 sudo[424728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:40 compute-0 sudo[424728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:40 compute-0 sudo[424728]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.513 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.513 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.514 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.518 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.519 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.519 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.520 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:56:40.516997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.520 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.521 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.522 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.522 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.527 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.527 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:56:40.526656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.529 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.529 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.530 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.530 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.531 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.531 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.532 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.532 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:56:40.534398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:56:40.539365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.543 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
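The block above is one complete per-pollster cycle, and it repeats for every meter below: discover resources (manager.py:294), check whether the pollster needs coordination (manager.py:333/355), record a heartbeat (manager.py:636), then convert each resource's stats into samples (_stats_to_sample, pollsters/__init__.py:108). Note the two PIDs in the oslo.log columns: worker 14 does the polling while worker 12 persists the heartbeat timestamps (_update_status, manager.py:502). A minimal sketch of that flow, with hypothetical names rather than ceilometer's actual classes:

    import datetime

    class Pollster:
        def __init__(self, name, needs_coordination=False):
            self.name = name
            self.needs_coordination = needs_coordination

        def get_samples(self, resources):
            # one sample per resource/device, as in _stats_to_sample above
            for res in resources:
                yield {"resource": res, "meter": self.name, "volume": 0}

    def run_cycle(pollster, discover):
        resources = discover()                  # "Executing discovery process ..."
        if pollster.needs_coordination:         # "Checking if we need coordination ..."
            raise NotImplementedError("hash-ring partitioning would go here")
        # "Pollster heartbeat update: <meter>"
        heartbeat = datetime.datetime.now(datetime.timezone.utc)
        return heartbeat, list(pollster.get_samples(resources))

    hb, samples = run_cycle(Pollster("disk.device.write.bytes"),
                            discover=lambda: ["instance-a", "instance-b"])
    print(hb.isoformat(), len(samples), "samples")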
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:56:40.544763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.584 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.611 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 sudo[424753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:56:40 compute-0 sudo[424753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
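Interleaved with the polling, the sudo line above shows cephadm driving an OSD deployment: it runs ceph-volume lvm batch against three pre-created logical volumes, with --no-systemd because cephadm manages the OSD units itself. A sketch of just the in-container argv, assembled from the values visible in the log line (cephadm additionally wraps this with the container --image, --fsid and "--config-json -" seen above):

    lvs = [
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ]
    argv = ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
            "--yes", "--no-systemd"]
    print(" ".join(argv))
    # import subprocess; subprocess.run(argv, check=True)
    # left commented out: "lvm batch" creates OSDs and is destructive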
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.640 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.671 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
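The power.state volume of 1 reported for all four instances matches libvirt's VIR_DOMAIN_RUNNING. A minimal check with libvirt-python, assuming a local qemu:///system hypervisor and reusing an instance UUID from the log:

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5")
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True for a running guest
    conn.close()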
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:56:40.673296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9233370301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 7901573506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
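Each instance emits three disk.device.write.* values because the per-device pollsters produce one sample per attached block device; the write-latency figures are cumulative nanoseconds (7184458071 ns is roughly 7.2 s of total write time). A sketch reading the same counters from libvirt; the device names here are assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5")
    for dev in ("vda", "vdb"):  # hypothetical device names
        stats = dom.blockStatsFlags(dev)
        # wr_bytes: cumulative bytes written; wr_total_times: cumulative ns
        print(dev, stats.get("wr_bytes"), stats.get("wr_total_times"))
    conn.close()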
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.682 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
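Every cycle logs hashrings of [None] because no polling source here requires coordination, so this single agent polls everything locally. When several agents do share a source, resources are partitioned with a hash ring so each resource has exactly one owner. A simplified stand-in for that idea (not tooz's actual API):

    import bisect
    import hashlib

    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, members, replicas=100):
            # each member appears many times on the ring for even spread
            self.ring = sorted((_h("%s-%d" % (m, i)), m)
                               for m in members for i in range(replicas))
            self.keys = [k for k, _ in self.ring]

        def owner(self, resource: str) -> str:
            idx = bisect.bisect(self.keys, _h(resource)) % len(self.keys)
            return self.ring[idx][1]

    ring = HashRing(["agent-0", "agent-1"])
    print(ring.owner("7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"))  # one owner only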
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:56:40.678665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:56:40.683544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.688 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.692 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.695 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.699 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 / tap2799035c-b9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
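The "No delta meter predecessor" line above is the first sighting of tap2799035c-b9: delta meters are computed by differencing consecutive cumulative readings, so a fresh vNIC has nothing to subtract and the baseline is seeded instead. A minimal cache sketch under that assumption (returning 0 for the first poll, which matches the 0 delta this instance reports further down):

    _prev: dict[tuple[str, str], int] = {}

    def vnic_delta(instance: str, device: str, reading: int) -> int:
        key = (instance, device)
        if key not in _prev:
            # "No delta meter predecessor for <uuid> / <tap...>"
            _prev[key] = reading
            return 0
        d, _prev[key] = reading - _prev[key], reading
        return d

    print(vnic_delta("3611d2ae", "tap2799035c-b9", 1486))  # 0: baseline seeded
    print(vnic_delta("3611d2ae", "tap2799035c-b9", 1570))  # 84 on the next poll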
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:56:40.701084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.704 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.704 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:56:40.703331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.707 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.707 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
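The disk.device.allocation samples again come out per device: each instance shows two 1 GiB (1073741824-byte) volumes plus one small (~570 KiB) device, presumably a config-drive-sized disk. libvirt exposes these figures through blockInfo() as (capacity, allocation, physical), all in bytes; a sketch with an assumed device name:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
    capacity, allocation, physical = dom.blockInfo("vda")  # device name assumed
    print(capacity, allocation, physical)
    conn.close()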
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 7370 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:56:40.708654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:56:40.710673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:56:40.712650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>]
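This ERROR is the agent's permanent-failure path rather than a transient fault: the libvirt inspector can never supply data for the *.rate meters, so the affected resources are blacklisted for that pollster instead of failing on every cycle. A hypothetical sketch of that bookkeeping:

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.resources = resources

    blacklist: set[str] = set()

    def poll(resources, get_samples):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as exc:
            # "Prevent pollster ... from polling [...] anymore!"
            blacklist.update(exc.resources)
            return []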
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
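memory.usage is reported in MiB (~49 MiB per guest above). With libvirt it can be derived from the balloon driver's memoryStats(); the exact formula below (available minus unused, with rss as a fallback) is an assumption for illustration:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b82c3f0e-6d6a-4a7b-9556-b609ad63e497")
    stats = dom.memoryStats()  # values are reported in KiB
    if "available" in stats and "unused" in stats:
        usage_mib = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mib = stats["rss"] / 1024.0
    print(round(usage_mib, 2))
    conn.close()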
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:56:40.714657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:56:40.715752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:56:40.717669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 63 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:56:40.719706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:56:40.722339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:56:40.724530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 35830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 42350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 337280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 35680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
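The cpu meter's volume is cumulative guest CPU time in nanoseconds, which is why the values above are large and only ever grow. A utilization rate has to be derived from two consecutive samples; a short sketch, with the later sample value assumed for illustration:

    def cpu_util_percent(ns_prev, ns_now, seconds_between, vcpus=1):
        # Fraction of wall-clock time the guest's vCPUs were busy, as percent.
        return 100.0 * (ns_now - ns_prev) / (seconds_between * vcpus * 1e9)

    # First value is instance b82c3f0e's sample above; the value 300 s later
    # is hypothetical: 0.3 s of CPU over 300 s of wall clock -> 0.1 %.
    print(cpu_util_percent(337_280_000_000, 337_580_000_000, 300))
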
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:56:40.726718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:56:40.728416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.729 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
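Note the interleaved thread ids throughout this burst: worker 14 emits the "Pollster heartbeat update" line while thread 12 later logs "Updated heartbeat" with the recorded timestamp. A sketch of that producer/consumer split, assuming (as an illustration, not ceilometer's actual design) a queue between the polling worker and a status thread:

    import queue
    import threading
    import time
    from datetime import datetime, timezone

    beats = queue.Queue()
    status = {}

    def status_thread():
        # Consumer: records when each pollster last reported in.
        while True:
            name = beats.get()
            status[name] = datetime.now(timezone.utc).isoformat()
            print(f"Updated heartbeat for {name} ({status[name]})")

    threading.Thread(target=status_thread, daemon=True).start()
    beats.put("network.outgoing.packets.error")   # producer side, per poll
    time.sleep(0.1)                               # let the consumer drain
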
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.024742534 +0000 UTC m=+0.038995453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.43877657 +0000 UTC m=+0.453029479 container create 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:56:41 compute-0 ceph-mon[192914]: pgmap v1418: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:56:41 compute-0 systemd[1]: Started libpod-conmon-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope.
Dec 05 01:56:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:41 compute-0 nova_compute[349548]: 2025-12-05 01:56:41.605 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.753482113 +0000 UTC m=+0.767735012 container init 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.764666527 +0000 UTC m=+0.778919416 container start 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.768972157 +0000 UTC m=+0.783225086 container attach 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:56:41 compute-0 optimistic_keldysh[424830]: 167 167
Dec 05 01:56:41 compute-0 systemd[1]: libpod-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope: Deactivated successfully.
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.774113391 +0000 UTC m=+0.788366290 container died 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-57871eb64a3b10a7800d51cba02a93e09211b35c584f4d9adc276ee0ec14966f-merged.mount: Deactivated successfully.
Dec 05 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.978533796 +0000 UTC m=+0.992786685 container remove 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:56:42 compute-0 systemd[1]: libpod-conmon-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope: Deactivated successfully.
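The create/init/start/attach/died/remove sequence above is the footprint of a short-lived "podman run --rm" helper, and the "167 167" that optimistic_keldysh printed is the ceph uid/gid probed from inside the image. A sketch of an equivalent invocation; the stat target is an assumption inferred from the printed output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container from the pinned image and stat a ceph-owned
    # path; "167 167" is the ceph user/group id inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())
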
Dec 05 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.218059904 +0000 UTC m=+0.046554565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.33647503 +0000 UTC m=+0.164969651 container create 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:56:42 compute-0 systemd[1]: Started libpod-conmon-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope.
Dec 05 01:56:42 compute-0 ceph-mon[192914]: pgmap v1419: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.51141761 +0000 UTC m=+0.339912261 container init 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.54427368 +0000 UTC m=+0.372768301 container start 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.550743641 +0000 UTC m=+0.379238302 container attach 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:56:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:42 compute-0 nova_compute[349548]: 2025-12-05 01:56:42.883 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:43 compute-0 awesome_galois[424870]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:56:43 compute-0 awesome_galois[424870]: --> relative data size: 1.0
Dec 05 01:56:43 compute-0 awesome_galois[424870]: --> All data devices are unavailable
Dec 05 01:56:43 compute-0 systemd[1]: libpod-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Deactivated successfully.
Dec 05 01:56:43 compute-0 systemd[1]: libpod-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Consumed 1.176s CPU time.
Dec 05 01:56:43 compute-0 podman[424855]: 2025-12-05 01:56:43.8118478 +0000 UTC m=+1.640342521 container died 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 01:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a-merged.mount: Deactivated successfully.
Dec 05 01:56:43 compute-0 podman[424855]: 2025-12-05 01:56:43.890235355 +0000 UTC m=+1.718729956 container remove 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 01:56:43 compute-0 systemd[1]: libpod-conmon-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Deactivated successfully.
Dec 05 01:56:43 compute-0 sudo[424753]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:44 compute-0 sudo[424910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:44 compute-0 sudo[424910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:44 compute-0 sudo[424910]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:44 compute-0 sudo[424935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:56:44 compute-0 sudo[424935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:44 compute-0 sudo[424935]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:44 compute-0 sudo[424960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:44 compute-0 sudo[424960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:44 compute-0 sudo[424960]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:44 compute-0 sudo[424985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:56:44 compute-0 sudo[424985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:44 compute-0 podman[425047]: 2025-12-05 01:56:44.784735225 +0000 UTC m=+0.048297404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:56:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:56:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:56:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:56:45 compute-0 ceph-mon[192914]: pgmap v1420: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.446585511 +0000 UTC m=+0.710147700 container create d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:56:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:45 compute-0 systemd[1]: Started libpod-conmon-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope.
Dec 05 01:56:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.825020219 +0000 UTC m=+1.088582458 container init d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.838188518 +0000 UTC m=+1.101750667 container start d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 01:56:45 compute-0 zealous_kepler[425062]: 167 167
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.842788257 +0000 UTC m=+1.106350486 container attach d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:56:45 compute-0 systemd[1]: libpod-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope: Deactivated successfully.
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.845153433 +0000 UTC m=+1.108715582 container died d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:56:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c4ec6db6c38af05f41bb09a82824e46d237b84147e83ead80d191a3c0bd94df-merged.mount: Deactivated successfully.
Dec 05 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.901444429 +0000 UTC m=+1.165006578 container remove d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:56:45 compute-0 systemd[1]: libpod-conmon-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope: Deactivated successfully.
Dec 05 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.110591557 +0000 UTC m=+0.040639509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.134720903 +0000 UTC m=+0.064768865 container create 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 01:56:46 compute-0 systemd[1]: Started libpod-conmon-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope.
Dec 05 01:56:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.297541522 +0000 UTC m=+0.227589464 container init 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 01:56:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:56:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:56:46 compute-0 ceph-mon[192914]: pgmap v1421: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.311185885 +0000 UTC m=+0.241233827 container start 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.317359347 +0000 UTC m=+0.247407359 container attach 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 01:56:46 compute-0 nova_compute[349548]: 2025-12-05 01:56:46.607 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:47 compute-0 competent_galileo[425103]: {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     "0": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "devices": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "/dev/loop3"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             ],
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_name": "ceph_lv0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_size": "21470642176",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "name": "ceph_lv0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "tags": {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_name": "ceph",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.crush_device_class": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.encrypted": "0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_id": "0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.vdo": "0"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             },
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "vg_name": "ceph_vg0"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         }
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     ],
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     "1": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "devices": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "/dev/loop4"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             ],
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_name": "ceph_lv1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_size": "21470642176",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "name": "ceph_lv1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "tags": {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_name": "ceph",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.crush_device_class": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.encrypted": "0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_id": "1",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.vdo": "0"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             },
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "vg_name": "ceph_vg1"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         }
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     ],
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     "2": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "devices": [
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "/dev/loop5"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             ],
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_name": "ceph_lv2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_size": "21470642176",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "name": "ceph_lv2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "tags": {
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.cluster_name": "ceph",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.crush_device_class": "",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.encrypted": "0",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osd_id": "2",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:                 "ceph.vdo": "0"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             },
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "type": "block",
Dec 05 01:56:47 compute-0 competent_galileo[425103]:             "vg_name": "ceph_vg2"
Dec 05 01:56:47 compute-0 competent_galileo[425103]:         }
Dec 05 01:56:47 compute-0 competent_galileo[425103]:     ]
Dec 05 01:56:47 compute-0 competent_galileo[425103]: }
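[Note] The JSON block above (printed by the short-lived `competent_galileo` container) has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each holding the backing LV with its `ceph.*` tags. A minimal parsing sketch, assuming the JSON was saved to a file (the filename `lvm_list.json` is illustrative):

```python
import json

# Parse ceph-volume lvm list JSON (top-level keys are OSD ids as strings)
# and map each OSD to its LV path, physical devices, and osd_fsid tag.
with open("lvm_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid')}")
```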
Dec 05 01:56:47 compute-0 systemd[1]: libpod-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope: Deactivated successfully.
Dec 05 01:56:47 compute-0 conmon[425103]: conmon 06a4f352c74d481edf2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope/container/memory.events
Dec 05 01:56:47 compute-0 podman[425087]: 2025-12-05 01:56:47.222384533 +0000 UTC m=+1.152432506 container died 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:56:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4-merged.mount: Deactivated successfully.
Dec 05 01:56:47 compute-0 nova_compute[349548]: 2025-12-05 01:56:47.885 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:48 compute-0 podman[425087]: 2025-12-05 01:56:48.400788965 +0000 UTC m=+2.330836927 container remove 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:56:48 compute-0 systemd[1]: libpod-conmon-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope: Deactivated successfully.
Dec 05 01:56:48 compute-0 sudo[424985]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:48 compute-0 podman[425113]: 2025-12-05 01:56:48.502690909 +0000 UTC m=+1.211586172 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 01:56:48 compute-0 podman[425122]: 2025-12-05 01:56:48.530035004 +0000 UTC m=+1.251832058 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Dec 05 01:56:48 compute-0 podman[425120]: 2025-12-05 01:56:48.537123813 +0000 UTC m=+1.266907201 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:56:48 compute-0 podman[425121]: 2025-12-05 01:56:48.570733154 +0000 UTC m=+1.307249461 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
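[Note] The `health_status=healthy` events above are podman's periodic healthchecks running the `/openstack/healthcheck` scripts mounted into each container. The same check can be re-run on demand with `podman healthcheck run`, which exits 0 when the check passes; a small sketch using the container names from this log:

```python
import subprocess

# Re-run each container's configured healthcheck (run as the user that owns
# the containers, root here); exit status 0 means the check passed.
for name in ("multipathd", "ovn_controller", "node_exporter"):
    result = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True)
    state = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
    print(f"{name}: {state}")
```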
Dec 05 01:56:48 compute-0 sudo[425179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:48 compute-0 sudo[425179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:48 compute-0 sudo[425179]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:48 compute-0 sudo[425235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:56:48 compute-0 sudo[425235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:48 compute-0 sudo[425235]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:48 compute-0 ceph-mon[192914]: pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:48 compute-0 sudo[425260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:48 compute-0 sudo[425260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:48 compute-0 sudo[425260]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:48 compute-0 sudo[425285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:56:48 compute-0 sudo[425285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.419257938 +0000 UTC m=+0.077983485 container create 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.389001361 +0000 UTC m=+0.047726978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:49 compute-0 systemd[1]: Started libpod-conmon-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope.
Dec 05 01:56:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.557576892 +0000 UTC m=+0.216302479 container init 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.567774287 +0000 UTC m=+0.226499814 container start 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.573653692 +0000 UTC m=+0.232379259 container attach 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 01:56:49 compute-0 hopeful_shaw[425365]: 167 167
Dec 05 01:56:49 compute-0 systemd[1]: libpod-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope: Deactivated successfully.
Dec 05 01:56:49 compute-0 conmon[425365]: conmon 96e88a2d8def95930a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope/container/memory.events
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.578082426 +0000 UTC m=+0.236807983 container died 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae67ae8a120ff661d5e4551e7ce1dc8adfa8bc747b22a72841b09899b2e1048-merged.mount: Deactivated successfully.
Dec 05 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.632011756 +0000 UTC m=+0.290737293 container remove 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 01:56:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:49 compute-0 systemd[1]: libpod-conmon-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope: Deactivated successfully.
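[Note] The `hopeful_shaw` container above lived for well under a second and printed only `167 167`. That is consistent with cephadm probing the uid/gid that owns /var/lib/ceph inside the image before invoking ceph-volume (167:167 is the `ceph` user/group in upstream Ceph images). A plausible equivalent probe, under that assumption, using the image digest from the log:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Assumed equivalent of cephadm's uid/gid probe: run stat inside the image
# against /var/lib/ceph. Expected output on this image: "167 167".
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat",
     IMAGE, "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())
```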
Dec 05 01:56:49 compute-0 podman[425387]: 2025-12-05 01:56:49.965480805 +0000 UTC m=+0.092845741 container create a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:49.940271509 +0000 UTC m=+0.067636455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:56:50 compute-0 systemd[1]: Started libpod-conmon-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope.
Dec 05 01:56:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.141794553 +0000 UTC m=+0.269159529 container init a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.163587873 +0000 UTC m=+0.290952839 container start a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.1706103 +0000 UTC m=+0.297975256 container attach a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:56:50 compute-0 ceph-mon[192914]: pgmap v1423: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:51 compute-0 wonderful_cray[425403]: {
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_id": 0,
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "type": "bluestore"
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     },
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_id": 1,
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "type": "bluestore"
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     },
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_id": 2,
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:         "type": "bluestore"
Dec 05 01:56:51 compute-0 wonderful_cray[425403]:     }
Dec 05 01:56:51 compute-0 wonderful_cray[425403]: }
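[Note] The `wonderful_cray` output above is the result of the `ceph-volume ... raw list --format json` call issued at 01:56:48: a map of osd_uuid to {ceph_fsid, device, osd_id, type}. A sketch that cross-checks it against the earlier lvm listing by matching osd_uuid to the `ceph.osd_fsid` LV tag (filenames are illustrative, assuming both JSON documents were saved):

```python
import json

# Every osd_uuid reported by `raw list` should appear as a ceph.osd_fsid
# tag on exactly one LV in the `lvm list` output captured above.
raw = json.load(open("raw_list.json"))   # keyed by osd_uuid
lvm = json.load(open("lvm_list.json"))   # keyed by osd_id

lvm_fsids = {lv["tags"]["ceph.osd_fsid"]: osd_id
             for osd_id, lvs in lvm.items()
             for lv in lvs}

for osd_uuid, info in raw.items():
    match = lvm_fsids.get(osd_uuid, "<not in lvm list>")
    print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']} "
          f"-> lvm entry: {match}")
```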
Dec 05 01:56:51 compute-0 systemd[1]: libpod-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Deactivated successfully.
Dec 05 01:56:51 compute-0 podman[425387]: 2025-12-05 01:56:51.428734314 +0000 UTC m=+1.556099260 container died a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 01:56:51 compute-0 systemd[1]: libpod-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Consumed 1.257s CPU time.
Dec 05 01:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7-merged.mount: Deactivated successfully.
Dec 05 01:56:51 compute-0 podman[425387]: 2025-12-05 01:56:51.547117469 +0000 UTC m=+1.674482435 container remove a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 01:56:51 compute-0 systemd[1]: libpod-conmon-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Deactivated successfully.
Dec 05 01:56:51 compute-0 sudo[425285]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:51 compute-0 nova_compute[349548]: 2025-12-05 01:56:51.610 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:56:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:56:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
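[Note] The two `config-key set` commands above are the mgr cephadm module caching this host's freshly scanned device inventory under `mgr/cephadm/host.compute-0...` keys in the monitor's config-key store. The cached value can be read back with `ceph config-key get`; a sketch, assuming the stored blob is JSON (true for current cephadm releases):

```python
import json
import subprocess

# Key name taken from the handle_command line above.
key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(["ceph", "config-key", "get", key],
                     capture_output=True, text=True, check=True)
inventory = json.loads(out.stdout)          # assumption: value is JSON
print(json.dumps(inventory, indent=2)[:500])  # preview the cached inventory
```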
Dec 05 01:56:51 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7d13f7ae-d493-41ee-8ae5-22aaba576c2c does not exist
Dec 05 01:56:51 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d438c367-67fd-4a97-9c7c-88ed7b6b26ba does not exist
Dec 05 01:56:51 compute-0 sudo[425446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:56:51 compute-0 sudo[425446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:51 compute-0 sudo[425446]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:51 compute-0 sudo[425471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:56:51 compute-0 sudo[425471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:56:51 compute-0 sudo[425471]: pam_unix(sudo:session): session closed for user root
Dec 05 01:56:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:52 compute-0 ceph-mon[192914]: pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:56:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:52 compute-0 nova_compute[349548]: 2025-12-05 01:56:52.891 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:54 compute-0 ceph-mon[192914]: pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:56:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 05 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.188 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.188 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.189 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
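[Note] The three ovn_metadata_agent lines above are oslo.concurrency's lockutils tracing a named in-process lock around `ProcessMonitor._check_child_processes`: acquiring, acquired (waited 0.001s), released (held 0.001s). A minimal sketch of the same pattern with the public API (the function body is illustrative, not the agent's real code):

```python
from oslo_concurrency import lockutils

# A named lock guarding a periodic check; the DEBUG acquire/release lines
# in the log above are emitted by lockutils itself around calls like this.
@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # the real method respawns dead child processes

check_child_processes()
```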
Dec 05 01:56:56 compute-0 nova_compute[349548]: 2025-12-05 01:56:56.614 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:56 compute-0 ceph-mon[192914]: pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 05 01:56:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:56:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:56:57 compute-0 nova_compute[349548]: 2025-12-05 01:56:57.899 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:56:58 compute-0 ceph-mon[192914]: pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:56:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:56:59 compute-0 podman[158197]: time="2025-12-05T01:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:56:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:56:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
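[Note] The `podman[158197]` lines above are the podman API service logging HTTP requests from a Go client, most likely the prometheus-podman-exporter given its `CONTAINER_HOST=unix:///run/podman/podman.sock` setting logged earlier. The same libpod endpoint can be queried directly over the unix socket with only the standard library; a sketch:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Just enough HTTP-over-unix-socket for the libpod REST API."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

# Same endpoint as the GET logged above; socket path from the exporter config.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Names"], c["State"])
```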
Dec 05 01:57:00 compute-0 ceph-mon[192914]: pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:57:01 compute-0 openstack_network_exporter[366555]: 
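[Note] The exporter errors above all reduce to "no control socket files found": openstack-network-exporter talks to ovn-northd, ovsdb-server, and the OVS datapath through their unix control sockets, and some are absent here (ovn-northd normally runs on the control plane, so its absence on a compute node is expected). A quick existence check for the socket locations implied by the exporter's volume mounts logged earlier; the paths are deployment-specific assumptions:

```python
import glob

# Control-socket locations inferred from the exporter's volume mounts
# (/var/run/openvswitch and /var/lib/openvswitch/ovn); adjust per deployment.
patterns = {
    "ovs-vswitchd":   "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovsdb-server":   "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd":     "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
    "ovn-controller": "/var/lib/openvswitch/ovn/ovn-controller.*.ctl",
}
for daemon, pattern in patterns.items():
    hits = glob.glob(pattern)
    print(f"{daemon:15} {hits or 'no control socket found'}")
```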
Dec 05 01:57:01 compute-0 nova_compute[349548]: 2025-12-05 01:57:01.617 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:02 compute-0 ceph-mon[192914]: pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:02 compute-0 nova_compute[349548]: 2025-12-05 01:57:02.904 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:04 compute-0 podman[425496]: 2025-12-05 01:57:04.689465579 +0000 UTC m=+0.090442674 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:57:04 compute-0 podman[425497]: 2025-12-05 01:57:04.690397335 +0000 UTC m=+0.083371906 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:57:04 compute-0 ceph-mon[192914]: pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:06 compute-0 nova_compute[349548]: 2025-12-05 01:57:06.621 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:06 compute-0 podman[425536]: 2025-12-05 01:57:06.716491696 +0000 UTC m=+0.119894388 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:57:06 compute-0 podman[425537]: 2025-12-05 01:57:06.743582365 +0000 UTC m=+0.132468901 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 05 01:57:06 compute-0 ceph-mon[192914]: pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 01:57:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s wr, 0 op/s
Dec 05 01:57:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:07 compute-0 nova_compute[349548]: 2025-12-05 01:57:07.908 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:08 compute-0 ceph-mon[192914]: pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s wr, 0 op/s
Dec 05 01:57:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:10 compute-0 podman[425575]: 2025-12-05 01:57:10.715313185 +0000 UTC m=+0.122255595 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec 05 01:57:10 compute-0 ceph-mon[192914]: pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:11 compute-0 nova_compute[349548]: 2025-12-05 01:57:11.625 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:12 compute-0 ceph-mon[192914]: pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:12 compute-0 nova_compute[349548]: 2025-12-05 01:57:12.911 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:15 compute-0 ceph-mon[192914]: pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:57:16
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log']
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
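[Note] The balancer lines above are one periodic pass of the mgr balancer module: mode `upmap`, `max misplaced 0.050000` (matching the default `target_max_misplaced_ratio`), and `prepared 0/10 changes`, i.e. the listed pools are already balanced and no upmap entries were generated. The same state is visible from the CLI; a sketch:

```python
import json
import subprocess

# `ceph balancer status` reports the module's mode, whether it is active,
# and the outcome of the last optimization pass.
out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                     capture_output=True, text=True, check=True)
status = json.loads(out.stdout)
print(f"mode={status.get('mode')} active={status.get('active')} "
      f"result={status.get('optimize_result')!r}")
```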
Dec 05 01:57:16 compute-0 nova_compute[349548]: 2025-12-05 01:57:16.628 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
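[Note] The rbd_support lines above show the mgr module reloading its trash-purge and mirror-snapshot schedules for the RBD pools (vms, volumes, backups, images); the empty `start_after=` indicates each pool is scanned from the beginning. Configured schedules can be listed per pool from the CLI; a sketch (empty output simply means no schedule is set):

```python
import subprocess

# List trash-purge and mirror-snapshot schedules for the pools the mgr
# reported above; no check=True so a pool without mirroring just prints (none).
for pool in ("vms", "volumes", "backups", "images"):
    for args in (["trash", "purge", "schedule", "ls"],
                 ["mirror", "snapshot", "schedule", "ls"]):
        out = subprocess.run(["rbd", *args, "--pool", pool],
                             capture_output=True, text=True)
        print(pool, " ".join(args), "->", out.stdout.strip() or "(none)")
```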
Dec 05 01:57:17 compute-0 ceph-mon[192914]: pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:17 compute-0 nova_compute[349548]: 2025-12-05 01:57:17.914 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:18 compute-0 podman[425597]: 2025-12-05 01:57:18.693431725 +0000 UTC m=+0.090405033 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:57:18 compute-0 ceph-mon[192914]: pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:18 compute-0 podman[425598]: 2025-12-05 01:57:18.758783255 +0000 UTC m=+0.149927240 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9)
Dec 05 01:57:18 compute-0 podman[425596]: 2025-12-05 01:57:18.776633055 +0000 UTC m=+0.168990664 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 01:57:18 compute-0 podman[425599]: 2025-12-05 01:57:18.803017584 +0000 UTC m=+0.183478780 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
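
The four podman lines above are healthcheck events (health_status=healthy, failing streak 0) for node_exporter, openstack_network_exporter, multipathd and ovn_controller. The same state can be read back on the host with podman inspect; a sketch, with the caveat that some podman versions expose the field as .State.Healthcheck rather than .State.Health:

    # Read back the health state podman logs above as health_status=healthy.
    # Container names are taken verbatim from the log lines.
    import subprocess

    for name in ("node_exporter", "openstack_network_exporter",
                 "multipathd", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True)
        print(name, out.stdout.strip() or out.stderr.strip())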
Dec 05 01:57:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:20 compute-0 ceph-mon[192914]: pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:21 compute-0 nova_compute[349548]: 2025-12-05 01:57:21.631 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:22 compute-0 ceph-mon[192914]: pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:22 compute-0 nova_compute[349548]: 2025-12-05 01:57:22.918 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:24 compute-0 nova_compute[349548]: 2025-12-05 01:57:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:24 compute-0 nova_compute[349548]: 2025-12-05 01:57:24.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:57:25 compute-0 nova_compute[349548]: 2025-12-05 01:57:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:25 compute-0 ceph-mon[192914]: pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.363 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.364 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.365 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
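
Each pg_autoscaler line above follows one formula: pg target = capacity ratio x bias x PG budget, where the budget works out to 300 here (mon_target_pg_per_osd's default of 100 times this cluster's 3 OSDs; the 300 is inferred from the logged values, not read from config). For 'vms', 0.00221085813879664 x 1.0 x 300 = 0.663257..., exactly as logged. The result is then quantized to a power of two and clamped by pool minimums and a change threshold, which is why most pools hold at 32. A worked check in Python:

    # Reproduce the pg_autoscaler targets logged above.
    # pg_target = capacity_ratio * bias * PG_BUDGET
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.00221085813879664,    1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }
    PG_BUDGET = 300  # mon_target_pg_per_osd (100) * 3 OSDs -- inferred

    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # Prints 0.00215..., 0.66325..., 0.07600..., 0.00061... -- matching the
    # "pg target" values in the log before power-of-two quantization.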
Dec 05 01:57:27 compute-0 ceph-mon[192914]: pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:27 compute-0 nova_compute[349548]: 2025-12-05 01:57:27.923 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.109733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848109780, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1163, "num_deletes": 251, "total_data_size": 1789397, "memory_usage": 1810976, "flush_reason": "Manual Compaction"}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848129567, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1750966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28960, "largest_seqno": 30122, "table_properties": {"data_size": 1745322, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11827, "raw_average_key_size": 19, "raw_value_size": 1734127, "raw_average_value_size": 2895, "num_data_blocks": 136, "num_entries": 599, "num_filter_entries": 599, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899733, "oldest_key_time": 1764899733, "file_creation_time": 1764899848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19932 microseconds, and 10784 cpu microseconds.
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.129663) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1750966 bytes OK
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.129690) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133652) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133718) EVENT_LOG_v1 {"time_micros": 1764899848133701, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1784053, prev total WAL file size 1784053, number of live WAL files 2.
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.136554) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1709KB)], [65(7006KB)]
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848136690, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8925940, "oldest_snapshot_seqno": -1}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4928 keys, 7205078 bytes, temperature: kUnknown
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848263879, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7205078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7173268, "index_size": 18388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124573, "raw_average_key_size": 25, "raw_value_size": 7085077, "raw_average_value_size": 1437, "num_data_blocks": 757, "num_entries": 4928, "num_filter_entries": 4928, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.264178) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7205078 bytes
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.266651) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.1 rd, 56.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.8 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 5442, records dropped: 514 output_compression: NoCompression
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.266672) EVENT_LOG_v1 {"time_micros": 1764899848266662, "job": 36, "event": "compaction_finished", "compaction_time_micros": 127321, "compaction_time_cpu_micros": 27712, "output_level": 6, "num_output_files": 1, "total_output_size": 7205078, "num_input_records": 5442, "num_output_records": 4928, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848267303, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848268810, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.135754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
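
The JOB 36 summary above ends with "read-write-amplify(9.2) write-amplify(4.1)", and both figures follow directly from the byte counts in the surrounding events: write amplification is output bytes over the level-0 input, and read-write amplification adds all bytes read. A quick check with the exact sizes logged:

    # Recompute JOB 36's amplification figures from the logged byte counts.
    l0_input  = 1_750_966   # table #67: flush output, level-0 compaction input
    all_input = 8_925_940   # "input_data_size": table #67 + table #65
    output    = 7_205_078   # table #68: compaction output at level 6

    print(round(output / l0_input, 1))                 # 4.1 -> write-amplify(4.1)
    print(round((all_input + output) / l0_input, 1))   # 9.2 -> read-write-amplify(9.2)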
Dec 05 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.779 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.809 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.809 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
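
The network_info blob logged above is plain JSON, so extracting the instance's fixed and floating addresses is a couple of nested loops. A sketch over the same structure, trimmed to the address fields actually used:

    # Pull fixed + floating IPs out of a network_info entry shaped like the
    # one logged above (structure trimmed to the fields read here).
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.25", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.236",
                                       "type": "floating"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)
    # -> 192.168.0.25 -> ['192.168.122.236'], matching the logged cache entry.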
Dec 05 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.810 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.102 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:57:29 compute-0 ceph-mon[192914]: pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:57:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471495524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.614 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
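
As the two processutils lines show, the resource tracker shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` and parses the JSON to size the RBD-backed disk pool. A minimal reproduction of that call, using the top-level key names from the standard `ceph df` JSON output:

    # Run the same command the resource tracker logs above and read the
    # cluster-wide totals from the returned JSON.
    import json, subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))
    print("total bytes:", stats["stats"]["total_bytes"],
          "avail bytes:", stats["stats"]["total_avail_bytes"])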
Dec 05 01:57:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:29 compute-0 podman[158197]: time="2025-12-05T01:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:57:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.773 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.775 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
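
The two podman access-log lines show the libpod REST API being scraped over the local socket (podman_exporter's path; its container config later in the log points CONTAINER_HOST at unix:///run/podman/podman.sock). The same GET can be issued from Python's standard library by pointing an HTTPConnection at the unix socket; a sketch, assuming read access to that socket:

    # Issue the container-list GET from the access log above over the
    # podman unix socket, using only the standard library.
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")   # host is unused over a unix socket
            self.unix_path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))   # expect 200, as in the access log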
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.786 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.787 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.789 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.799 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.799 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.800 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.810 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:57:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3471495524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:57:30 compute-0 ceph-mon[192914]: pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.408 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3209MB free_disk=59.855655670166016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.409 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.410 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.532 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.679 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:57:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:57:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845196599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.187 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.203 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.220 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.222 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.222 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
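
The inventory reported to placement at 01:57:31 determines schedulable capacity as (total - reserved) x allocation_ratio per resource class, so the logged figures give 32 vCPUs, 7168 MB of RAM and 52.2 GB of disk: consistent with 4 of 8 physical vCPUs allocated while placement still has headroom. Worked from the logged inventory:

    # Effective placement capacity from the inventory logged above:
    # schedulable = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2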
Dec 05 01:57:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2845196599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:32 compute-0 ceph-mon[192914]: pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:32 compute-0 nova_compute[349548]: 2025-12-05 01:57:32.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:33 compute-0 nova_compute[349548]: 2025-12-05 01:57:33.222 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:33 compute-0 nova_compute[349548]: 2025-12-05 01:57:33.255 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:57:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:34 compute-0 ceph-mon[192914]: pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:35 compute-0 podman[425723]: 2025-12-05 01:57:35.677365629 +0000 UTC m=+0.083037767 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:35 compute-0 podman[425724]: 2025-12-05 01:57:35.693240113 +0000 UTC m=+0.092383688 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 01:57:36 compute-0 nova_compute[349548]: 2025-12-05 01:57:36.640 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:36 compute-0 ceph-mon[192914]: pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:37 compute-0 podman[425764]: 2025-12-05 01:57:37.716433524 +0000 UTC m=+0.115202737 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 05 01:57:37 compute-0 podman[425765]: 2025-12-05 01:57:37.718870503 +0000 UTC m=+0.114759245 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 01:57:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:37 compute-0 nova_compute[349548]: 2025-12-05 01:57:37.934 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:38 compute-0 ceph-mon[192914]: pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:40 compute-0 ceph-mon[192914]: pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:41 compute-0 nova_compute[349548]: 2025-12-05 01:57:41.643 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:41 compute-0 podman[425803]: 2025-12-05 01:57:41.705819179 +0000 UTC m=+0.102444910 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec 05 01:57:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:42 compute-0 nova_compute[349548]: 2025-12-05 01:57:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:43 compute-0 ceph-mon[192914]: pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:43 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 01:57:45 compute-0 ceph-mon[192914]: pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:57:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:57:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:57:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:57:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:57:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:57:46 compute-0 ceph-mon[192914]: pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:57:46 compute-0 nova_compute[349548]: 2025-12-05 01:57:46.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 01:57:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:47 compute-0 nova_compute[349548]: 2025-12-05 01:57:47.942 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:49 compute-0 ceph-mon[192914]: pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:49 compute-0 podman[425826]: 2025-12-05 01:57:49.722636844 +0000 UTC m=+0.111510664 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:57:49 compute-0 podman[425825]: 2025-12-05 01:57:49.738733265 +0000 UTC m=+0.147597185 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 01:57:49 compute-0 podman[425833]: 2025-12-05 01:57:49.748220841 +0000 UTC m=+0.109523329 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 05 01:57:49 compute-0 podman[425827]: 2025-12-05 01:57:49.78355607 +0000 UTC m=+0.159479487 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 05 01:57:50 compute-0 ceph-mon[192914]: pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:51 compute-0 nova_compute[349548]: 2025-12-05 01:57:51.649 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:52 compute-0 sudo[425904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:52 compute-0 sudo[425904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:52 compute-0 sudo[425904]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:52 compute-0 sudo[425929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:57:52 compute-0 sudo[425929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:52 compute-0 sudo[425929]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:52 compute-0 sudo[425954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:52 compute-0 sudo[425954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:52 compute-0 sudo[425954]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:52 compute-0 sudo[425979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:57:52 compute-0 sudo[425979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:52 compute-0 nova_compute[349548]: 2025-12-05 01:57:52.947 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:53 compute-0 sudo[425979]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:53 compute-0 ceph-mon[192914]: pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 285f0195-8471-4e9a-964d-3e6be48094b6 does not exist
Dec 05 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e3cf2af1-8a62-4b85-adff-20657386a7a8 does not exist
Dec 05 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bfcc8e7c-021b-47f0-8331-32712b614db6 does not exist
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:57:53 compute-0 sudo[426035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:53 compute-0 sudo[426035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:53 compute-0 sudo[426035]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:53 compute-0 sudo[426060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:57:53 compute-0 sudo[426060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:53 compute-0 sudo[426060]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:53 compute-0 sudo[426085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:53 compute-0 sudo[426085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:53 compute-0 sudo[426085]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:53 compute-0 sudo[426110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:57:53 compute-0 sudo[426110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.106463415 +0000 UTC m=+0.067094760 container create 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.075790456 +0000 UTC m=+0.036421841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:57:54 compute-0 systemd[1]: Started libpod-conmon-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope.
Dec 05 01:57:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.264324986 +0000 UTC m=+0.224956361 container init 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.282943318 +0000 UTC m=+0.243574663 container start 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.288477463 +0000 UTC m=+0.249108848 container attach 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:57:54 compute-0 peaceful_jang[426186]: 167 167
Dec 05 01:57:54 compute-0 systemd[1]: libpod-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope: Deactivated successfully.
Dec 05 01:57:54 compute-0 podman[426191]: 2025-12-05 01:57:54.389715528 +0000 UTC m=+0.063594532 container died 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf9e22a6837e1cf8ff94cb67fe1e42cc0b7bd70b1fcf6fce5e3e55a0306a04d3-merged.mount: Deactivated successfully.
Dec 05 01:57:54 compute-0 podman[426191]: 2025-12-05 01:57:54.464033619 +0000 UTC m=+0.137912573 container remove 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:54 compute-0 systemd[1]: libpod-conmon-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope: Deactivated successfully.
Dec 05 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.794051712 +0000 UTC m=+0.107355798 container create bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.747370794 +0000 UTC m=+0.060674930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:57:54 compute-0 systemd[1]: Started libpod-conmon-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope.
Dec 05 01:57:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.991427679 +0000 UTC m=+0.304731755 container init bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 01:57:55 compute-0 podman[426212]: 2025-12-05 01:57:55.017260153 +0000 UTC m=+0.330564209 container start bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:57:55 compute-0 podman[426212]: 2025-12-05 01:57:55.023429375 +0000 UTC m=+0.336733501 container attach bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:55 compute-0 ceph-mon[192914]: pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.189 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:57:56 compute-0 unruffled_neumann[426229]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:57:56 compute-0 unruffled_neumann[426229]: --> relative data size: 1.0
Dec 05 01:57:56 compute-0 unruffled_neumann[426229]: --> All data devices are unavailable
Dec 05 01:57:56 compute-0 systemd[1]: libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Deactivated successfully.
Dec 05 01:57:56 compute-0 systemd[1]: libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Consumed 1.281s CPU time.
Dec 05 01:57:56 compute-0 conmon[426229]: conmon bba846903659aabfc736 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope/container/memory.events
Dec 05 01:57:56 compute-0 podman[426258]: 2025-12-05 01:57:56.450049588 +0000 UTC m=+0.048842299 container died bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538-merged.mount: Deactivated successfully.
Dec 05 01:57:56 compute-0 podman[426258]: 2025-12-05 01:57:56.564563605 +0000 UTC m=+0.163356226 container remove bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec 05 01:57:56 compute-0 systemd[1]: libpod-conmon-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Deactivated successfully.
Dec 05 01:57:56 compute-0 sudo[426110]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:56 compute-0 nova_compute[349548]: 2025-12-05 01:57:56.653 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:56 compute-0 sudo[426272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:56 compute-0 sudo[426272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:56 compute-0 sudo[426272]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:56 compute-0 sudo[426297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:57:56 compute-0 sudo[426297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:56 compute-0 sudo[426297]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:56 compute-0 sudo[426322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:57:56 compute-0 sudo[426322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:56 compute-0 sudo[426322]: pam_unix(sudo:session): session closed for user root
Dec 05 01:57:57 compute-0 sudo[426347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:57:57 compute-0 sudo[426347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:57:57 compute-0 ceph-mon[192914]: pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:57 compute-0 podman[426407]: 2025-12-05 01:57:57.677104733 +0000 UTC m=+0.054376194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:57:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:57:57 compute-0 nova_compute[349548]: 2025-12-05 01:57:57.952 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.125225993 +0000 UTC m=+0.502497354 container create 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 01:57:58 compute-0 systemd[1]: Started libpod-conmon-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope.
Dec 05 01:57:58 compute-0 ceph-mon[192914]: pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.258839475 +0000 UTC m=+0.636110916 container init 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.275247394 +0000 UTC m=+0.652518765 container start 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.280232164 +0000 UTC m=+0.657503575 container attach 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 01:57:58 compute-0 distracted_heyrovsky[426423]: 167 167
Dec 05 01:57:58 compute-0 systemd[1]: libpod-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope: Deactivated successfully.
Dec 05 01:57:58 compute-0 podman[426428]: 2025-12-05 01:57:58.353148206 +0000 UTC m=+0.042946664 container died 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-938d895de4356d7aacf703ebcc71115207f9a1cbb4db079a925be15f6797039d-merged.mount: Deactivated successfully.
Dec 05 01:57:58 compute-0 podman[426428]: 2025-12-05 01:57:58.412649862 +0000 UTC m=+0.102448300 container remove 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:57:58 compute-0 systemd[1]: libpod-conmon-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope: Deactivated successfully.
Dec 05 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.75179628 +0000 UTC m=+0.081996937 container create edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.717744587 +0000 UTC m=+0.047945294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:57:58 compute-0 systemd[1]: Started libpod-conmon-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope.
Dec 05 01:57:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.905529646 +0000 UTC m=+0.235730323 container init edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.923492959 +0000 UTC m=+0.253693646 container start edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.929793765 +0000 UTC m=+0.259994522 container attach edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 05 01:57:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:57:59 compute-0 funny_babbage[426466]: {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     "0": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "devices": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "/dev/loop3"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             ],
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_name": "ceph_lv0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_size": "21470642176",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "name": "ceph_lv0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "tags": {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_name": "ceph",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.crush_device_class": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.encrypted": "0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_id": "0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.vdo": "0"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             },
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "vg_name": "ceph_vg0"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         }
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     ],
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     "1": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "devices": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "/dev/loop4"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             ],
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_name": "ceph_lv1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_size": "21470642176",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "name": "ceph_lv1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "tags": {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_name": "ceph",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.crush_device_class": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.encrypted": "0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_id": "1",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.vdo": "0"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             },
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "vg_name": "ceph_vg1"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         }
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     ],
Dec 05 01:57:59 compute-0 podman[158197]: time="2025-12-05T01:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     "2": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "devices": [
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "/dev/loop5"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             ],
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_name": "ceph_lv2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_size": "21470642176",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "name": "ceph_lv2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "tags": {
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.cluster_name": "ceph",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.crush_device_class": "",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.encrypted": "0",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osd_id": "2",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:                 "ceph.vdo": "0"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             },
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "type": "block",
Dec 05 01:57:59 compute-0 funny_babbage[426466]:             "vg_name": "ceph_vg2"
Dec 05 01:57:59 compute-0 funny_babbage[426466]:         }
Dec 05 01:57:59 compute-0 funny_babbage[426466]:     ]
Dec 05 01:57:59 compute-0 funny_babbage[426466]: }
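The JSON document printed by the funny_babbage container above matches the output format of "ceph-volume lvm list --format json" (the call cephadm wraps in this container): a map from OSD id to the logical volumes backing it. As a minimal sketch, assuming that JSON has been captured to a hypothetical file lvm_list.json, the OSD-to-device mapping can be pulled out like this:

    import json

    # Parse the "ceph-volume lvm list --format json" document shown above;
    # lvm_list.json is an assumed capture of it, not a file from this host.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list of LVs.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")

Against the listing above this prints three lines, e.g. osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186.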
Dec 05 01:57:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45381 "" "Go-http-client/1.1"
Dec 05 01:57:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9047 "" "Go-http-client/1.1"
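The two GET lines above are the podman system service answering libpod REST calls; this is how the exporter polls container state. A minimal sketch of issuing the same containers/json query over the API socket, assuming the unix:///run/podman/podman.sock path that appears in the podman_exporter config_data later in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket (the libpod API has no TCP port here)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint and query string as the logged request.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")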
Dec 05 01:57:59 compute-0 systemd[1]: libpod-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope: Deactivated successfully.
Dec 05 01:57:59 compute-0 podman[426449]: 2025-12-05 01:57:59.791501977 +0000 UTC m=+1.121702634 container died edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff-merged.mount: Deactivated successfully.
Dec 05 01:57:59 compute-0 podman[426449]: 2025-12-05 01:57:59.8776674 +0000 UTC m=+1.207868057 container remove edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 01:57:59 compute-0 systemd[1]: libpod-conmon-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope: Deactivated successfully.
Dec 05 01:57:59 compute-0 sudo[426347]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:00 compute-0 sudo[426487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:58:00 compute-0 sudo[426487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:00 compute-0 sudo[426487]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:00 compute-0 sudo[426512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:58:00 compute-0 sudo[426512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:00 compute-0 sudo[426512]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:00 compute-0 sudo[426537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:58:00 compute-0 sudo[426537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:00 compute-0 sudo[426537]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:00 compute-0 sudo[426562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:58:00 compute-0 sudo[426562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:00 compute-0 ceph-mon[192914]: pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:00 compute-0 podman[426626]: 2025-12-05 01:58:00.886563715 +0000 UTC m=+0.030145325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.016077003 +0000 UTC m=+0.159658633 container create f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 05 01:58:01 compute-0 systemd[1]: Started libpod-conmon-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope.
Dec 05 01:58:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.145367444 +0000 UTC m=+0.288949084 container init f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.155379714 +0000 UTC m=+0.298961314 container start f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.160740184 +0000 UTC m=+0.304321874 container attach f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:58:01 compute-0 agitated_sinoussi[426642]: 167 167
Dec 05 01:58:01 compute-0 systemd[1]: libpod-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope: Deactivated successfully.
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.164307574 +0000 UTC m=+0.307889174 container died f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0a2778602f8e5b7f4311324cca8d41a1bfdfd28f03fa9244e50d42a858f009-merged.mount: Deactivated successfully.
Dec 05 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.224245213 +0000 UTC m=+0.367826813 container remove f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:58:01 compute-0 systemd[1]: libpod-conmon-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope: Deactivated successfully.
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:58:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.470080557 +0000 UTC m=+0.063121818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.499374858 +0000 UTC m=+0.092416099 container create 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 01:58:01 compute-0 systemd[1]: Started libpod-conmon-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope.
Dec 05 01:58:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.643038291 +0000 UTC m=+0.236079582 container init 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 05 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.662321161 +0000 UTC m=+0.255362372 container start 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.66797684 +0000 UTC m=+0.261018121 container attach 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:58:01 compute-0 nova_compute[349548]: 2025-12-05 01:58:01.669 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:02 compute-0 ceph-mon[192914]: pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:02 compute-0 vigorous_cray[426679]: {
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_id": 0,
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "type": "bluestore"
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     },
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_id": 1,
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "type": "bluestore"
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     },
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_id": 2,
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:         "type": "bluestore"
Dec 05 01:58:02 compute-0 vigorous_cray[426679]:     }
Dec 05 01:58:02 compute-0 vigorous_cray[426679]: }
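This second JSON document, from vigorous_cray, is the result of the "raw list --format json" call issued through cephadm at 01:58:00 above: a map from OSD fsid to the bluestore device behind it. A small consistency check, assuming both documents were captured to the hypothetical files raw_list.json and lvm_list.json used in the sketch earlier, is that every osd_uuid here also appears as a ceph.osd_fsid tag in the LVM listing:

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Collect the osd fsids advertised by the LVM tags, then verify the
    # raw listing agrees (it does for osd.0/1/2 in the output above).
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lvs in lvm.values() for lv in lvs}
    for fsid, meta in raw.items():
        status = "consistent" if fsid in lvm_fsids else "MISSING from lvm list"
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}: {status}")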
Dec 05 01:58:02 compute-0 systemd[1]: libpod-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Deactivated successfully.
Dec 05 01:58:02 compute-0 podman[426664]: 2025-12-05 01:58:02.846279699 +0000 UTC m=+1.439320920 container died 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 01:58:02 compute-0 systemd[1]: libpod-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Consumed 1.175s CPU time.
Dec 05 01:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d-merged.mount: Deactivated successfully.
Dec 05 01:58:02 compute-0 podman[426664]: 2025-12-05 01:58:02.922689589 +0000 UTC m=+1.515730800 container remove 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec 05 01:58:02 compute-0 systemd[1]: libpod-conmon-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Deactivated successfully.
Dec 05 01:58:02 compute-0 sudo[426562]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:02 compute-0 nova_compute[349548]: 2025-12-05 01:58:02.956 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:58:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:58:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:58:02 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 116ffca0-c04b-43b3-96ea-fe3552f6b185 does not exist
Dec 05 01:58:02 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b3f8ea15-c77a-481b-9e0e-b17d64055afd does not exist
Dec 05 01:58:03 compute-0 sudo[426726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:58:03 compute-0 sudo[426726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:03 compute-0 sudo[426726]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:03 compute-0 sudo[426751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:58:03 compute-0 sudo[426751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:58:03 compute-0 sudo[426751]: pam_unix(sudo:session): session closed for user root
Dec 05 01:58:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:58:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:58:04 compute-0 ceph-mon[192914]: pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:06 compute-0 nova_compute[349548]: 2025-12-05 01:58:06.659 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:06 compute-0 podman[426776]: 2025-12-05 01:58:06.71806728 +0000 UTC m=+0.113329615 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 01:58:06 compute-0 podman[426777]: 2025-12-05 01:58:06.718151242 +0000 UTC m=+0.103926451 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:58:07 compute-0 ceph-mon[192914]: pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:07 compute-0 nova_compute[349548]: 2025-12-05 01:58:07.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:08 compute-0 podman[426816]: 2025-12-05 01:58:08.707819484 +0000 UTC m=+0.110755852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 01:58:08 compute-0 podman[426817]: 2025-12-05 01:58:08.719221814 +0000 UTC m=+0.105794684 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 01:58:09 compute-0 ceph-mon[192914]: pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:11 compute-0 ceph-mon[192914]: pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:11 compute-0 nova_compute[349548]: 2025-12-05 01:58:11.662 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:12 compute-0 podman[426856]: 2025-12-05 01:58:12.708809265 +0000 UTC m=+0.125856556 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 01:58:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:12 compute-0 nova_compute[349548]: 2025-12-05 01:58:12.966 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:13 compute-0 ceph-mon[192914]: pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:15 compute-0 ceph-mon[192914]: pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:58:16
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:58:16 compute-0 ceph-mon[192914]: pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:16 compute-0 nova_compute[349548]: 2025-12-05 01:58:16.663 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:58:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:17 compute-0 nova_compute[349548]: 2025-12-05 01:58:17.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:18 compute-0 ceph-mon[192914]: pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:20 compute-0 podman[426878]: 2025-12-05 01:58:20.716751342 +0000 UTC m=+0.117596724 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 01:58:20 compute-0 podman[426877]: 2025-12-05 01:58:20.726869406 +0000 UTC m=+0.129617272 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 01:58:20 compute-0 podman[426880]: 2025-12-05 01:58:20.730787765 +0000 UTC m=+0.115100514 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm)
Dec 05 01:58:20 compute-0 podman[426879]: 2025-12-05 01:58:20.76094104 +0000 UTC m=+0.156870695 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 01:58:20 compute-0 ceph-mon[192914]: pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:21 compute-0 nova_compute[349548]: 2025-12-05 01:58:21.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:22 compute-0 ceph-mon[192914]: pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:22 compute-0 nova_compute[349548]: 2025-12-05 01:58:22.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:24 compute-0 nova_compute[349548]: 2025-12-05 01:58:24.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:24 compute-0 nova_compute[349548]: 2025-12-05 01:58:24.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
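
The skip above is a configuration guard: _reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive. A minimal sketch of the guard; nova's real method takes a request context and queries the database:

# Minimal sketch of the guard logged above (not nova's actual code).
reclaim_instance_interval = 0  # stands in for CONF.reclaim_instance_interval

def _reclaim_queued_deletes() -> None:
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ...otherwise purge instances SOFT_DELETED for longer than the interval

_reclaim_queued_deletes()
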
Dec 05 01:58:24 compute-0 ceph-mon[192914]: pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:25 compute-0 nova_compute[349548]: 2025-12-05 01:58:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:26 compute-0 nova_compute[349548]: 2025-12-05 01:58:26.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:26 compute-0 nova_compute[349548]: 2025-12-05 01:58:26.670 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
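
The pg_autoscaler lines above apply a simple proportion: raw pg target = usage ratio x bias x (OSD count x mon_target_pg_per_osd), and the result is then quantized to a power of two subject to per-pool floors. A minimal Python sketch, assuming 3 OSDs and the default mon_target_pg_per_osd of 100 (both inferred, not logged, but they reproduce the factor of 300 seen in every line):

# Minimal sketch (not Ceph's implementation) of the autoscaler math above.
# The OSD count of 3 and mon_target_pg_per_osd of 100 are assumptions that
# reproduce the logged factor of 300.
def raw_pg_target(usage_ratio: float, bias: float,
                  num_osds: int = 3, target_pg_per_osd: int = 100) -> float:
    return usage_ratio * bias * num_osds * target_pg_per_osd

def quantize(target: float, pg_floor: int = 32) -> int:
    # Ceph rounds to a power of two and applies per-pool floors/hysteresis;
    # the fixed floor here is a simplification (the metadata pool above gets 16).
    n = 1
    while n < target:
        n *= 2
    return max(n, pg_floor)

print(raw_pg_target(0.00221085813879664, 1.0))  # 0.663257441638992 ('vms')
print(quantize(0.663257441638992))              # 32, matching the log
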
Dec 05 01:58:26 compute-0 ceph-mon[192914]: pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 01:58:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.904 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.905 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.905 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.906 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.979 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:28 compute-0 ceph-mon[192914]: pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:29 compute-0 podman[158197]: time="2025-12-05T01:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:58:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:58:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8625 "" "Go-http-client/1.1"
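
The two HTTP requests above are libpod REST API calls arriving over the podman socket (the exporter config later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch of the same container-list call from Python, assuming access to that socket (typically root):

import http.client
import json
import socket

# Minimal sketch of the libpod container-list call logged above, issued over
# the podman socket; the socket path matches the exporter's CONTAINER_HOST.
class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers))
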
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.406 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.424 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.425 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.426 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.427 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.459 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.460 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.461 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
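
The acquiring/acquired/released triple above is oslo.concurrency's named-semaphore pattern: the whole critical section runs under the "compute_resources" lock. A minimal sketch of the decorator form, assuming oslo.concurrency is installed; nova's own wrapper around the resource tracker differs in detail:

from oslo_concurrency import lockutils

# Minimal sketch of the named-semaphore pattern behind the lock lines above.
@lockutils.synchronized("compute_resources")
def clean_compute_node_cache() -> None:
    # Everything here runs with the "compute_resources" lock held, which is
    # what keeps the resource-tracker updates serialized.
    pass
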
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.462 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.462 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:58:30 compute-0 ceph-mon[192914]: pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:58:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317975351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.001 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
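
The resource audit above shells out to the ceph CLI and parses its JSON output to size the RBD-backed disk pool. A standalone sketch of the same round trip, assuming the ceph CLI and the client.openstack keyring are available; nova itself routes the call through oslo_concurrency.processutils as logged:

import json
import subprocess

# Standalone sketch of the "ceph df" round trip logged above.
def ceph_df(client: str = "openstack",
            conf: str = "/etc/ceph/ceph.conf") -> dict:
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
    return json.loads(out)

# stats = ceph_df()
# stats["stats"]["total_avail_bytes"]  # feeds the free-disk figure below
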
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.112 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.113 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.113 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.119 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.120 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.120 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.127 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.128 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.128 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.135 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.135 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.136 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.580 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.581 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3206MB free_disk=59.855655670166016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.582 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.673 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.719 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.721 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.721 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.762 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.782 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.782 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.797 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.817 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 01:58:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/317975351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.919 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:58:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:58:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627712230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.445 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.454 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.470 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.472 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.472 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
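
Placement converts each inventory record reported above into schedulable capacity as (total - reserved) * allocation_ratio. A worked example with the logged values:

# Worked example using the inventory dict from the log lines above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
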
Dec 05 01:58:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:32 compute-0 ceph-mon[192914]: pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3627712230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.113 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.114 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.115 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:58:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:35 compute-0 ceph-mon[192914]: pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:36 compute-0 nova_compute[349548]: 2025-12-05 01:58:36.677 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:37 compute-0 ceph-mon[192914]: pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:37 compute-0 podman[427009]: 2025-12-05 01:58:37.71735372 +0000 UTC m=+0.117131162 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 01:58:37 compute-0 podman[427010]: 2025-12-05 01:58:37.745234611 +0000 UTC m=+0.126120993 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:58:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:37 compute-0 nova_compute[349548]: 2025-12-05 01:58:37.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.318 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.327 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.330 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.335 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.338 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
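The four discovery entries above dump each instance's metadata as a Python dict literal. As a minimal sketch (not part of ceilometer; the regex and script are assumptions for illustration), the dicts can be recovered from journal lines with ast.literal_eval:

    #!/usr/bin/env python3
    # Extract the "instance data: {...}" dicts from journal lines on stdin.
    # The payload is a Python dict repr, so ast.literal_eval parses it safely.
    import ast
    import re
    import sys

    PATTERN = re.compile(r"instance data: (\{.*\}) discover_libvirt_polling")

    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            inst = ast.literal_eval(m.group(1))
            print(inst["id"], inst["name"], inst["flavor"]["name"],
                  inst.get("metadata", {}).get("metering.server_group", "-"))

Piping this section through the script would list the four instances with their m1.small flavor and, where set, the metering.server_group b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1.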
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:58:38.340069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:58:38.342688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.378 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.379 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.379 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.415 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.416 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.416 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.451 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.484 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.485 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.485 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
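Each disk.device.capacity cycle above emits one cumulative sample per device, three per instance here. A small sketch (a hypothetical helper, same stdin convention as above) that totals the sampled volumes per instance UUID:

    import re
    import sys
    from collections import defaultdict

    # One capacity sample per device: "<uuid>/disk.device.capacity volume: <n>"
    SAMPLE = re.compile(r"([0-9a-f-]{36})/disk\.device\.capacity volume: (\d+)")

    totals = defaultdict(int)
    for line in sys.stdin:
        m = SAMPLE.search(line)
        if m:
            totals[m.group(1)] += int(m.group(2))

    for uuid, total in sorted(totals.items()):
        print(f"{uuid}: {total} bytes")

On the samples above, instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 totals 1073741824 + 1073741824 + 583680 = 2148067328 bytes across its three devices.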
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.488 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:58:38.488256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:58:38.491218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.581 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.582 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.583 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.676 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.756 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.757 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.758 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.827 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.828 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.829 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.830 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.832 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.833 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:58:38.832764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.835 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.836 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.837 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.837 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.838 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.839 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.839 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.840 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.840 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.842 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:58:38.843814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.845 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.846 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.848 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.849 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.850 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
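disk.device.read.bytes and disk.device.read.requests are cumulative counters, so a per-second figure needs two successive polling cycles. A sketch of the delta arithmetic (illustrative values only; this is not ceilometer's own rate transformer):

    def rate(prev_value, prev_ts, cur_value, cur_ts):
        """Per-second rate between two cumulative samples."""
        dt = cur_ts - prev_ts
        return (cur_value - prev_value) / dt if dt > 0 else 0.0

    # e.g. two read.bytes samples for the same device, 300 s apart:
    print(rate(23308800, 0.0, 23325184, 300.0))  # 16384 / 300 ≈ 54.6 B/s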
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:58:38.853553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.855 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.855 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.857 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.857 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.861 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.861 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.862 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.862 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.864 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.864 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.865 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.866 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.866 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:58:38.860781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:58:38.869147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.910 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.941 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.970 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.004 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
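power.state volume: 1 for all four instances reports a running domain. Assuming the usual libvirt/Nova power-state numbering (verify against nova.compute.power_state for your release), a lookup table for reading these samples:

    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATES.get(1, "UNKNOWN"))  # RUNNING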
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:58:39.008855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.011 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.012 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.012 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.013 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.013 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.014 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9233370301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.014 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.015 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.016 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.016 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.017 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.020 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:58:39.019357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.020 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.021 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.021 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.023 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.023 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.024 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.024 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.025 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:58:39.027592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.032 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.038 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.045 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.050 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
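The pair of coordination DEBUG lines repeated for every pollster above records the same decision each time: the coordination group name is [None] and no hashrings are configured, so this agent polls all local instances itself rather than partitioning work across agents. A schematic of that branch (illustrative names only; this is not the manager.py implementation):

def needs_coordination(group_name, hashrings):
    # group_name is the [None] in "coordination group name [None]";
    # hashrings is the set of configured partitioned groups, also empty here.
    return group_name is not None and bool(hashrings)

def should_poll(group_name, hashrings, owns_resource):
    # owns_resource is a hypothetical callable standing in for the
    # hashring membership test an agent would run when coordinated.
    if not needs_coordination(group_name, hashrings):
        return True  # the case throughout this journal: poll unconditionally
    return owns_resource()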
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.053 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:58:39.052774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.057 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.057 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.058 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.058 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:58:39.056816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.062 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
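disk.device.allocation is reported in bytes, so the repeated 1073741824 readings above are exactly 1 GiB per device, while each instance's small third device (583680 or 485376 bytes) is plausibly a config-drive-sized image; the device names are not shown in these lines, so that reading is an assumption. The conversion is just a power-of-two division:

def to_gib(n_bytes):
    # GiB, i.e. 2**30 bytes, matching the 1073741824 figures above.
    return n_bytes / 2**30

for v in (1073741824, 583680, 485376):
    print(f"{v} B = {to_gib(v):.6f} GiB")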
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.065 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.065 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.066 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.069 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.069 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 7440 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.070 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
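network.outgoing.bytes is a cumulative counter, and the .delta meter polled next reports the change since the previous cycle (compare volume 2286 for instance 7cc97c2c-... here with its delta of 70 just below). A sketch of that relationship, not ceilometer's implementation; the cache dict and function name are illustrative:

_last = {}  # (instance_uuid, meter) -> previous cumulative volume

def delta_sample(uuid, meter, cumulative):
    prev = _last.get((uuid, meter))
    _last[(uuid, meter)] = cumulative
    if prev is None or cumulative < prev:  # first poll, or counter reset
        return 0
    return cumulative - prev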
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:58:39.064484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:58:39.068469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.074 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:58:39.072982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
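Unlike the meters around it, network.outgoing.bytes.rate is skipped here because discovery returned no resources the pollster had not already handled this cycle. One plausible shape of that guard, written as a sketch (get_samples and the already_polled cache are hypothetical; this is not the manager.py code):

def run_pollster(name, get_samples, discovered, already_polled):
    # already_polled: per-cycle cache of resources this pollster handled.
    new = [r for r in discovered if r not in already_polled]
    if not new:
        print(f"Skip pollster {name}, no new resources found this cycle")
        return []
    already_polled.update(new)
    return [get_samples(r) for r in new]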
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.077 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.077 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
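memory.usage is a gauge in MB, so all four instances above sit near 49 MB resident. Relating that to instance RAM needs the flavor size, which this log does not show; the 128 MB used below is a made-up example value:

def mem_pct(usage_mb, flavor_mb):
    # Fraction of (assumed) flavor RAM actually in use, as a percentage.
    return 100.0 * usage_mb / flavor_mb

for v in (49.02734375, 48.91015625, 49.0, 49.01171875):
    print(f"{v:.2f} MB -> {mem_pct(v, 128):.1f}% of a 128 MB flavor")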
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:58:39.076087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.080 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:58:39.079175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:58:39.083410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:58:39.085619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 37770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 44210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:58:39.087997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 339210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 37720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
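The cpu meter is cumulative guest CPU time in nanoseconds, so utilization requires two polls. With a hypothetical previous reading of 339150000000 ns taken 300 s earlier for instance b82c3f0e-... (only the 339210000000 ns figure comes from this log), the arithmetic looks like:

def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus=1):
    # delta of cumulative CPU-time over wall-clock time, per vCPU.
    return 100.0 * (cur_ns - prev_ns) / (interval_s * vcpus * 1e9)

print(round(cpu_util_pct(339_150_000_000, 339_210_000_000, 300), 1), "%")

Under those assumed inputs this prints 20.0 %, i.e. 60 s of CPU time consumed in a 300 s window on one vCPU.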
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:58:39.090021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:58:39.092343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
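The block of "Finished processing pollster [...]" lines closes out the polling task: every meter started earlier in this cycle is accounted for. A quick consistency check one can run over an excerpt like this, pairing each "Polling pollster" INFO line with its "Finished polling pollster" counterpart (function name and regexes are illustrative):

import re

def unfinished_pollsters(journal_lines):
    started, finished = set(), set()
    for line in journal_lines:
        m = re.search(r"Polling pollster ([\w.]+)", line)  # case-sensitive
        if m:
            started.add(m.group(1))
        m = re.search(r"Finished polling pollster ([\w.]+)", line)
        if m:
            finished.add(m.group(1))
    return started - finished  # empty set means a clean cycle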
Dec 05 01:58:39 compute-0 ceph-mon[192914]: pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:39 compute-0 podman[427049]: 2025-12-05 01:58:39.700989732 +0000 UTC m=+0.099415485 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 05 01:58:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:39 compute-0 podman[427048]: 2025-12-05 01:58:39.716404554 +0000 UTC m=+0.131285508 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
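The two podman health_status events above show the periodic healthchecks declared in config_data ('/openstack/healthcheck ipmi' and '/openstack/healthcheck compute') both passing with a zero failing streak. The same status can be read back on demand; the inspect format key below is an assumption (docker-compatible inspect output in recent podman), so treat this as a sketch:

import subprocess

def container_health(name):
    # e.g. container_health("ceilometer_agent_compute") -> "healthy"
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()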
Dec 05 01:58:41 compute-0 ceph-mon[192914]: pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:41 compute-0 nova_compute[349548]: 2025-12-05 01:58:41.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:42 compute-0 nova_compute[349548]: 2025-12-05 01:58:42.991 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:43 compute-0 ceph-mon[192914]: pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:43 compute-0 podman[427087]: 2025-12-05 01:58:43.648542416 +0000 UTC m=+0.070669770 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543)
Dec 05 01:58:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:45 compute-0 ceph-mon[192914]: pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:58:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:58:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:58:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:58:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:58:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:58:46 compute-0 nova_compute[349548]: 2025-12-05 01:58:46.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:47 compute-0 ceph-mon[192914]: pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:47 compute-0 nova_compute[349548]: 2025-12-05 01:58:47.995 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:48 compute-0 ceph-mon[192914]: pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:50 compute-0 ceph-mon[192914]: pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:51 compute-0 nova_compute[349548]: 2025-12-05 01:58:51.689 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:51 compute-0 podman[427109]: 2025-12-05 01:58:51.713607272 +0000 UTC m=+0.099993101 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 01:58:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:51 compute-0 podman[427115]: 2025-12-05 01:58:51.731222435 +0000 UTC m=+0.101663618 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 01:58:51 compute-0 podman[427108]: 2025-12-05 01:58:51.739174968 +0000 UTC m=+0.135722282 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 01:58:51 compute-0 podman[427110]: 2025-12-05 01:58:51.777590314 +0000 UTC m=+0.150933518 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 01:58:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:52 compute-0 ceph-mon[192914]: pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:53 compute-0 nova_compute[349548]: 2025-12-05 01:58:53.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:54 compute-0 ceph-mon[192914]: pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.192 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:58:56 compute-0 nova_compute[349548]: 2025-12-05 01:58:56.690 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:56 compute-0 ceph-mon[192914]: pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:58:58 compute-0 nova_compute[349548]: 2025-12-05 01:58:58.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:58:58 compute-0 ceph-mon[192914]: pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:58:59 compute-0 podman[158197]: time="2025-12-05T01:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:58:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:58:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Dec 05 01:59:00 compute-0 ceph-mon[192914]: pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:59:01 compute-0 nova_compute[349548]: 2025-12-05 01:59:01.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:02 compute-0 ceph-mon[192914]: pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:03 compute-0 nova_compute[349548]: 2025-12-05 01:59:03.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:03 compute-0 sudo[427191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:03 compute-0 sudo[427191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:03 compute-0 sudo[427191]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:03 compute-0 sudo[427216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:59:03 compute-0 sudo[427216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:03 compute-0 sudo[427216]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:03 compute-0 sudo[427241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:03 compute-0 sudo[427241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:03 compute-0 sudo[427241]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:03 compute-0 sudo[427266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 01:59:03 compute-0 sudo[427266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:04 compute-0 sudo[427266]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 44bf2aef-3c19-4b47-b5bf-309d4d367874 does not exist
Dec 05 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 035531a3-d528-4db0-a724-16c7d23c724f does not exist
Dec 05 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ef16358e-3dd2-47b6-9afa-1d81fbf5af89 does not exist
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:59:04 compute-0 sudo[427324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:04 compute-0 sudo[427324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:04 compute-0 sudo[427324]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:04 compute-0 sudo[427349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:59:04 compute-0 sudo[427349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:04 compute-0 sudo[427349]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:04 compute-0 ceph-mon[192914]: pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 01:59:04 compute-0 sudo[427374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:04 compute-0 sudo[427374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:04 compute-0 sudo[427374]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:05 compute-0 sudo[427399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 01:59:05 compute-0 sudo[427399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.637620511 +0000 UTC m=+0.092192333 container create 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.597463726 +0000 UTC m=+0.052035628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:05 compute-0 systemd[1]: Started libpod-conmon-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope.
Dec 05 01:59:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.79150783 +0000 UTC m=+0.246079682 container init 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.802462577 +0000 UTC m=+0.257034429 container start 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.809396911 +0000 UTC m=+0.263968763 container attach 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:59:05 compute-0 amazing_lederberg[427479]: 167 167
Dec 05 01:59:05 compute-0 systemd[1]: libpod-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope: Deactivated successfully.
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.813323871 +0000 UTC m=+0.267895683 container died 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 01:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce929ff9a0beaa3e7a7b203e90a107a1c3e43836940be3d661bd2fbdc250397d-merged.mount: Deactivated successfully.
Dec 05 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.876759098 +0000 UTC m=+0.331330910 container remove 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 01:59:05 compute-0 systemd[1]: libpod-conmon-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope: Deactivated successfully.
Dec 05 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.17179091 +0000 UTC m=+0.110195407 container create ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.131721598 +0000 UTC m=+0.070126205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:06 compute-0 systemd[1]: Started libpod-conmon-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope.
Dec 05 01:59:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.332028508 +0000 UTC m=+0.270433055 container init ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.351043491 +0000 UTC m=+0.289448008 container start ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.356147913 +0000 UTC m=+0.294552430 container attach ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:59:06 compute-0 nova_compute[349548]: 2025-12-05 01:59:06.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:06 compute-0 ceph-mon[192914]: pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:07 compute-0 focused_mendeleev[427517]: --> passed data devices: 0 physical, 3 LVM
Dec 05 01:59:07 compute-0 focused_mendeleev[427517]: --> relative data size: 1.0
Dec 05 01:59:07 compute-0 focused_mendeleev[427517]: --> All data devices are unavailable
Dec 05 01:59:07 compute-0 systemd[1]: libpod-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Deactivated successfully.
Dec 05 01:59:07 compute-0 podman[427501]: 2025-12-05 01:59:07.509111912 +0000 UTC m=+1.447516439 container died ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 01:59:07 compute-0 systemd[1]: libpod-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Consumed 1.087s CPU time.
Dec 05 01:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51-merged.mount: Deactivated successfully.
Dec 05 01:59:07 compute-0 podman[427501]: 2025-12-05 01:59:07.600353988 +0000 UTC m=+1.538758505 container remove ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 01:59:07 compute-0 systemd[1]: libpod-conmon-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Deactivated successfully.
Dec 05 01:59:07 compute-0 sudo[427399]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:07 compute-0 sudo[427559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:07 compute-0 sudo[427559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:07 compute-0 sudo[427559]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:07 compute-0 podman[427583]: 2025-12-05 01:59:07.858489387 +0000 UTC m=+0.085506476 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 01:59:07 compute-0 sudo[427591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:59:07 compute-0 sudo[427591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:07 compute-0 sudo[427591]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:07 compute-0 podman[427584]: 2025-12-05 01:59:07.889239748 +0000 UTC m=+0.098393237 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 01:59:07 compute-0 sudo[427652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:07 compute-0 sudo[427652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:07 compute-0 sudo[427652]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:08 compute-0 nova_compute[349548]: 2025-12-05 01:59:08.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:08 compute-0 sudo[427677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 01:59:08 compute-0 sudo[427677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.644306494 +0000 UTC m=+0.091566995 container create 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:59:08 compute-0 systemd[1]: Started libpod-conmon-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope.
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.60774354 +0000 UTC m=+0.055004081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.76735276 +0000 UTC m=+0.214613281 container init 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.785732045 +0000 UTC m=+0.232992556 container start 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.790311123 +0000 UTC m=+0.237571624 container attach 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:59:08 compute-0 elated_albattani[427754]: 167 167
Dec 05 01:59:08 compute-0 systemd[1]: libpod-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope: Deactivated successfully.
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.796007393 +0000 UTC m=+0.243267954 container died 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f253a9a6c0a07a71477510dd348fbeb99b93b118fc7c4a7c9c10016c9179e36-merged.mount: Deactivated successfully.
Dec 05 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.8726603 +0000 UTC m=+0.319920811 container remove 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 01:59:08 compute-0 systemd[1]: libpod-conmon-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope: Deactivated successfully.
Dec 05 01:59:08 compute-0 ceph-mon[192914]: pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.138488304 +0000 UTC m=+0.073811398 container create be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.112526947 +0000 UTC m=+0.047850051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:09 compute-0 systemd[1]: Started libpod-conmon-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope.
Dec 05 01:59:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.28474657 +0000 UTC m=+0.220069694 container init be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.305546493 +0000 UTC m=+0.240869597 container start be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.311762077 +0000 UTC m=+0.247085231 container attach be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:59:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:10 compute-0 bold_rosalind[427792]: {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     "0": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "devices": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "/dev/loop3"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             ],
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_name": "ceph_lv0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_size": "21470642176",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "name": "ceph_lv0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "tags": {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_name": "ceph",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.crush_device_class": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.encrypted": "0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_id": "0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.vdo": "0"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             },
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "vg_name": "ceph_vg0"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         }
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     ],
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     "1": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "devices": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "/dev/loop4"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             ],
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_name": "ceph_lv1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_size": "21470642176",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "name": "ceph_lv1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "tags": {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_name": "ceph",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.crush_device_class": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.encrypted": "0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_id": "1",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.vdo": "0"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             },
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "vg_name": "ceph_vg1"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         }
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     ],
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     "2": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "devices": [
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "/dev/loop5"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             ],
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_name": "ceph_lv2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_size": "21470642176",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "name": "ceph_lv2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "tags": {
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.cluster_name": "ceph",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.crush_device_class": "",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.encrypted": "0",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osd_id": "2",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:                 "ceph.vdo": "0"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             },
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "type": "block",
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:             "vg_name": "ceph_vg2"
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:         }
Dec 05 01:59:10 compute-0 bold_rosalind[427792]:     ]
Dec 05 01:59:10 compute-0 bold_rosalind[427792]: }
Dec 05 01:59:10 compute-0 systemd[1]: libpod-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope: Deactivated successfully.
Dec 05 01:59:10 compute-0 podman[427776]: 2025-12-05 01:59:10.198712377 +0000 UTC m=+1.134035471 container died be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 01:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a-merged.mount: Deactivated successfully.
Dec 05 01:59:10 compute-0 podman[427776]: 2025-12-05 01:59:10.296416793 +0000 UTC m=+1.231739857 container remove be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:59:10 compute-0 systemd[1]: libpod-conmon-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope: Deactivated successfully.
Dec 05 01:59:10 compute-0 sudo[427677]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:10 compute-0 podman[427809]: 2025-12-05 01:59:10.35486467 +0000 UTC m=+0.107295246 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 01:59:10 compute-0 podman[427802]: 2025-12-05 01:59:10.3680747 +0000 UTC m=+0.137456651 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec 05 01:59:10 compute-0 sudo[427845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:10 compute-0 sudo[427845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:10 compute-0 sudo[427845]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:10 compute-0 sudo[427870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 01:59:10 compute-0 sudo[427870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:10 compute-0 sudo[427870]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:10 compute-0 sudo[427895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:10 compute-0 sudo[427895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:10 compute-0 sudo[427895]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:10 compute-0 sudo[427920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 01:59:10 compute-0 sudo[427920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:10 compute-0 ceph-mon[192914]: pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.179709219 +0000 UTC m=+0.064916389 container create c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 01:59:11 compute-0 systemd[1]: Started libpod-conmon-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope.
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.153093634 +0000 UTC m=+0.038300784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.317331234 +0000 UTC m=+0.202538374 container init c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.336419598 +0000 UTC m=+0.221626768 container start c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.343703222 +0000 UTC m=+0.228910352 container attach c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:59:11 compute-0 vibrant_shamir[427996]: 167 167
Dec 05 01:59:11 compute-0 systemd[1]: libpod-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope: Deactivated successfully.
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.354596207 +0000 UTC m=+0.239803377 container died c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 01:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5098479006f29ae869b4d53340e512f5e5d9d7da71c5ddc67debb12b48935df3-merged.mount: Deactivated successfully.
Dec 05 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.419350021 +0000 UTC m=+0.304557171 container remove c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 01:59:11 compute-0 systemd[1]: libpod-conmon-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope: Deactivated successfully.
Dec 05 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.687543722 +0000 UTC m=+0.070713442 container create b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 01:59:11 compute-0 nova_compute[349548]: 2025-12-05 01:59:11.699 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:11 compute-0 systemd[1]: Started libpod-conmon-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope.
Dec 05 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.661801871 +0000 UTC m=+0.044971631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 01:59:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.828431757 +0000 UTC m=+0.211601547 container init b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.847012418 +0000 UTC m=+0.230182128 container start b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.851924695 +0000 UTC m=+0.235094455 container attach b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:59:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:12 compute-0 ceph-mon[192914]: pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]: {
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_id": 0,
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "type": "bluestore"
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     },
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_id": 1,
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "type": "bluestore"
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     },
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_id": 2,
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:         "type": "bluestore"
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]:     }
Dec 05 01:59:12 compute-0 relaxed_lederberg[428035]: }
Dec 05 01:59:13 compute-0 nova_compute[349548]: 2025-12-05 01:59:13.017 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:13 compute-0 systemd[1]: libpod-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Deactivated successfully.
Dec 05 01:59:13 compute-0 podman[428019]: 2025-12-05 01:59:13.03183491 +0000 UTC m=+1.415004670 container died b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 01:59:13 compute-0 systemd[1]: libpod-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Consumed 1.170s CPU time.
Dec 05 01:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a-merged.mount: Deactivated successfully.
Dec 05 01:59:13 compute-0 podman[428019]: 2025-12-05 01:59:13.096069669 +0000 UTC m=+1.479239379 container remove b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 01:59:13 compute-0 systemd[1]: libpod-conmon-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Deactivated successfully.
Dec 05 01:59:13 compute-0 sudo[427920]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 01:59:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 01:59:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:13 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 732790b1-4952-41ab-98e7-4198f1c2d4c0 does not exist
Dec 05 01:59:13 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d8f4d3dd-0ce3-4d9a-b114-d21e26683301 does not exist
Dec 05 01:59:13 compute-0 sudo[428079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 01:59:13 compute-0 sudo[428079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:13 compute-0 sudo[428079]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:13 compute-0 sudo[428104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 01:59:13 compute-0 sudo[428104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 01:59:13 compute-0 sudo[428104]: pam_unix(sudo:session): session closed for user root
Dec 05 01:59:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 01:59:14 compute-0 podman[428129]: 2025-12-05 01:59:14.734647918 +0000 UTC m=+0.129192890 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec 05 01:59:15 compute-0 ceph-mon[192914]: pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:59:16
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.meta', 'backups']
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 01:59:16 compute-0 nova_compute[349548]: 2025-12-05 01:59:16.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 01:59:17 compute-0 ceph-mon[192914]: pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:18 compute-0 nova_compute[349548]: 2025-12-05 01:59:18.022 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:19 compute-0 ceph-mon[192914]: pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:21 compute-0 ceph-mon[192914]: pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:21 compute-0 nova_compute[349548]: 2025-12-05 01:59:21.705 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:22 compute-0 ceph-mon[192914]: pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:22 compute-0 podman[428152]: 2025-12-05 01:59:22.70850148 +0000 UTC m=+0.100433443 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Dec 05 01:59:22 compute-0 podman[428150]: 2025-12-05 01:59:22.712029139 +0000 UTC m=+0.107844801 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:59:22 compute-0 podman[428149]: 2025-12-05 01:59:22.714964031 +0000 UTC m=+0.113968293 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 01:59:22 compute-0 podman[428151]: 2025-12-05 01:59:22.774328654 +0000 UTC m=+0.168038577 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 05 01:59:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:23 compute-0 nova_compute[349548]: 2025-12-05 01:59:23.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:24 compute-0 nova_compute[349548]: 2025-12-05 01:59:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:24 compute-0 nova_compute[349548]: 2025-12-05 01:59:24.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 01:59:24 compute-0 ceph-mon[192914]: pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:25 compute-0 nova_compute[349548]: 2025-12-05 01:59:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:26 compute-0 nova_compute[349548]: 2025-12-05 01:59:26.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:26 compute-0 nova_compute[349548]: 2025-12-05 01:59:26.708 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:26 compute-0 ceph-mon[192914]: pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 01:59:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.864 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.865 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.866 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.867 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.868 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.871 349552 INFO nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Terminating instance
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.873 349552 DEBUG nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.921 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.922 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.923 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.028 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 kernel: tap554930d3-ff (unregistering): left promiscuous mode
Dec 05 01:59:28 compute-0 NetworkManager[49092]: <info>  [1764899968.0798] device (tap554930d3-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00050|binding|INFO|Releasing lport 554930d3-ff53-4ef1-af0a-bad6acef1456 from this chassis (sb_readonly=0)
Dec 05 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00051|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 down in Southbound
Dec 05 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00052|binding|INFO|Removing iface tap554930d3-ff ovn-installed in OVS
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.122 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:63:18 192.168.0.23'], port_security=['fa:16:3e:43:63:18 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=554930d3-ff53-4ef1-af0a-bad6acef1456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.124 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 554930d3-ff53-4ef1-af0a-bad6acef1456 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.127 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.133 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.158 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a55fb5-2a21-411f-bd26-7c6c4955db74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 05 01:59:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 1.094s CPU time.
Dec 05 01:59:28 compute-0 systemd-machined[138700]: Machine qemu-2-instance-00000002 terminated.
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.200 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0602af4f-8acb-4da8-9314-e41d44c7d307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.205 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ad160e95-d486-4684-a255-20e253527cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.245 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaaa3e3-574e-4be9-b77e-e333758dba23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.277 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[52499975-3e92-479c-9f56-e3d98b22e97b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 39496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428246, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.299 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3023ec69-8edc-4b19-b1c8-fb51ec4cbd7b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428247, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428247, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.303 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.305 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.320 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.321 349552 INFO nova.virt.libvirt.driver [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance destroyed successfully.
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.322 349552 DEBUG nova.objects.instance [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.321 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.324 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.325 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.336 349552 DEBUG nova.virt.libvirt.vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:49:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:49:19Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 05 01:59:28 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.337 349552 DEBUG nova.network.os_vif_util [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.338 349552 DEBUG nova.network.os_vif_util [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.338 349552 DEBUG os_vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.341 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap554930d3-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.346 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.350 349552 INFO os_vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff')
Dec 05 01:59:28 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:59:28.336 349552 DEBUG nova.virt.libvirt.vif [None req-ed4253f5-b0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 05 01:59:28 compute-0 ceph-mon[192914]: pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.191 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.192 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.192 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.194 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.194 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.195 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 01:59:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:29.493 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:29.494 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.585 349552 INFO nova.virt.libvirt.driver [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deleting instance files /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_del
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.586 349552 INFO nova.virt.libvirt.driver [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deletion of /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_del complete
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.594 349552 DEBUG nova.compute.manager [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.594 349552 DEBUG nova.compute.manager [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing instance network info cache due to event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.595 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.693 349552 DEBUG nova.virt.libvirt.host [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.693 349552 INFO nova.virt.libvirt.host [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] UEFI support detected
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.696 349552 INFO nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 1.82 seconds to destroy the instance on the hypervisor.
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.697 349552 DEBUG oslo.service.loopingcall [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.698 349552 DEBUG nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.698 349552 DEBUG nova.network.neutron [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 01:59:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:29 compute-0 podman[158197]: time="2025-12-05T01:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:59:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:59:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.731 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.770 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.771 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.771 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.772 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.774 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.776 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.827 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.828 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.828 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.829 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.830 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:59:30 compute-0 ceph-mon[192914]: pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:59:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700550426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.316 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.492 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.493 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.493 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.503 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.504 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.504 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.564 349552 DEBUG nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.565 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.565 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 DEBUG nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 WARNING nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received unexpected event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with vm_state active and task_state deleting.
Dec 05 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.713 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 250 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 341 B/s wr, 13 op/s
Dec 05 01:59:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2700550426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.120 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3411MB free_disk=59.855655670166016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.428 349552 DEBUG nova.network.neutron [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.563 349552 INFO nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 2.87 seconds to deallocate network for instance.
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.636 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.673 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.673 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.804 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:59:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:32 compute-0 ceph-mon[192914]: pgmap v1504: 321 pgs: 321 active+clean; 250 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 341 B/s wr, 13 op/s
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.966 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated VIF entry in instance network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.968 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 01:59:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:59:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634709864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.342 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.348 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.357 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:59:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1634709864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.914 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.927 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.961 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.962 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.963 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.111 349552 DEBUG oslo_concurrency.processutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 01:59:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 01:59:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282690679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.613 349552 DEBUG oslo_concurrency.processutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.628 349552 DEBUG nova.compute.provider_tree [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.653 349552 DEBUG nova.scheduler.client.report [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.698 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.752 349552 INFO nova.scheduler.client.report [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.837 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:34 compute-0 ceph-mon[192914]: pgmap v1505: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/282690679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.960 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.961 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.962 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:36 compute-0 nova_compute[349548]: 2025-12-05 01:59:36.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 01:59:36 compute-0 nova_compute[349548]: 2025-12-05 01:59:36.715 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:36 compute-0 ceph-mon[192914]: pgmap v1506: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:37 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:37.496 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 01:59:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:38 compute-0 nova_compute[349548]: 2025-12-05 01:59:38.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:38 compute-0 podman[428349]: 2025-12-05 01:59:38.702967813 +0000 UTC m=+0.107475251 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 01:59:38 compute-0 podman[428348]: 2025-12-05 01:59:38.721375988 +0000 UTC m=+0.121647908 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 01:59:38 compute-0 ceph-mon[192914]: pgmap v1507: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:40 compute-0 podman[428388]: 2025-12-05 01:59:40.739587988 +0000 UTC m=+0.133563362 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 01:59:40 compute-0 podman[428387]: 2025-12-05 01:59:40.740442342 +0000 UTC m=+0.144283602 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 01:59:40 compute-0 ceph-mon[192914]: pgmap v1508: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:41 compute-0 nova_compute[349548]: 2025-12-05 01:59:41.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:42 compute-0 ceph-mon[192914]: pgmap v1509: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.315 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764899968.3130796, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.316 349552 INFO nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Stopped (Lifecycle Event)
Dec 05 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.336 349552 DEBUG nova.compute.manager [None req-f514dcc7-940d-4ec9-815b-f46f5eb2d1db - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.347 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Dec 05 01:59:44 compute-0 ceph-mon[192914]: pgmap v1510: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Dec 05 01:59:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 01:59:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:59:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 01:59:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:59:45 compute-0 podman[428425]: 2025-12-05 01:59:45.684571125 +0000 UTC m=+0.105410533 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_id=edpm, name=ubi9, vendor=Red Hat, Inc.)
Dec 05 01:59:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 01:59:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 01:59:46 compute-0 nova_compute[349548]: 2025-12-05 01:59:46.722 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:46 compute-0 ceph-mon[192914]: pgmap v1511: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:48 compute-0 nova_compute[349548]: 2025-12-05 01:59:48.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:49 compute-0 ceph-mon[192914]: pgmap v1512: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:51 compute-0 ceph-mon[192914]: pgmap v1513: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:51 compute-0 nova_compute[349548]: 2025-12-05 01:59:51.727 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:53 compute-0 ceph-mon[192914]: pgmap v1514: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:53 compute-0 nova_compute[349548]: 2025-12-05 01:59:53.353 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:53 compute-0 podman[428446]: 2025-12-05 01:59:53.692698757 +0000 UTC m=+0.090536306 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 01:59:53 compute-0 podman[428449]: 2025-12-05 01:59:53.696242747 +0000 UTC m=+0.094625491 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 05 01:59:53 compute-0 podman[428447]: 2025-12-05 01:59:53.710730942 +0000 UTC m=+0.116413131 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 01:59:53 compute-0 podman[428448]: 2025-12-05 01:59:53.725671131 +0000 UTC m=+0.127927144 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 01:59:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:55 compute-0 ceph-mon[192914]: pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.191 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.192 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 01:59:56 compute-0 nova_compute[349548]: 2025-12-05 01:59:56.729 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:57 compute-0 ceph-mon[192914]: pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 01:59:58 compute-0 nova_compute[349548]: 2025-12-05 01:59:58.356 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 01:59:59 compute-0 ceph-mon[192914]: pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:59 compute-0 podman[158197]: time="2025-12-05T01:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 01:59:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 01:59:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 01:59:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
Dec 05 02:00:01 compute-0 ceph-mon[192914]: pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:00:01 compute-0 nova_compute[349548]: 2025-12-05 02:00:01.733 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:03 compute-0 ceph-mon[192914]: pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:03 compute-0 nova_compute[349548]: 2025-12-05 02:00:03.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:04 compute-0 ovn_controller[89286]: 2025-12-05T02:00:04Z|00053|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
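ovn-controller's memory_trim module returns freed heap to the OS once the daemon has been idle for ~30 s, which is what this INFO line records. For illustration only, the same glibc call reached from Python (Linux/glibc-specific):

    import ctypes

    libc = ctypes.CDLL('libc.so.6')
    libc.malloc_trim(0)  # release free heap pages back to the kernel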
Dec 05 02:00:05 compute-0 ceph-mon[192914]: pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:06 compute-0 nova_compute[349548]: 2025-12-05 02:00:06.735 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:07 compute-0 ceph-mon[192914]: pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:08 compute-0 nova_compute[349548]: 2025-12-05 02:00:08.363 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:09 compute-0 ceph-mon[192914]: pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:09 compute-0 podman[428533]: 2025-12-05 02:00:09.723341582 +0000 UTC m=+0.122046199 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:00:09 compute-0 podman[428534]: 2025-12-05 02:00:09.733375493 +0000 UTC m=+0.125327591 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:00:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:11 compute-0 ceph-mon[192914]: pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:11 compute-0 podman[428575]: 2025-12-05 02:00:11.716523192 +0000 UTC m=+0.122156032 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:00:11 compute-0 podman[428576]: 2025-12-05 02:00:11.722047296 +0000 UTC m=+0.115024442 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 02:00:11 compute-0 nova_compute[349548]: 2025-12-05 02:00:11.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:13 compute-0 ceph-mon[192914]: pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:13 compute-0 nova_compute[349548]: 2025-12-05 02:00:13.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:13 compute-0 sudo[428615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:13 compute-0 sudo[428615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:13 compute-0 sudo[428615]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:13 compute-0 sudo[428640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:00:13 compute-0 sudo[428640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:13 compute-0 sudo[428640]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:13 compute-0 sudo[428665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:13 compute-0 sudo[428665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:13 compute-0 sudo[428665]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:13 compute-0 sudo[428690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 02:00:13 compute-0 sudo[428690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:14 compute-0 podman[428782]: 2025-12-05 02:00:14.733420571 +0000 UTC m=+0.106243787 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:00:14 compute-0 podman[428782]: 2025-12-05 02:00:14.883123533 +0000 UTC m=+0.255946739 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:00:15 compute-0 ceph-mon[192914]: pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:15 compute-0 podman[428900]: 2025-12-05 02:00:15.915481444 +0000 UTC m=+0.137392688 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, container_name=kepler, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git)
Dec 05 02:00:16 compute-0 sudo[428690]: pam_unix(sudo:session): session closed for user root
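The sudo bursts around each cephadm run follow a fixed probe sequence: the mgr's cephadm module connects as ceph-admin and verifies passwordless sudo (/bin/true) and a python3 interpreter before executing the copied cephadm script. A hedged reproduction of just the probes; illustrative, not the exact remote invocation:

    import subprocess

    for cmd in (['sudo', '-n', '/bin/true'],
                ['sudo', '-n', '/bin/which', 'python3']):
        subprocess.run(cmd, check=True)  # raises if passwordless sudo is missing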
Dec 05 02:00:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:00:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:00:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:16 compute-0 sudo[428950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:16 compute-0 sudo[428950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:16 compute-0 sudo[428950]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:16 compute-0 sudo[428975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:00:16 compute-0 sudo[428975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:16 compute-0 sudo[428975]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:00:16
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta']
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
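"prepared 0/10 changes" means the upmap balancer considered up to ten optimizations across the listed pools and found none worth making: every PG is active+clean, so the misplaced ratio is 0, well under the 0.05 ceiling it logs. The gate it applies is simple; numbers below are taken from the lines above:

    misplaced_pgs, total_pgs = 0, 321   # all 321 PGs are active+clean
    max_misplaced = 0.05                # "max misplaced 0.050000"
    can_optimize = misplaced_pgs / total_pgs <= max_misplaced
    print(can_optimize)                 # True, but no moves are needed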
Dec 05 02:00:16 compute-0 sudo[429000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:16 compute-0 sudo[429000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:16 compute-0 sudo[429000]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:16 compute-0 sudo[429025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:00:16 compute-0 sudo[429025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:16 compute-0 nova_compute[349548]: 2025-12-05 02:00:16.740 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:00:17 compute-0 ceph-mon[192914]: pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:17 compute-0 sudo[429025]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 84fffc59-8bf5-4592-bdf5-4f28decde7b7 does not exist
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0dec3bc4-c7cd-4519-832c-8155773f6663 does not exist
Dec 05 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8547deb3-2b7f-4c5a-9e0e-0a361f82e02d does not exist
Dec 05 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:00:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:00:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:00:18 compute-0 sudo[429081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:18 compute-0 sudo[429081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:18 compute-0 sudo[429081]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:18 compute-0 sudo[429106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:00:18 compute-0 sudo[429106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:18 compute-0 sudo[429106]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:18 compute-0 nova_compute[349548]: 2025-12-05 02:00:18.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:18 compute-0 sudo[429131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:18 compute-0 sudo[429131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:18 compute-0 sudo[429131]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:18 compute-0 sudo[429156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:00:18 compute-0 sudo[429156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:18 compute-0 ceph-mon[192914]: pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.065321137 +0000 UTC m=+0.064663791 container create eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.033199468 +0000 UTC m=+0.032542162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:19 compute-0 systemd[1]: Started libpod-conmon-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope.
Dec 05 02:00:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.190177304 +0000 UTC m=+0.189519998 container init eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.199602758 +0000 UTC m=+0.198945412 container start eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.20432052 +0000 UTC m=+0.203663174 container attach eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:00:19 compute-0 silly_nobel[429235]: 167 167
Dec 05 02:00:19 compute-0 systemd[1]: libpod-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope: Deactivated successfully.
Dec 05 02:00:19 compute-0 conmon[429235]: conmon eeec8b499590f3974c6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope/container/memory.events
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.212369876 +0000 UTC m=+0.211712580 container died eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9560a9f41cb17ad6f7557fdeffdbf653903976e73370e44dff15388dcfda3bc5-merged.mount: Deactivated successfully.
Dec 05 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.288958711 +0000 UTC m=+0.288301375 container remove eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:00:19 compute-0 systemd[1]: libpod-conmon-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope: Deactivated successfully.
Dec 05 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.531312348 +0000 UTC m=+0.077159492 container create 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:00:19 compute-0 systemd[1]: Started libpod-conmon-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope.
Dec 05 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.505508605 +0000 UTC m=+0.051355829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.672826711 +0000 UTC m=+0.218673885 container init 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.686316609 +0000 UTC m=+0.232163753 container start 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.691417802 +0000 UTC m=+0.237264946 container attach 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec 05 02:00:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:20 compute-0 ceph-mon[192914]: pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:20 compute-0 lucid_wilson[429273]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:00:20 compute-0 lucid_wilson[429273]: --> relative data size: 1.0
Dec 05 02:00:20 compute-0 lucid_wilson[429273]: --> All data devices are unavailable
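"All data devices are unavailable" is the idempotent path: each LV passed to lvm batch already carries ceph lv_tags from an earlier deployment (visible in the lvm list report further below), so ceph-volume plans no new OSDs and exits without changes. A sketch of that availability test, using a tag string shortened from the report below:

    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.osd_id=0,ceph.type=block")
    tags = dict(kv.split('=', 1) for kv in lv_tags.split(','))
    print('unavailable' if 'ceph.osd_id' in tags else 'available')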
Dec 05 02:00:20 compute-0 systemd[1]: libpod-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Deactivated successfully.
Dec 05 02:00:20 compute-0 podman[429258]: 2025-12-05 02:00:20.945873434 +0000 UTC m=+1.491720608 container died 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 02:00:20 compute-0 systemd[1]: libpod-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Consumed 1.167s CPU time.
Dec 05 02:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c-merged.mount: Deactivated successfully.
Dec 05 02:00:21 compute-0 podman[429258]: 2025-12-05 02:00:21.049381483 +0000 UTC m=+1.595228627 container remove 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:00:21 compute-0 systemd[1]: libpod-conmon-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Deactivated successfully.
Dec 05 02:00:21 compute-0 sudo[429156]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:21 compute-0 sudo[429316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:21 compute-0 sudo[429316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:21 compute-0 sudo[429316]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:21 compute-0 sudo[429341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:00:21 compute-0 sudo[429341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:21 compute-0 sudo[429341]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:21 compute-0 sudo[429366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:21 compute-0 sudo[429366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:21 compute-0 sudo[429366]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:21 compute-0 sudo[429391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:00:21 compute-0 sudo[429391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
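This invocation (ceph-volume ... lvm list --format json) produces the JSON report printed by the sharp_montalcini container further below: a map of OSD id to its logical volumes and tags. A hedged consumer sketch, assuming that report was captured to a file named lvm-list.json (hypothetical name):

    import json

    with open('lvm-list.json') as f:    # hypothetical capture of the report below
        report = json.load(f)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['lv_size'])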
Dec 05 02:00:21 compute-0 nova_compute[349548]: 2025-12-05 02:00:21.743 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.108646078 +0000 UTC m=+0.070418193 container create 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.079368638 +0000 UTC m=+0.041140813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:22 compute-0 systemd[1]: Started libpod-conmon-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope.
Dec 05 02:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.255044278 +0000 UTC m=+0.216816433 container init 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.272564189 +0000 UTC m=+0.234336304 container start 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:00:22 compute-0 zen_solomon[429470]: 167 167
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.279634697 +0000 UTC m=+0.241406802 container attach 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec 05 02:00:22 compute-0 systemd[1]: libpod-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope: Deactivated successfully.
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.280551503 +0000 UTC m=+0.242323598 container died 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d7b181d0f25f72f66b4af84e184d92c4b16884c2e0c692266b4e6054ef3b9cc-merged.mount: Deactivated successfully.
Dec 05 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.332667612 +0000 UTC m=+0.294439697 container remove 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:00:22 compute-0 systemd[1]: libpod-conmon-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope: Deactivated successfully.
Dec 05 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.629224767 +0000 UTC m=+0.079047284 container create 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.604220886 +0000 UTC m=+0.054043373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:22 compute-0 systemd[1]: Started libpod-conmon-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope.
Dec 05 02:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.811608984 +0000 UTC m=+0.261431571 container init 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:00:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.832544471 +0000 UTC m=+0.282366968 container start 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.840224936 +0000 UTC m=+0.290047463 container attach 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:00:22 compute-0 ceph-mon[192914]: pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.372 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]: {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     "0": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "devices": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "/dev/loop3"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             ],
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_name": "ceph_lv0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_size": "21470642176",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "name": "ceph_lv0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "tags": {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_name": "ceph",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.crush_device_class": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.encrypted": "0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_id": "0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.vdo": "0"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             },
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "vg_name": "ceph_vg0"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         }
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     ],
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     "1": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "devices": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "/dev/loop4"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             ],
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_name": "ceph_lv1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_size": "21470642176",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "name": "ceph_lv1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "tags": {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_name": "ceph",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.crush_device_class": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.encrypted": "0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_id": "1",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.vdo": "0"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             },
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "vg_name": "ceph_vg1"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         }
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     ],
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     "2": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "devices": [
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "/dev/loop5"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             ],
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_name": "ceph_lv2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_size": "21470642176",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "name": "ceph_lv2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "tags": {
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.cluster_name": "ceph",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.crush_device_class": "",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.encrypted": "0",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osd_id": "2",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:                 "ceph.vdo": "0"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             },
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "type": "block",
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:             "vg_name": "ceph_vg2"
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:         }
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]:     ]
Dec 05 02:00:23 compute-0 sharp_montalcini[429510]: }
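The JSON block above, written to the journal by the short-lived `sharp_montalcini` container, is the output of `ceph-volume lvm list --format json`: a map from OSD id ("0", "1", "2") to the logical volume backing it, with the cluster and OSD fsids carried as LVM tags. A minimal sketch of consuming such a payload (assuming it has been captured to a string; the helper name is illustrative):

    import json

    def summarize_lvm_list(payload: str) -> None:
        """Print one line per OSD from `ceph-volume lvm list --format json` output."""
        for osd_id, lvs in sorted(json.loads(payload).items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"(devices={','.join(lv['devices'])}, "
                      f"osd_fsid={tags['ceph.osd_fsid']}, type={lv['type']})")

Run against the payload above, this yields osd.0 on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1, and osd.2 on /dev/ceph_vg2/ceph_lv2.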
Dec 05 02:00:23 compute-0 systemd[1]: libpod-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope: Deactivated successfully.
Dec 05 02:00:23 compute-0 podman[429494]: 2025-12-05 02:00:23.675749665 +0000 UTC m=+1.125572152 container died 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7-merged.mount: Deactivated successfully.
Dec 05 02:00:23 compute-0 podman[429494]: 2025-12-05 02:00:23.740723065 +0000 UTC m=+1.190545552 container remove 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:00:23 compute-0 systemd[1]: libpod-conmon-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope: Deactivated successfully.
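The create → init → start → attach → died → remove sequence for container 9627cb42… is the normal lifecycle of a one-shot container run: cephadm launches a throwaway ceph image, captures its stdout (the JSON above), and the container plus its conmon scope are torn down as soon as the process exits. A rough equivalent from Python (the image digest is copied from the log; the command and lack of bind mounts are illustrative, since cephadm passes many more options):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm removes the container immediately after the entrypoint exits, which
    # produces exactly the create/start/attach/died/remove journal events above.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    print(result.stdout)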
Dec 05 02:00:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:23 compute-0 sudo[429391]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:23 compute-0 podman[429532]: 2025-12-05 02:00:23.840347585 +0000 UTC m=+0.081219156 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:00:23 compute-0 podman[429530]: 2025-12-05 02:00:23.841922039 +0000 UTC m=+0.096322358 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 02:00:23 compute-0 podman[429531]: 2025-12-05 02:00:23.864223124 +0000 UTC m=+0.118620183 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public)
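The `health_status=healthy` events above come from podman's built-in healthchecks: each edpm-managed container carries a `healthcheck` entry in its config_data (e.g. `/openstack/healthcheck node_exporter`), which podman runs periodically inside the container (ovn_controller's check appears a few lines below). The same probe can be driven by hand with `podman healthcheck run`; exit status 0 means healthy (a sketch, using the container names from the log):

    import subprocess

    for name in ("node_exporter", "multipathd", "openstack_network_exporter"):
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True).returncode
        status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
        print(f"{name}: {status}")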
Dec 05 02:00:23 compute-0 sudo[429568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:23 compute-0 sudo[429568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:23 compute-0 sudo[429568]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:23 compute-0 podman[429533]: 2025-12-05 02:00:23.90656948 +0000 UTC m=+0.150936848 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 05 02:00:23 compute-0 sudo[429639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:00:23 compute-0 sudo[429639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:23 compute-0 sudo[429639]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:24 compute-0 sudo[429664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:24 compute-0 sudo[429664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:24 compute-0 sudo[429664]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:24 compute-0 sudo[429689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:00:24 compute-0 sudo[429689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
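The sudo entry above shows how the cephadm mgr module gathers device inventory on this host: it keeps a digest-pinned copy of the `cephadm` script under /var/lib/ceph/<fsid>/ and runs it as root with a `ceph-volume … raw list --format json` subcommand, which in turn spawns the ceph containers seen next. The same call from Python (every argument is copied from the log line; the error handling around it is illustrative):

    import json
    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    print(f"{len(json.loads(out))} raw OSD device(s) found")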
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.695079603 +0000 UTC m=+0.097662437 container create 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.65536337 +0000 UTC m=+0.057946234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:24 compute-0 systemd[1]: Started libpod-conmon-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope.
Dec 05 02:00:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:24 compute-0 ceph-mon[192914]: pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.860357851 +0000 UTC m=+0.262940715 container init 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.881816982 +0000 UTC m=+0.284399846 container start 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.888753687 +0000 UTC m=+0.291336541 container attach 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 02:00:24 compute-0 nervous_yalow[429768]: 167 167
Dec 05 02:00:24 compute-0 systemd[1]: libpod-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope: Deactivated successfully.
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.895523136 +0000 UTC m=+0.298106000 container died 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18f43591846e5ff7a23e9359dd6ec482ab1dbc8824a555a2c8168f39a3c126a-merged.mount: Deactivated successfully.
Dec 05 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.972342378 +0000 UTC m=+0.374925212 container remove 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:00:24 compute-0 systemd[1]: libpod-conmon-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope: Deactivated successfully.
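The single line `167 167` printed by the `nervous_yalow` container is most likely cephadm probing the uid/gid that the ceph daemons run as inside the image; 167:167 is the conventional ceph:ceph user and group in Ceph container images, and cephadm needs the numeric ids so files it creates on the host match the containerized daemons' ownership. The equivalent check against the cluster's data directory (a sketch; the path follows the standard cephadm layout rather than being quoted from this log):

    import os

    # Report the numeric owner of the cluster data dir; expect 167 167 (ceph:ceph).
    st = os.stat("/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee")
    print(st.st_uid, st.st_gid)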
Dec 05 02:00:25 compute-0 nova_compute[349548]: 2025-12-05 02:00:25.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:25 compute-0 nova_compute[349548]: 2025-12-05 02:00:25.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.207423601 +0000 UTC m=+0.084599130 container create ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.16058813 +0000 UTC m=+0.037763679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:00:25 compute-0 systemd[1]: Started libpod-conmon-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope.
Dec 05 02:00:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.381012093 +0000 UTC m=+0.258187652 container init ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.397783752 +0000 UTC m=+0.274959281 container start ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.402969288 +0000 UTC m=+0.280144837 container attach ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 02:00:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]: {
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_id": 0,
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "type": "bluestore"
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     },
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_id": 1,
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "type": "bluestore"
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     },
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_id": 2,
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:         "type": "bluestore"
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]:     }
Dec 05 02:00:26 compute-0 goofy_hamilton[429807]: }
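This second JSON document is the result of the `ceph-volume raw list --format json` call requested above: keyed by OSD uuid rather than OSD id, and reporting the device-mapper path and objectstore type for each bluestore device. A sketch of flattening it for a cross-check against the earlier `lvm list` output (the function name is illustrative):

    import json

    def raw_by_osd_id(payload: str) -> dict:
        """Map osd_id -> device path from `ceph-volume raw list --format json`."""
        return {entry["osd_id"]: entry["device"]
                for entry in json.loads(payload).values()
                if entry["type"] == "bluestore"}

    # For the payload above:
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  2: '/dev/mapper/ceph_vg2-ceph_lv2'}

The osd_id/device pairs agree with the LVM tags reported earlier, confirming the two inventories describe the same three OSDs.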
Dec 05 02:00:26 compute-0 systemd[1]: libpod-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Deactivated successfully.
Dec 05 02:00:26 compute-0 podman[429791]: 2025-12-05 02:00:26.562742407 +0000 UTC m=+1.439917956 container died ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:00:26 compute-0 systemd[1]: libpod-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Consumed 1.166s CPU time.
Dec 05 02:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4-merged.mount: Deactivated successfully.
Dec 05 02:00:26 compute-0 podman[429791]: 2025-12-05 02:00:26.647775919 +0000 UTC m=+1.524951448 container remove ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:00:26 compute-0 systemd[1]: libpod-conmon-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Deactivated successfully.
Dec 05 02:00:26 compute-0 sudo[429689]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:00:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:00:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1b469a19-6aa1-4466-93e5-fe6258fe0b30 does not exist
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b8e6de92-cba1-4dfc-be5b-1ae67dac1842 does not exist
Dec 05 02:00:26 compute-0 nova_compute[349548]: 2025-12-05 02:00:26.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:26 compute-0 sudo[429851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:00:26 compute-0 sudo[429851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:26 compute-0 sudo[429851]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016572374365110374 of space, bias 1.0, pg target 0.4971712309533112 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
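Each pg_autoscaler pair of lines is one pool evaluation: `pg target` is the pool's share of raw capacity multiplied by its bias and by the cluster-wide PG budget (mon_target_pg_per_osd, default 100, times the OSD count, 3 here), and the trailing 64411926528 in each effective_target_ratio line is the raw capacity in bytes (about 60 GiB, matching the pgmap lines). The target is then quantized to a power of two, subject to pool minimums and a change threshold, which is why these near-zero targets leave pg_num unchanged. Reproducing the arithmetic (ratios and biases copied from the log; the 100 x 3 budget is inferred from them):

    # pg_target = capacity_ratio * bias * (mon_target_pg_per_osd * num_osds)
    PG_BUDGET = 100 * 3  # default mon_target_pg_per_osd=100, 3 OSDs on this node

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0016572374365110374, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")
    # .mgr:               pg target 0.0021557249951162337  (matches the log)
    # vms:                pg target 0.4971712309533112     (matches the log)
    # cephfs.cephfs.meta: pg target 0.0006104707950771635  (matches the log)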
Dec 05 02:00:26 compute-0 ceph-mon[192914]: pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:00:26 compute-0 sudo[429876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:00:26 compute-0 sudo[429876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:00:26 compute-0 sudo[429876]: pam_unix(sudo:session): session closed for user root
Dec 05 02:00:27 compute-0 nova_compute[349548]: 2025-12-05 02:00:27.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:27 compute-0 nova_compute[349548]: 2025-12-05 02:00:27.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.956 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.957 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.958 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:00:28 compute-0 ceph-mon[192914]: pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:29 compute-0 podman[158197]: time="2025-12-05T02:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:00:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:00:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
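The two `podman[158197]` entries are access-log lines from the podman system service: a Go client (Go-http-client/1.1) lists all containers and then fetches one-shot stats via the libpod REST endpoints. The same endpoint can be queried over the API socket with nothing but the standard library (a sketch; the socket path is the usual rootful default and is an assumption, since the log does not show it):

    import json
    import socket

    SOCK = "/run/podman/podman.sock"  # assumed rootful default

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        # HTTP/1.0 keeps the response unchunked and closes the connection for us.
        s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                  b"Host: localhost\r\n\r\n")
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    body = raw.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")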
Dec 05 02:00:30 compute-0 ceph-mon[192914]: pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.213 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.231 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.232 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
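`_heal_instance_info_cache` picks one instance per run and rebuilds its cached network_info from Neutron; the blob logged above is that cache entry for instance 7cc97c2c…: a single OVS port on br-int with fixed IP 192.168.0.25 and floating IP 192.168.122.236. Pulling the addresses out of such an entry (a sketch; `network_info` stands for the parsed JSON list shown above):

    def instance_addresses(network_info: list) -> list:
        """Collect (kind, address) pairs from a Nova network_info cache entry."""
        found = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    found.append((ip["type"], ip["address"]))
                    for fip in ip.get("floating_ips", []):
                        found.append((fip["type"], fip["address"]))
        return found

    # For the entry above:
    # [('fixed', '192.168.0.25'), ('floating', '192.168.122.236')]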
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.233 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.234 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.268 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.269 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.270 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.270 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.271 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server

Dec 05 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:00:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86507028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:00:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.788 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
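
The resource tracker sizes its RBD-backed disk pool by shelling out to the command logged above. A standalone sketch of the same call, assuming the ceph CLI, /etc/ceph/ceph.conf and a client.openstack keyring are present as on this node:

    # Standalone sketch of the "ceph df --format=json" probe.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # total_avail_bytes is what feeds the free-disk figure reported later.
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
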
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.881 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.882 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.882 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.888 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.889 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.889 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.895 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.896 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.896 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
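
The repeated "skipping disk" lines come from walking each guest's libvirt domain XML: RBD-backed <disk> elements carry a network <source>, not a local file path, so there is nothing to stat on the hypervisor. A minimal sketch of that filter (the XML below is illustrative):

    # Skip <disk> devices without a file path, as the driver does above.
    from xml.etree import ElementTree

    DOMAIN_XML = """
    <domain>
      <devices>
        <disk type='network'><source name='vms/instance-00000003_disk'/></disk>
        <disk type='file'><source file='/var/lib/nova/instances/x/disk.config'/></disk>
      </devices>
    </domain>
    """

    for disk in ElementTree.fromstring(DOMAIN_XML).iter('disk'):
        path = disk.find('source').get('file')
        if path is None:
            print('skipping disk as it does not have a path')
        else:
            print('measuring', path)
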
Dec 05 02:00:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/86507028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.371 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
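
The warning above fires when a libvirt-reported NUMA cell spans more than one physical socket; in that case Nova cannot honor the `socket` PCI NUMA affinity policy. A hedged sketch of the underlying check (the topology numbers are illustrative, not read from this host):

    # "socket" PCI NUMA affinity is only reportable when each NUMA node
    # maps to at most one physical socket.
    def socket_affinity_supported(numa_node_to_sockets):
        return all(len(socks) <= 1 for socks in numa_node_to_sockets.values())

    # One NUMA node exposing CPUs from two sockets, as this VM appears to:
    print(socket_affinity_supported({0: {0, 1}}))  # False -> warning above
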
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.88886642456055GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.527 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.527 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
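
The final view reconciles with the three m1.small guests listed just above (1 vCPU, 512 MB RAM, 2 GB of placement DISK_GB each) plus the 512 MB host memory reservation visible in the inventory that follows; a quick check:

    # Worked arithmetic behind the final resource view.
    instances = 3
    used_ram = 512 + instances * 512   # host reservation + guests = 2048 MB
    used_disk = instances * 2          # 6 GB
    free_vcpus = 8 - instances * 1     # 5
    print(used_ram, used_disk, free_vcpus)
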
Dec 05 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.721 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:00:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:33 compute-0 ceph-mon[192914]: pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:00:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176152989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.255 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.265 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.283 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
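
Placement treats each resource class's usable capacity as (total - reserved) * allocation_ratio; applying that formula to the inventory logged above:

    # Effective schedulable capacity per resource class.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
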
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.313 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.314 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.316 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.316 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.379 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3176152989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:00:35 compute-0 ceph-mon[192914]: pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.164 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.165 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.753 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:37 compute-0 nova_compute[349548]: 2025-12-05 02:00:37.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:00:37 compute-0 ceph-mon[192914]: pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.319 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.320 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
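
The two manager lines above describe the executor model: every pollster in the [pollsters] source is pushed onto a single-worker thread pool, so they run serially. A self-contained sketch of why the cycle stretches (names and timings are illustrative):

    # With one worker thread and many pollsters, tasks queue and serialize.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:
        start = time.monotonic()
        list(executor.map(poll, [f'pollster-{i}' for i in range(5)]))
        print(f'cycle took {time.monotonic() - start:.1f}s')  # ~0.5s, serialized
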
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.348 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.355 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.355 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:00:38.356446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:00:38.360169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 nova_compute[349548]: 2025-12-05 02:00:38.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.387 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.387 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.388 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.417 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.418 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.418 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.443 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.443 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.444 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
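
The capacity samples above decode directly: the flavor gives each guest a 1 GB root and a 1 GB ephemeral disk, reported in bytes, and the small third device is most likely a config drive (an assumption; the device name is not logged here):

    # Decoding the per-device capacity samples.
    print(1 * 2**30)      # 1073741824 -> 1 GiB root disk
    print(1 * 2**30)      # 1073741824 -> 1 GiB ephemeral disk
    print(583680 / 1024)  # 570.0 KiB  -> config-drive-sized device (assumed)
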
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.446 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:00:38.446223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:00:38.448234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.497 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.498 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.498 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.575 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.641 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.641 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.642 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
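
disk.device.read.bytes is a cumulative counter per device; a downstream consumer derives a rate by differencing two successive polls. A tiny sketch with illustrative numbers:

    # Cumulative counter -> rate over one polling interval.
    def to_rate(prev_bytes, curr_bytes, interval_s):
        return (curr_bytes - prev_bytes) / interval_s

    print(to_rate(23308800, 23820800, 300.0))  # bytes/s over a 5-minute cycle
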
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.646 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.647 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:00:38.645576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.648 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.649 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.650 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.651 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.651 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.652 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.653 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:00:38.657972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.659 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.660 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.661 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.662 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.662 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.663 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.664 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.665 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.668 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:00:38.669869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.671 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.672 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.673 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.676 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.678 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
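disk.device.usage reports allocated bytes per device: the repeated 1073741824 values are exactly 1 GiB volumes, while 583680 and 485376 look like small config-drive images. libvirt exposes per-device capacity, allocation, and physical size through blockInfo; a sketch under the same assumptions as above:

    import libvirt

    dom = libvirt.openReadOnly('qemu:///system').lookupByUUIDString(
        '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5')
    # blockInfo returns [capacity, allocation, physical], all in bytes.
    capacity, allocation, physical = dom.blockInfo('vda')  # hypothetical device
    print('capacity:', capacity, 'allocation:', allocation)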
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:00:38.681330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:00:38.685427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.712 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.748 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.776 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
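All three instances report power.state volume 1, which matches libvirt's VIR_DOMAIN_RUNNING constant; the gaps between these samples (02:00:38.712, .748, .776) reflect the per-domain state queries. A sketch of reading the state directly:

    import libvirt

    dom = libvirt.openReadOnly('qemu:///system').lookupByUUIDString(
        '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5')
    state, reason = dom.state()
    # libvirt.VIR_DOMAIN_RUNNING == 1; other values include
    # VIR_DOMAIN_PAUSED (3) and VIR_DOMAIN_SHUTOFF (5).
    print('running' if state == libvirt.VIR_DOMAIN_RUNNING else state)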
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.780 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.783 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:00:38.782977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.784 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.785 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.786 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.787 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.788 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.789 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.790 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.791 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.792 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
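The disk.device.write.latency volumes (e.g. 7184458071) are cumulative nanoseconds spent servicing writes, not per-request latencies; dividing the delta of this counter by the delta of write requests over an interval gives a mean latency. libvirt reports the counter through the extended block-stats call; a sketch assuming the 'wr_total_times' key that QEMU/virtio devices typically provide:

    import libvirt

    dom = libvirt.openReadOnly('qemu:///system').lookupByUUIDString(
        '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5')
    stats = dom.blockStatsFlags('vda', 0)  # hypothetical device name
    wr_total_ns = stats.get('wr_total_times', 0)  # cumulative ns spent on writes
    wr_ops = stats.get('wr_operations', 0)
    print('avg ns per write so far:', wr_total_ns / wr_ops if wr_ops else 0)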
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.795 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.797 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:00:38.796964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.798 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.799 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.800 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.800 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.801 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.801 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.802 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.802 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.803 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:00:38.806327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.811 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.815 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.820 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
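network.incoming.packets is read per vNIC from the host side; libvirt's interfaceStats returns an 8-tuple of cumulative counters for a tap device, and rx_packets is the value behind this meter. Sketch, with a hypothetical tap name (ceilometer resolves the real one from the domain XML):

    import libvirt

    dom = libvirt.openReadOnly('qemu:///system').lookupByUUIDString(
        'b69a0e24-1bc4-46a5-92d7-367c1efd53df')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('tap0')
    print('incoming packets:', rx_packets)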
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:00:38.822947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:00:38.826008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.828 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.829 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.829 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.831 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.831 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.834 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:00:38.833129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
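Each cycle logs the same coordination check, and with a group name of [None] this agent simply polls all of its local instances. When a coordination group is configured, agents divide the resource set over a consistent hash ring (ceilometer builds on the tooz library for this). A rough illustration of the partitioning idea using tooz's hashring module, with hypothetical member names:

    from tooz import hashring

    # Two polling agents sharing one resource set.
    ring = hashring.HashRing(['compute-0', 'compute-1'])
    owners = ring.get_nodes(b'7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5')
    # An agent polls an instance only if it owns the key on the ring.
    print('compute-0 polls it:', 'compute-0' in owners)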
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:00:38.835843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:00:38.838495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
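The *.delta meters publish the change since the previous poll rather than the raw cumulative counter, which is why the volumes here are small values like 70 and 0. The general pattern is a per-resource cache of the last cumulative reading; a minimal illustration (hypothetical helper, not ceilometer's actual cache implementation):

    _last = {}  # (resource_id, meter) -> previous cumulative value

    def delta(resource_id, meter, cumulative):
        """Change since the previous poll; a counter reset yields 0."""
        key = (resource_id, meter)
        prev = _last.get(key)
        _last[key] = cumulative
        if prev is None or cumulative < prev:
            return 0
        return cumulative - prev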
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:00:38.841778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
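The fractional memory.usage volumes (49.02734375 MB and so on) come from converting KiB-granular balloon statistics to MB. libvirt's memoryStats returns a dict of KiB values; a common derivation, assuming the 'available' and 'unused' keys that guests with the balloon driver report, with a host-side RSS fallback:

    import libvirt

    dom = libvirt.openReadOnly('qemu:///system').lookupByUUIDString(
        '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5')
    mstats = dom.memoryStats()  # all values in KiB
    if 'available' in mstats and 'unused' in mstats:
        used_mb = (mstats['available'] - mstats['unused']) / 1024.0
    else:
        used_mb = mstats.get('rss', 0) / 1024.0  # fallback: host-side RSS
    print('memory.usage ~', used_mb, 'MB')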
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.845 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.846 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:00:38.844800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:00:38.847940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:00:38.849974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 39700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:00:38.852235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 46130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 39710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
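The cpu volumes logged just above are cumulative guest CPU time in nanoseconds (39700000000 ns is roughly 39.7 s), so a utilization figure only falls out once two successive polls are differenced. A minimal sketch of that arithmetic, assuming a 10-second polling interval and pairing two of the logged values purely for illustration (in the log they belong to different instances); this is not ceilometer's actual rate transformation code:

# Hypothetical helper: CPU utilization from two cumulative "cpu" samples.
def cpu_util_percent(prev_ns: int, curr_ns: int,
                     interval_s: float, vcpus: int = 1) -> float:
    """Percent of available CPU consumed between two polls."""
    used_s = (curr_ns - prev_ns) / 1e9          # ns -> seconds of CPU time
    return 100.0 * used_s / (interval_s * vcpus)

# Two cumulative readings 10 s apart (values borrowed from the log above):
print(cpu_util_percent(39_700_000_000, 39_710_000_000, interval_s=10.0))  # 0.1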
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:00:38.855181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:00:38.857128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
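Taken together, the DEBUG lines above trace one complete polling iteration per meter: discovery via [local_instances], a coordination check that is skipped because no hashring is configured, one sample per instance, a heartbeat update, and a closing "Finished processing pollster" line. A rough, self-contained sketch of that control flow; every name here is illustrative, the real logic lives in ceilometer/polling/manager.py as the file paths in the log show:

# Illustrative sketch of the polling cycle walked through above. Not
# ceilometer's internal API; FakePollster and the volumes are made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakePollster:
    name: str
    coordination_group: Optional[str] = None

    def get_samples(self, resources):
        for res in resources:
            yield (res, self.name, 84)  # cf. "<uuid>/<meter> volume: 84"

def discover(method):
    # "Executing discovery process ... discovery method [local_instances]"
    assert method == "local_instances"
    return ["7cc97c2c", "b69a0e24", "3611d2ae"]  # instance IDs seen above

def run_polling_task(pollsters):
    for p in pollsters:
        resources = discover("local_instances")
        # "Checking if we need coordination ..." is False throughout this
        # log: the hashrings are [None], so no filtering happens here.
        if p.coordination_group is not None:
            resources = []  # a real agent would keep only its hashring share
        print(f"Pollster heartbeat update: {p.name}")
        for sample in p.get_samples(resources):
            print(sample)   # stands in for publishing the sample
        print(f"Finished polling pollster {p.name}")

run_polling_task([FakePollster("network.incoming.bytes.delta")])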
Dec 05 02:00:39 compute-0 ceph-mon[192914]: pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
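ceph-mon and ceph-mgr repeat this same pgmap summary every couple of seconds for the remainder of the log. The line is regular enough to pull apart mechanically; a small sketch with a regex fitted only to the lines seen here (the state and capacity fields are free-form and vary across Ceph versions, so treat the pattern as an assumption):

# Sketch: parse the recurring pgmap summary line. Pattern fitted to the
# healthy-cluster lines above only; it is an assumption, not a stable format.
import re

PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+); "
                   r"(\d+ \w+) data, (\d+ \w+) used, (.+) avail")

line = ("pgmap v1537: 321 pgs: 321 active+clean; "
        "201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail")
version, total_pgs, state_count, state, data, used, avail = PGMAP.search(line).groups()
print(version, total_pgs, state, avail)  # 1537 321 active+clean 60 GiB / 60 GiB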
Dec 05 02:00:40 compute-0 podman[429947]: 2025-12-05 02:00:40.715352379 +0000 UTC m=+0.112631985 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:00:40 compute-0 podman[429946]: 2025-12-05 02:00:40.741251115 +0000 UTC m=+0.139743245 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
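The container health_status events above are emitted each time podman runs a container's configured healthcheck (the 'test': '/openstack/healthcheck ...' entries in config_data), with exit status mapped to health_status=healthy or unhealthy. The same check can be triggered by hand; a minimal sketch using the podman CLI, with the container name taken from the first event:

# Sketch: run the same healthcheck podman executes periodically. Exit code 0
# corresponds to health_status=healthy in the journal events above.
import subprocess

result = subprocess.run(["podman", "healthcheck", "run", "podman_exporter"])
print("healthy" if result.returncode == 0 else "unhealthy", result.returncode)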
Dec 05 02:00:41 compute-0 ceph-mon[192914]: pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:41 compute-0 nova_compute[349548]: 2025-12-05 02:00:41.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:42 compute-0 podman[429988]: 2025-12-05 02:00:42.725116405 +0000 UTC m=+0.117674917 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 02:00:42 compute-0 podman[429987]: 2025-12-05 02:00:42.731058111 +0000 UTC m=+0.129242270 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 05 02:00:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:43 compute-0 ceph-mon[192914]: pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:43 compute-0 nova_compute[349548]: 2025-12-05 02:00:43.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:45 compute-0 ceph-mon[192914]: pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:00:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:00:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:00:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:00:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:00:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:00:46 compute-0 podman[430025]: 2025-12-05 02:00:46.719601722 +0000 UTC m=+0.120245449 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Dec 05 02:00:46 compute-0 nova_compute[349548]: 2025-12-05 02:00:46.760 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:47 compute-0 ceph-mon[192914]: pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:48 compute-0 nova_compute[349548]: 2025-12-05 02:00:48.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:49 compute-0 ceph-mon[192914]: pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:51 compute-0 ceph-mon[192914]: pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:51 compute-0 nova_compute[349548]: 2025-12-05 02:00:51.765 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:52 compute-0 ceph-mon[192914]: pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:53 compute-0 nova_compute[349548]: 2025-12-05 02:00:53.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:54 compute-0 podman[430044]: 2025-12-05 02:00:54.717386153 +0000 UTC m=+0.118621743 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:00:54 compute-0 podman[430045]: 2025-12-05 02:00:54.74050702 +0000 UTC m=+0.131821052 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:00:54 compute-0 podman[430047]: 2025-12-05 02:00:54.761725045 +0000 UTC m=+0.136564466 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 02:00:54 compute-0 podman[430046]: 2025-12-05 02:00:54.774839322 +0000 UTC m=+0.160565768 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 02:00:54 compute-0 ceph-mon[192914]: pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.194 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:00:56 compute-0 nova_compute[349548]: 2025-12-05 02:00:56.767 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:56 compute-0 ceph-mon[192914]: pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:00:58 compute-0 nova_compute[349548]: 2025-12-05 02:00:58.397 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:00:58 compute-0 ceph-mon[192914]: pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:00:59 compute-0 podman[158197]: time="2025-12-05T02:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:00:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:00:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec 05 02:00:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:00 compute-0 ceph-mon[192914]: pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:01 compute-0 CROND[430129]: (root) CMD (run-parts /etc/cron.hourly)
Dec 05 02:01:01 compute-0 run-parts[430132]: (/etc/cron.hourly) starting 0anacron
Dec 05 02:01:01 compute-0 run-parts[430138]: (/etc/cron.hourly) finished 0anacron
Dec 05 02:01:01 compute-0 CROND[430128]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 05 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
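These exporter errors are lookups for control sockets that do not exist on this host: ovs-appctl style calls target a daemon through its <daemon>.<pid>.ctl socket in the run directory, ovn-northd normally runs on controller nodes rather than a compute node, and the PMD queries fail because no userspace (netdev) datapath is configured here. A small sketch of the existence check behind "no control socket files found"; the glob patterns are conventional default locations and an assumption for this host:

# Sketch: look for the per-daemon control sockets that appctl-style calls
# need. The run-directory patterns below are conventional defaults and an
# assumption; a containerized deployment may bind-mount them elsewhere.
import glob

for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                "/var/run/openvswitch/ovs-vswitchd.*.ctl",
                "/var/run/ovn/ovn-northd.*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "no control socket files found")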
Dec 05 02:01:01 compute-0 nova_compute[349548]: 2025-12-05 02:01:01.771 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:03 compute-0 ceph-mon[192914]: pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:03 compute-0 nova_compute[349548]: 2025-12-05 02:01:03.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:05 compute-0 ceph-mon[192914]: pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:06 compute-0 nova_compute[349548]: 2025-12-05 02:01:06.773 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:07 compute-0 ceph-mon[192914]: pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:08 compute-0 nova_compute[349548]: 2025-12-05 02:01:08.404 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:09 compute-0 ceph-mon[192914]: pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:11 compute-0 ceph-mon[192914]: pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:11 compute-0 podman[430140]: 2025-12-05 02:01:11.675446091 +0000 UTC m=+0.081806632 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:01:11 compute-0 podman[430139]: 2025-12-05 02:01:11.721364337 +0000 UTC m=+0.123895261 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:01:11 compute-0 nova_compute[349548]: 2025-12-05 02:01:11.776 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:13 compute-0 ceph-mon[192914]: pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:13 compute-0 nova_compute[349548]: 2025-12-05 02:01:13.408 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:13 compute-0 podman[430181]: 2025-12-05 02:01:13.688726003 +0000 UTC m=+0.104481167 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, tcib_managed=true)
Dec 05 02:01:13 compute-0 podman[430182]: 2025-12-05 02:01:13.720402361 +0000 UTC m=+0.118898181 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 02:01:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:15 compute-0 ceph-mon[192914]: pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.212557) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075212984, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3382859, "memory_usage": 3431120, "flush_reason": "Manual Compaction"}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075256454, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3316936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30123, "largest_seqno": 32164, "table_properties": {"data_size": 3307643, "index_size": 5851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18521, "raw_average_key_size": 20, "raw_value_size": 3289232, "raw_average_value_size": 3559, "num_data_blocks": 260, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899849, "oldest_key_time": 1764899849, "file_creation_time": 1764900075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 43973 microseconds, and 22118 cpu microseconds.
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.256523) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3316936 bytes OK
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.256561) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260573) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260714) EVENT_LOG_v1 {"time_micros": 1764900075260691, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3374338, prev total WAL file size 3374338, number of live WAL files 2.
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.263440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3239KB)], [68(7036KB)]
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075263551, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10522014, "oldest_snapshot_seqno": -1}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5338 keys, 8820868 bytes, temperature: kUnknown
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075497067, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8820868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8784770, "index_size": 21652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 133667, "raw_average_key_size": 25, "raw_value_size": 8687757, "raw_average_value_size": 1627, "num_data_blocks": 894, "num_entries": 5338, "num_filter_entries": 5338, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.497428) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8820868 bytes
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.502348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.0 rd, 37.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5852, records dropped: 514 output_compression: NoCompression
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.502416) EVENT_LOG_v1 {"time_micros": 1764900075502389, "job": 38, "event": "compaction_finished", "compaction_time_micros": 233628, "compaction_time_cpu_micros": 38860, "output_level": 6, "num_output_files": 1, "total_output_size": 8820868, "num_input_records": 5852, "num_output_records": 5338, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075504135, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075507564, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.263228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
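
The rocksdb burst above is ceph-mon trimming old paxos state: the bounds in the "Manual compaction from level-0 to level-6" line are hex-encoded store keys, and the six repeated "Manual compaction starting" entries are consistent with the monitor queueing one CompactRange per key prefix. The amplification and throughput figures printed for job 38 follow directly from the byte counts in the EVENT_LOG lines; both checks below use only numbers copied from this log:

    # decode the compaction bounds logged for job 38
    print(bytes.fromhex("7061786F730032373631"))  # b'paxos\x002761'
    print(bytes.fromhex("7061786F730033303133"))  # b'paxos\x003013'

    in_l0  = 3_316_936           # table #70, the freshly flushed L0 file
    in_l6  = 10_522_014 - in_l0  # input_data_size minus L0 -> table #68
    out_l6 = 8_820_868           # table #71, the compaction output
    micros = 233_628             # compaction_time_micros

    print(f"write-amplify      {out_l6 / in_l0:.1f}")                    # 2.7
    print(f"read-write-amplify {(in_l0 + in_l6 + out_l6) / in_l0:.1f}")  # 5.8
    print(f"rd MB/sec          {(in_l0 + in_l6) / micros:.1f}")          # 45.0
    print(f"wr MB/sec          {out_l6 / micros:.1f}")                   # 37.8
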
Dec 05 02:01:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:01:16
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images']
Dec 05 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:01:16 compute-0 nova_compute[349548]: 2025-12-05 02:01:16.779 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:17 compute-0 ceph-mon[192914]: pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:01:17 compute-0 podman[430218]: 2025-12-05 02:01:17.737310307 +0000 UTC m=+0.143295195 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git)
Dec 05 02:01:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:18 compute-0 nova_compute[349548]: 2025-12-05 02:01:18.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:19 compute-0 ceph-mon[192914]: pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:21 compute-0 ceph-mon[192914]: pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:21 compute-0 nova_compute[349548]: 2025-12-05 02:01:21.783 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:22 compute-0 ceph-mon[192914]: pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:23 compute-0 nova_compute[349548]: 2025-12-05 02:01:23.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:25 compute-0 ceph-mon[192914]: pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:25 compute-0 podman[430239]: 2025-12-05 02:01:25.707157536 +0000 UTC m=+0.103715048 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:01:25 compute-0 podman[430238]: 2025-12-05 02:01:25.732325421 +0000 UTC m=+0.136878878 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:01:25 compute-0 podman[430241]: 2025-12-05 02:01:25.746490368 +0000 UTC m=+0.134721597 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:01:25 compute-0 podman[430240]: 2025-12-05 02:01:25.761626522 +0000 UTC m=+0.153565565 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 02:01:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.087 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
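
_reclaim_queued_deletes is a no-op here because reclaim_instance_interval is at its default of 0, so deletes are immediate rather than soft. If deferred reclaim were wanted, the knob lives in nova.conf (the value below is illustrative):

    [DEFAULT]
    # keep SOFT_DELETED instances for an hour before this periodic
    # task reclaims them; <= 0 (the default) skips the task, as logged
    reclaim_instance_interval = 3600
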
Dec 05 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.787 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016572374365110374 of space, bias 1.0, pg target 0.4971712309533112 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
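
Every "pg target" the autoscaler prints above is capacity_ratio x bias x 300, where 300 is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster (inferred from the numbers; the log does not state it), and the result is then quantized to a power of two. A check using figures copied from the lines above:

    # pool: (capacity_ratio, bias, logged pg target)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.0016572374365110374, 1.0, 0.4971712309533112),
        "images":             (0.00025334537995702286, 1.0, 0.07600361398710685),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }
    for pool, (ratio, bias, logged) in pools.items():
        assert abs(ratio * bias * 300 - logged) < 1e-12, pool
    print("pg target == capacity_ratio * bias * 300 for all sampled pools")
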
Dec 05 02:01:27 compute-0 ceph-mon[192914]: pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:27 compute-0 sudo[430321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:27 compute-0 sudo[430321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:27 compute-0 sudo[430321]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:27 compute-0 sudo[430346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:01:27 compute-0 sudo[430346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:27 compute-0 sudo[430346]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:27 compute-0 sudo[430371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:27 compute-0 sudo[430371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:27 compute-0 sudo[430371]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:27 compute-0 sudo[430396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:01:27 compute-0 sudo[430396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.624 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.626 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.627 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.627 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.628 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.630 349552 INFO nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Terminating instance
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.632 349552 DEBUG nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
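
The two "Acquiring lock" pairs above show nova serializing the teardown: an outer lock on the instance UUID around do_terminate_instance, and a short-lived inner "<uuid>-events" lock while pending external events are cleared. A minimal sketch of that pattern with oslo.concurrency (structure only; the bodies are placeholders, not nova's code):

    from oslo_concurrency import lockutils

    INSTANCE = "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"

    def do_terminate_instance():
        # inner lock: mirrors the "<uuid>-events" acquire/release above
        with lockutils.lock(INSTANCE + "-events"):
            pass  # clear_events_for_instance
        # ... then destroy on the hypervisor and unplug VIFs, as logged
        # around this point

    # outer lock: mirrors 'Lock "<uuid>" acquired by ... do_terminate_instance'
    with lockutils.lock(INSTANCE):
        do_terminate_instance()
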
Dec 05 02:01:27 compute-0 kernel: tap4341bf52-6b (unregistering): left promiscuous mode
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.773 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00054|binding|INFO|Releasing lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 from this chassis (sb_readonly=0)
Dec 05 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00055|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 down in Southbound
Dec 05 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00056|binding|INFO|Removing iface tap4341bf52-6b ovn-installed in OVS
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.781 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:27 compute-0 NetworkManager[49092]: <info>  [1764900087.7854] device (tap4341bf52-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.786 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a7:22 192.168.0.25'], port_security=['fa:16:3e:68:a7:22 192.168.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:cidrs': '192.168.0.25/24', 'neutron:device_id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=4341bf52-6bd5-42ee-b25d-f3d9844af854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.788 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 4341bf52-6bd5-42ee-b25d-f3d9844af854 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.789 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.795 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.805 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[80f8e2b6-5c92-440c-a904-c4b343d17de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:01:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 05 02:01:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 44.564s CPU time.
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.847 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[f9756fee-271d-4ef9-93f1-bdf174755de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:01:27 compute-0 systemd-machined[138700]: Machine qemu-3-instance-00000003 terminated.
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.852 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[7768c2c0-6996-4a75-ac44-fd921107d33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.885 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e959aa-9434-420a-abc1-1cf65bd32f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.904 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d9758a03-219d-4f00-9d4b-9fd090774ee6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 39496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430450, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5b547bd8-202e-42d0-8fa9-a35128c929a2]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430451, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430451, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
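
The two large privsep replies above are netlink dumps taken inside the ovnmeta-49f7d2f1... namespace: an RTM_NEWLINK record for the metadata tap, and two RTM_NEWADDR records showing it holds 192.168.0.2/24 plus the well-known metadata address 169.254.169.254/32. The same data can be read directly with pyroute2 (a sketch; assumes pyroute2 is available and the namespace still exists):

    from pyroute2 import NetNS

    # namespace name taken from the 'target' field of the dumps above
    with NetNS("ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183") as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  addr["prefixlen"])
    # expected, per the RTM_NEWADDR records above:
    #   tap49f7d2f1-f1 192.168.0.2 24
    #   tap49f7d2f1-f1 169.254.169.254 32
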
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.922 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.924 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.931 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.932 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.932 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.933 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.063 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.071 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.082 349552 INFO nova.virt.libvirt.driver [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance destroyed successfully.
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.083 349552 DEBUG nova.objects.instance [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.095 349552 DEBUG nova.virt.libvirt.vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:53:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:53:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 05 02:01:28 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.095 349552 DEBUG nova.network.os_vif_util [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.096 349552 DEBUG nova.network.os_vif_util [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.096 349552 DEBUG os_vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.097 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.098 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4341bf52-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.102 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.103 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.106 349552 INFO os_vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b')
Dec 05 02:01:28 compute-0 sudo[430396]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:01:28 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 02:01:28.095 349552 DEBUG nova.virt.libvirt.vif [None req-5fa94621-3d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
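This is rsyslog error 2445: an incoming message of 8192 bytes exceeded the configured 8096-byte cap, and the quoted prefix identifies the victim as the 02:01:28.095 nova.virt.libvirt.vif DEBUG line, which is presumably why that entry appears beheaded at the top of this excerpt. The cap is rsyslog's maxMessageSize global (raised with, e.g., global(maxMessageSize="64k") near the top of rsyslog.conf). To enumerate every message that hit the limit, scanning the journal for this notice works; a minimal sketch, assuming the python3-systemd bindings are installed:

    from systemd import journal

    # Walk the journal and report every rsyslog "message too long" notice,
    # i.e. every point where an oversized message was truncated.
    reader = journal.Reader()
    reader.add_match(SYSLOG_IDENTIFIER="rsyslogd")
    for entry in reader:
        msg = entry.get("MESSAGE", "")
        if "message too long" in msg:
            # The notice quotes the beginning of the offending message,
            # which locates the truncated line elsewhere in the log.
            print(entry["__REALTIME_TIMESTAMP"], msg)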
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 71bc29b5-4049-4d6e-b918-93976ce33fca does not exist
Dec 05 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6c8af796-3fa1-425b-9e24-1b3325a9b73e does not exist
Dec 05 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 71654bf0-85c1-461d-a9be-e12534705c08 does not exist
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.267 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.267 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:01:28 compute-0 sudo[430495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:28 compute-0 sudo[430495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:28 compute-0 sudo[430495]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:28 compute-0 sudo[430520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:01:28 compute-0 sudo[430520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:28 compute-0 sudo[430520]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:28.530 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:01:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:28.530 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.533 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:28 compute-0 sudo[430545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:28 compute-0 sudo[430545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:28 compute-0 sudo[430545]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:28 compute-0 sudo[430571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:01:28 compute-0 sudo[430571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG nova.compute.manager [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG nova.compute.manager [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing instance network info cache due to event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.912 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.912 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:01:29 compute-0 ceph-mon[192914]: pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.163379512 +0000 UTC m=+0.076239438 container create e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.136070746 +0000 UTC m=+0.048930712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:29 compute-0 systemd[1]: Started libpod-conmon-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope.
Dec 05 02:01:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.291 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.293 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.293 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.31529815 +0000 UTC m=+0.228158126 container init e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.333607933 +0000 UTC m=+0.246467889 container start e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.340111285 +0000 UTC m=+0.252971311 container attach e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.343 349552 INFO nova.virt.libvirt.driver [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deleting instance files /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_del
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.345 349552 INFO nova.virt.libvirt.driver [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deletion of /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_del complete
Dec 05 02:01:29 compute-0 recursing_noyce[430651]: 167 167
Dec 05 02:01:29 compute-0 systemd[1]: libpod-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope: Deactivated successfully.
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.349660753 +0000 UTC m=+0.262520709 container died e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-174470297fb1cfddbbb44a99d6c09660504ef6d14ad74d192f25c843a33b2742-merged.mount: Deactivated successfully.
Dec 05 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.429024698 +0000 UTC m=+0.341884614 container remove e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.432 349552 INFO nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 1.80 seconds to destroy the instance on the hypervisor.
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.433 349552 DEBUG oslo.service.loopingcall [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.433 349552 DEBUG nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.434 349552 DEBUG nova.network.neutron [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:01:29 compute-0 systemd[1]: libpod-conmon-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope: Deactivated successfully.
Dec 05 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.704374886 +0000 UTC m=+0.080776745 container create b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:01:29 compute-0 podman[158197]: time="2025-12-05T02:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.670063404 +0000 UTC m=+0.046465303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:29 compute-0 systemd[1]: Started libpod-conmon-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope.
Dec 05 02:01:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.872538479 +0000 UTC m=+0.248940378 container init b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.900256916 +0000 UTC m=+0.276658775 container start b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.908005963 +0000 UTC m=+0.284407882 container attach b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:01:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45513 "" "Go-http-client/1.1"
Dec 05 02:01:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9044 "" "Go-http-client/1.1"
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.194 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated VIF entry in instance network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.194 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.221 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.453 349552 DEBUG nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.455 349552 DEBUG nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.455 349552 WARNING nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received unexpected event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with vm_state active and task_state deleting.
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.688 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.713 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.714 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.715 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.787 349552 DEBUG nova.network.neutron [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.808 349552 INFO nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 1.37 seconds to deallocate network for instance.
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.882 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.883 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.034 349552 DEBUG oslo_concurrency.processutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:31 compute-0 ceph-mon[192914]: pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:31 compute-0 magical_curie[430692]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:01:31 compute-0 magical_curie[430692]: --> relative data size: 1.0
Dec 05 02:01:31 compute-0 magical_curie[430692]: --> All data devices are unavailable
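These three container lines are the output of the "lvm batch" run launched by the sudo/cephadm invocation at 02:01:28 above: batch was handed three LVM data devices (/dev/ceph_vg0/ceph_lv0 through /dev/ceph_vg2/ceph_lv2) but reported them all unavailable, so nothing was deployed. That is typically the case when the LVs already carry OSDs or are otherwise rejected; ceph-volume's inventory subcommand reports per-device availability and rejection reasons. A sketch reusing the cephadm shim from the log (paths and fsid copied from the sudo line above; treat the exact invocation as illustrative, not authoritative):

    import json
    import subprocess

    # Ask ceph-volume, via the cephadm shim already used in this log, why
    # each device is or is not available for OSD deployment.
    CEPHADM = "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"

    out = subprocess.check_output(
        ["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "inventory", "--format", "json"],
        text=True,
    )
    # cephadm may prefix non-JSON progress output; keep from the first '['.
    start = out.find("[")
    for dev in (json.loads(out[start:]) if start >= 0 else []):
        status = "available" if dev.get("available") else dev.get("rejected_reasons", [])
        print(dev.get("path"), status)

The lvm list call dispatched a few lines below (02:01:31, again via sudo) is the follow-up cephadm uses to reconcile which OSDs already exist on those LVs.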
Dec 05 02:01:31 compute-0 systemd[1]: libpod-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Deactivated successfully.
Dec 05 02:01:31 compute-0 podman[430675]: 2025-12-05 02:01:31.260537713 +0000 UTC m=+1.636939562 container died b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:01:31 compute-0 systemd[1]: libpod-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Consumed 1.260s CPU time.
Dec 05 02:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785-merged.mount: Deactivated successfully.
Dec 05 02:01:31 compute-0 podman[430675]: 2025-12-05 02:01:31.340878185 +0000 UTC m=+1.717280004 container remove b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:01:31 compute-0 systemd[1]: libpod-conmon-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Deactivated successfully.
Dec 05 02:01:31 compute-0 sudo[430571]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 05 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:01:31 compute-0 sudo[430751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:31 compute-0 sudo[430751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:31 compute-0 sudo[430751]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:01:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587700047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.607 349552 DEBUG oslo_concurrency.processutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:01:31 compute-0 sudo[430776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:01:31 compute-0 sudo[430776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.622 349552 DEBUG nova.compute.provider_tree [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:01:31 compute-0 sudo[430776]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.637 349552 DEBUG nova.scheduler.client.report [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
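The inventory dict in the line above is what the resource tracker reports to Placement; the capacity Placement schedules against for each resource class is (total - reserved) * allocation_ratio. Worked out for the values logged here:

    # Effective schedulable capacity per the inventory dict logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2

So this node advertises 32 schedulable VCPUs (8 physical at a 4.0 overcommit), 7168 MB of RAM, and 52.2 GB of disk.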
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.655 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.658 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.658 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.659 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.659 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:01:31 compute-0 sudo[430803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.719 349552 INFO nova.scheduler.client.report [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5
Dec 05 02:01:31 compute-0 sudo[430803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:31 compute-0 sudo[430803]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.780 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.789 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:31 compute-0 sudo[430829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:01:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 192 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 26 op/s
Dec 05 02:01:31 compute-0 sudo[430829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:01:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145945582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.221 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.351 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.351 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.352 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
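The repeated "skipping disk" lines are nova walking every <disk> element of each instance's libvirt domain XML; rbd-backed disks are network devices with no local file path, so local disk accounting skips them (three disks per instance here, hence three lines each). A hedged illustration of that filter, using made-up XML rather than nova's actual code:

    import xml.etree.ElementTree as ET

    # Illustrative domain XML: one rbd (network) disk with no local path,
    # one file-backed disk with one.
    XML = """
    <domain><devices>
      <disk type='network' device='disk'>
        <source protocol='rbd' name='vms/instance-00000001_disk'/>
      </disk>
      <disk type='file' device='disk'>
        <source file='/var/lib/nova/instances/x/disk'/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(XML).iter("disk"):
        src = disk.find("source")
        path = src.get("file") if src is not None else None
        if not path:
            # Mirrors the "skipping disk ... does not have a path" message.
            print("skipping disk without a local path")
            continue
        print("would stat", path)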
Dec 05 02:01:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1587700047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.410371842 +0000 UTC m=+0.092648757 container create d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.364578329 +0000 UTC m=+0.046855344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:32 compute-0 systemd[1]: Started libpod-conmon-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope.
Dec 05 02:01:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.514195922 +0000 UTC m=+0.196472867 container init d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.52624318 +0000 UTC m=+0.208520095 container start d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.529992925 +0000 UTC m=+0.212269870 container attach d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:01:32 compute-0 romantic_mclean[430929]: 167 167
Dec 05 02:01:32 compute-0 systemd[1]: libpod-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope: Deactivated successfully.
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.536709754 +0000 UTC m=+0.218986719 container died d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfcb252b28be680e4621169c79931aacf230d38e6ab46cb979a8c967226b01b6-merged.mount: Deactivated successfully.
Dec 05 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.590856301 +0000 UTC m=+0.273133226 container remove d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:01:32 compute-0 systemd[1]: libpod-conmon-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope: Deactivated successfully.
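The create/init/start/attach then died/remove pairs above are one-shot containers: cephadm runs a short command inside the pinned Ceph image and removes the container as soon as it exits. The "167 167" printed by romantic_mclean looks like a uid/gid probe of the ceph user inside the image (an inference; the log does not show the probed command). A rough local equivalent:

    import subprocess

    # One-shot container, as in the log: run a short command in the pinned
    # image and remove the container on exit. Assumes podman and the image.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    res = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    )
    print(res.stdout.strip())   # expected "167 167": the ceph uid/gid (inferred)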
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.796 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.88886642456055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.798 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.801 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
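The acquire/release pairs around "compute_resources" come from oslo.concurrency; the usual idiom in nova is the synchronized decorator. A minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Decorator form of the acquire/release pairs logged above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource() -> None:
        print("holding compute_resources")

    update_available_resource()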
Dec 05 02:01:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:32 compute-0 podman[430952]: 2025-12-05 02:01:32.84376201 +0000 UTC m=+0.060194418 container create 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:01:32 compute-0 systemd[1]: Started libpod-conmon-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope.
Dec 05 02:01:32 compute-0 podman[430952]: 2025-12-05 02:01:32.822048631 +0000 UTC m=+0.038481079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.937 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:01:32 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
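The kernel's "timestamps until 2038 (0x7fffffff)" warnings mark XFS filesystems mounted without bigtime support: 0x7fffffff is the largest 32-bit signed time_t. The cutoff can be checked directly:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the epoch: the 32-bit time_t
    # ceiling the mount warning refers to.
    print(hex(2**31 - 1))                                       # 0x7fffffff
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00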
Dec 05 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.980 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.009692311 +0000 UTC m=+0.226124769 container init 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.042342996 +0000 UTC m=+0.258775414 container start 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.048136509 +0000 UTC m=+0.264568927 container attach 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:33 compute-0 ceph-mon[192914]: pgmap v1564: 321 pgs: 321 active+clean; 192 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 26 op/s
Dec 05 02:01:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3145945582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:01:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677156586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.493 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.504 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.521 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
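Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above yields 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. Worked out:

    # capacity = (total - reserved) * allocation_ratio, per resource class,
    # using the inventory from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2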
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.524 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.525 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:01:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:33 compute-0 exciting_curie[430967]: {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     "0": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "devices": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "/dev/loop3"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             ],
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_name": "ceph_lv0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_size": "21470642176",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "name": "ceph_lv0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "tags": {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_name": "ceph",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.crush_device_class": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.encrypted": "0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_id": "0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.vdo": "0"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             },
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "vg_name": "ceph_vg0"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         }
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     ],
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     "1": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "devices": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "/dev/loop4"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             ],
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_name": "ceph_lv1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_size": "21470642176",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "name": "ceph_lv1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "tags": {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_name": "ceph",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.crush_device_class": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.encrypted": "0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_id": "1",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.vdo": "0"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             },
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "vg_name": "ceph_vg1"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         }
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     ],
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     "2": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "devices": [
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "/dev/loop5"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             ],
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_name": "ceph_lv2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_size": "21470642176",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "name": "ceph_lv2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "tags": {
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.cluster_name": "ceph",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.crush_device_class": "",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.encrypted": "0",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osd_id": "2",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:                 "ceph.vdo": "0"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             },
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "type": "block",
Dec 05 02:01:33 compute-0 exciting_curie[430967]:             "vg_name": "ceph_vg2"
Dec 05 02:01:33 compute-0 exciting_curie[430967]:         }
Dec 05 02:01:33 compute-0 exciting_curie[430967]:     ]
Dec 05 02:01:33 compute-0 exciting_curie[430967]: }
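The JSON that exciting_curie just printed maps OSD ids to their logical volumes. A small sketch reducing it to an osd_id / lv_path / osd_fsid table, assuming the output was saved to a hypothetical lvm_list.json:

    import json

    # Hypothetical file holding the "lvm list" JSON printed above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Top-level keys are OSD ids; each maps to a list of LV records.
    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])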
Dec 05 02:01:33 compute-0 systemd[1]: libpod-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope: Deactivated successfully.
Dec 05 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.918479494 +0000 UTC m=+1.134911942 container died 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40-merged.mount: Deactivated successfully.
Dec 05 02:01:34 compute-0 podman[430952]: 2025-12-05 02:01:34.025669588 +0000 UTC m=+1.242101996 container remove 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:01:34 compute-0 systemd[1]: libpod-conmon-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope: Deactivated successfully.
Dec 05 02:01:34 compute-0 sudo[430829]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:34 compute-0 sudo[431010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:34 compute-0 sudo[431010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:34 compute-0 sudo[431010]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:34 compute-0 sudo[431035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:01:34 compute-0 sudo[431035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:34 compute-0 sudo[431035]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:34 compute-0 sudo[431060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:34 compute-0 sudo[431060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:34 compute-0 sudo[431060]: pam_unix(sudo:session): session closed for user root
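The /bin/true and "which python3" sudo bursts are cephadm preflights: confirm passwordless root works, then locate an interpreter, before shipping the next real command. A plausible local equivalent (the orchestrator actually drives this over SSH):

    import subprocess

    # Mirror of the preflight sequence in the sudo entries above.
    def preflight() -> str:
        subprocess.run(["sudo", "/bin/true"], check=True)      # passwordless root?
        which = subprocess.run(["sudo", "/bin/which", "python3"],
                               check=True, capture_output=True, text=True)
        return which.stdout.strip()                            # e.g. /usr/bin/python3

    print(preflight())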
Dec 05 02:01:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2677156586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:01:34 compute-0 ceph-mon[192914]: pgmap v1565: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:34 compute-0 sudo[431085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:01:34 compute-0 sudo[431085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.048719083 +0000 UTC m=+0.099559942 container create 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:34.997290051 +0000 UTC m=+0.048130990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:35 compute-0 systemd[1]: Started libpod-conmon-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope.
Dec 05 02:01:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.177112702 +0000 UTC m=+0.227953571 container init 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.187997257 +0000 UTC m=+0.238838116 container start 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.193038968 +0000 UTC m=+0.243879927 container attach 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:01:35 compute-0 quirky_liskov[431164]: 167 167
Dec 05 02:01:35 compute-0 systemd[1]: libpod-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope: Deactivated successfully.
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.196341701 +0000 UTC m=+0.247182560 container died 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-79deee6cc8ad23f24571e2c15434866f01a95fd0d7af108225bb7d8b7342235c-merged.mount: Deactivated successfully.
Dec 05 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.264465 +0000 UTC m=+0.315305859 container remove 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:01:35 compute-0 systemd[1]: libpod-conmon-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope: Deactivated successfully.
Dec 05 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.529936651 +0000 UTC m=+0.078110360 container create 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.498291744 +0000 UTC m=+0.046465503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:01:35 compute-0 systemd[1]: Started libpod-conmon-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope.
Dec 05 02:01:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.67402089 +0000 UTC m=+0.222194629 container init 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.696550081 +0000 UTC m=+0.244723800 container start 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.703431494 +0000 UTC m=+0.251605203 container attach 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:01:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:36 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:36.533 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:01:36 compute-0 nova_compute[349548]: 2025-12-05 02:01:36.792 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]: {
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_id": 0,
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "type": "bluestore"
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     },
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_id": 1,
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "type": "bluestore"
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     },
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_id": 2,
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:         "type": "bluestore"
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]:     }
Dec 05 02:01:36 compute-0 optimistic_cohen[431203]: }
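optimistic_cohen's "raw list" output is keyed by osd_uuid, while the earlier "lvm list" was keyed by osd_id; the two inventories should agree. A cross-check, assuming both JSON blobs were saved to the hypothetical files below:

    import json

    # Hypothetical files holding the two JSON blobs printed above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Invert "lvm list": osd_fsid -> osd_id (string keys in that output).
    by_uuid = {lv["tags"]["ceph.osd_fsid"]: osd_id
               for osd_id, lvs in lvm.items() for lv in lvs}
    for uuid, info in raw.items():
        assert by_uuid.get(uuid) == str(info["osd_id"]), uuid
        print(info["osd_id"], info["device"], info["type"])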
Dec 05 02:01:36 compute-0 ceph-mon[192914]: pgmap v1566: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:36 compute-0 systemd[1]: libpod-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Deactivated successfully.
Dec 05 02:01:36 compute-0 podman[431187]: 2025-12-05 02:01:36.906148086 +0000 UTC m=+1.454321775 container died 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:01:36 compute-0 systemd[1]: libpod-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Consumed 1.210s CPU time.
Dec 05 02:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9-merged.mount: Deactivated successfully.
Dec 05 02:01:37 compute-0 podman[431187]: 2025-12-05 02:01:37.110173424 +0000 UTC m=+1.658347153 container remove 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:01:37 compute-0 sudo[431085]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:37 compute-0 systemd[1]: libpod-conmon-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Deactivated successfully.
Dec 05 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:01:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:01:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d5a668e-eecf-4c2c-980b-51609e88500d does not exist
Dec 05 02:01:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c7a6f763-d5d2-4292-beb9-bd5debf483af does not exist
Dec 05 02:01:37 compute-0 sudo[431247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:01:37 compute-0 sudo[431247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:37 compute-0 sudo[431247]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:37 compute-0 sudo[431272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:01:37 compute-0 sudo[431272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:01:37 compute-0 sudo[431272]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.522 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.549 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.550 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:01:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:38 compute-0 nova_compute[349548]: 2025-12-05 02:01:38.104 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:01:39 compute-0 ceph-mon[192914]: pgmap v1567: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:41 compute-0 ceph-mon[192914]: pgmap v1568: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:41 compute-0 nova_compute[349548]: 2025-12-05 02:01:41.793 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:42 compute-0 podman[431299]: 2025-12-05 02:01:42.724541291 +0000 UTC m=+0.118427021 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:01:42 compute-0 podman[431298]: 2025-12-05 02:01:42.743120621 +0000 UTC m=+0.134817839 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 02:01:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.078 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900088.0759847, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.079 349552 INFO nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Stopped (Lifecycle Event)
Dec 05 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.110 349552 DEBUG nova.compute.manager [None req-75552ae3-7a45-41d0-bbc9-7974bced881a - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:01:43 compute-0 ceph-mon[192914]: pgmap v1569: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:01:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 767 B/s wr, 13 op/s
Dec 05 02:01:44 compute-0 podman[431337]: 2025-12-05 02:01:44.741729202 +0000 UTC m=+0.143067601 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:01:44 compute-0 podman[431338]: 2025-12-05 02:01:44.761447015 +0000 UTC m=+0.153938446 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 02:01:45 compute-0 ceph-mon[192914]: pgmap v1570: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 767 B/s wr, 13 op/s
Dec 05 02:01:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:01:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:01:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:01:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:01:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:01:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
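The handle_command/dispatch pairs above are the storage service polling cluster capacity and pool quotas as cephx user client.openstack. A minimal sketch of issuing the same df query through the python-rados binding, assuming the conf path and client id seen elsewhere in this log:

    import json
    import rados

    # Connect as the same cephx user the audit log shows dispatching "df".
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'df', 'format': 'json'})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(json.loads(out)['stats']['total_avail_bytes'])
    finally:
        cluster.shutdown()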
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:01:46 compute-0 nova_compute[349548]: 2025-12-05 02:01:46.797 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:47 compute-0 ceph-mon[192914]: pgmap v1571: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:47 compute-0 sshd-session[431376]: Accepted publickey for zuul from 38.102.83.179 port 39490 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 02:01:47 compute-0 systemd-logind[792]: New session 62 of user zuul.
Dec 05 02:01:47 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec 05 02:01:47 compute-0 sshd-session[431376]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:01:48 compute-0 podman[431378]: 2025-12-05 02:01:48.07921891 +0000 UTC m=+0.130156520 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, distribution-scope=public, name=ubi9, io.buildah.version=1.29.0, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:01:48 compute-0 nova_compute[349548]: 2025-12-05 02:01:48.109 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:48 compute-0 ceph-mon[192914]: pgmap v1572: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:48 compute-0 sudo[431573]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahpcslwdcestxxhncfdcfdumakkabmn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764900108.1210492-60622-15585018575329/AnsiballZ_command.py'
Dec 05 02:01:48 compute-0 sudo[431573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:01:49 compute-0 python3[431575]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
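The command above is a CI health probe: Ansible shells out to podman and greps for the ceilometer agent's status line. The equivalent check in Python, as a sketch (container name taken from the log; the subprocess form is an assumption):

    import subprocess

    # Same probe as the Ansible task: list all containers, filter by name.
    out = subprocess.run(
        ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
        capture_output=True, text=True, check=True).stdout
    print([line for line in out.splitlines()
           if 'ceilometer_agent_compute' in line])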
Dec 05 02:01:49 compute-0 sudo[431573]: pam_unix(sudo:session): session closed for user root
Dec 05 02:01:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:50 compute-0 ceph-mon[192914]: pgmap v1573: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:51 compute-0 nova_compute[349548]: 2025-12-05 02:01:51.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:52 compute-0 ceph-mon[192914]: pgmap v1574: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:53 compute-0 nova_compute[349548]: 2025-12-05 02:01:53.111 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:54 compute-0 ceph-mon[192914]: pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
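The Acquiring/acquired/"released" triple above is oslo.concurrency's lockutils tracing a critical section, including how long the caller waited for the lock and how long it held it. A minimal sketch of the same pattern, with the lock name taken from the log and an empty body standing in for the real child-process check:

    # Sketch only: lockutils.lock() is the context-manager form behind the
    # wrapper seen in the log; the body here is a placeholder.
    from oslo_concurrency import lockutils

    with lockutils.lock('_check_child_processes'):
        pass  # check the monitored child processes while holding the lock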
Dec 05 02:01:56 compute-0 podman[431616]: 2025-12-05 02:01:56.697375971 +0000 UTC m=+0.099533021 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 05 02:01:56 compute-0 podman[431617]: 2025-12-05 02:01:56.70413351 +0000 UTC m=+0.105864468 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:01:56 compute-0 podman[431619]: 2025-12-05 02:01:56.734937564 +0000 UTC m=+0.124891702 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 02:01:56 compute-0 podman[431618]: 2025-12-05 02:01:56.758683949 +0000 UTC m=+0.152413783 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 05 02:01:56 compute-0 nova_compute[349548]: 2025-12-05 02:01:56.801 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:56 compute-0 ceph-mon[192914]: pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:01:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 683 KiB/s wr, 6 op/s
Dec 05 02:01:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:01:58 compute-0 nova_compute[349548]: 2025-12-05 02:01:58.115 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:01:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 05 02:01:58 compute-0 ceph-mon[192914]: pgmap v1577: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 683 KiB/s wr, 6 op/s
Dec 05 02:01:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 05 02:01:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 05 02:01:59 compute-0 podman[158197]: time="2025-12-05T02:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:01:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:01:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
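The two GETs above are the prometheus-podman-exporter polling the libpod REST API over the socket mounted into its container earlier in this log. A sketch of the same container listing over the Unix socket; the UnixHTTPConnection helper is a hypothetical wrapper, while the socket path and /v4.9.3 prefix are taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c['Names'] for c in containers])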
Dec 05 02:01:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec 05 02:01:59 compute-0 ceph-mon[192914]: osdmap e127: 3 total, 3 up, 3 in
Dec 05 02:02:01 compute-0 ceph-mon[192914]: pgmap v1579: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec 05 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:02:01 compute-0 nova_compute[349548]: 2025-12-05 02:02:01.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec 05 02:02:02 compute-0 ovn_controller[89286]: 2025-12-05T02:02:02Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 02:02:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:03 compute-0 ceph-mon[192914]: pgmap v1580: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec 05 02:02:03 compute-0 nova_compute[349548]: 2025-12-05 02:02:03.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:05 compute-0 ceph-mon[192914]: pgmap v1581: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.417 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.418 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.436 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.518 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.518 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.528 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.529 349552 INFO nova.compute.claims [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.694 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.805 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:07 compute-0 ceph-mon[192914]: pgmap v1582: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:02:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571145534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.151 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.162 349552 DEBUG nova.compute.provider_tree [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.180 349552 DEBUG nova.scheduler.client.report [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
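The inventory record above fixes what placement will schedule on this node: for each resource class, usable capacity is (total - reserved) * allocation_ratio. A quick check of the figures from the log:

    # Worked example, numbers copied from the inventory line above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

So the 8-core host advertises 32 schedulable VCPUs under its 4.0 overcommit ratio.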
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.216 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.217 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.277 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.295 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.331 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.432 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.434 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.435 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating image(s)
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.488 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.548 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.591 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.598 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.599 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 773 KiB/s wr, 10 op/s
Dec 05 02:02:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.898 349552 DEBUG nova.virt.libvirt.imagebackend [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/2f1298d8-b7d4-43bf-b887-b91409888461/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/2f1298d8-b7d4-43bf-b887-b91409888461/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 02:02:08 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/571145534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:08 compute-0 nova_compute[349548]: 2025-12-05 02:02:08.122 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.039 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:09 compute-0 ceph-mon[192914]: pgmap v1583: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 773 KiB/s wr, 10 op/s
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.143 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.144 349552 DEBUG nova.virt.images [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] 2f1298d8-b7d4-43bf-b887-b91409888461 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.145 349552 DEBUG nova.privsep.utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.146 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.328 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.331 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.387 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.388 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
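The subprocess calls above are nova's fetch-to-raw path for the image cache: inspect the downloaded .part file, convert qcow2 to raw, then re-inspect the converted file before releasing the cache lock. A condensed sketch of the same flow, with the commands copied from the log and error handling omitted:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410'

    # Probe the downloaded image (same flags as the log, minus the prlimit wrapper).
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base + '.part', '--force-share', '--output=json']))
    if info['format'] == 'qcow2':
        # Convert with host caching disabled (-t none), as in the log.
        subprocess.check_call(
            ['qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
             base + '.part', base + '.converted'])

Nova wraps the info calls in oslo_concurrency.prlimit to cap address space and CPU time; that wrapper is dropped here for brevity.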
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.419 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.428 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.783 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 713 KiB/s wr, 9 op/s
Dec 05 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.903 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
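With the raw base file in the cache, the driver pushes it into the vms pool and then grows it to the flavor's root disk size. A sketch of the import step as run above (arguments verbatim from the log):

    import subprocess

    base = '/var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410'
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', base,
         'ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    # rbd_utils then resizes the image to 1073741824 bytes (1 GiB) through
    # librbd, per the "resizing rbd image" line above.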
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.110 349552 DEBUG nova.objects.instance [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.160 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.198 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.206 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.282 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.283 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.284 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.284 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.319 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.326 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.800 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.946 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.947 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Ensure instance console log exists: /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.947 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.948 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.948 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.949 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T02:01:53Z,direct_url=<?>,disk_format='qcow2',id=2f1298d8-b7d4-43bf-b887-b91409888461,min_disk=0,min_ram=0,name='fvt_testing_image',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T02:01:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '2f1298d8-b7d4-43bf-b887-b91409888461'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.958 349552 WARNING nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.968 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.969 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.974 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.975 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
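The probe above looks for a CPU controller first via cgroups v1 (missing on this host) and then via cgroups v2 (found); Nova uses the answer to decide whether CPU shares and quota can be applied to guests. On a cgroup-v2 host the same question can be answered by reading the unified hierarchy's controller list; a sketch, assuming the standard /sys/fs/cgroup mount:

def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
    # On a unified (cgroups v2) hierarchy this file lists the available
    # controllers, e.g. "cpuset cpu io memory ...". Nova's own probe lives in
    # nova.virt.libvirt.host; this is just the by-hand equivalent.
    try:
        with open(path) as f:
            return "cpu" in f.read().split()
    except FileNotFoundError:
        return False  # no unified hierarchy mounted here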
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.976 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.976 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:02:01Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d5a74f49-c758-455f-8cdb-cc9a5a969d77',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T02:01:53Z,direct_url=<?>,disk_format='qcow2',id=2f1298d8-b7d4-43bf-b887-b91409888461,min_disk=0,min_ram=0,name='fvt_testing_image',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T02:01:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.978 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.978 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.979 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.979 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.980 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.981 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.981 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
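The run from "Flavor limits 0:0:0" through "Sorted desired topologies" is nova.virt.hardware resolving a guest CPU topology: with no constraints from flavor extra_specs or image properties, the maxima default to 65536 for sockets, cores, and threads, and the only factorization of 1 vCPU is 1:1:1. A rough sketch of the enumeration idea (not Nova's exact algorithm, which also orders the candidates by preference):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Enumerate (sockets, cores, threads) factorizations of vcpus within the
    # maxima; for vcpus=1 under the logged 65536:65536:65536 limits the only
    # result is (1, 1, 1), matching "Got 1 possible topologies" above.
    return [(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus]

print(possible_topologies(1))   # [(1, 1, 1)]
print(possible_topologies(4))   # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...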
Dec 05 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.985 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:11 compute-0 ceph-mon[192914]: pgmap v1584: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 713 KiB/s wr, 9 op/s
Dec 05 02:02:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:02:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132399297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.496 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
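"ceph mon dump --format=json" is how the RBD backend discovers monitor endpoints; they resurface below as the <host name=... port=6789/> elements in every disk's XML. A stdlib sketch of fetching and parsing that output, with the caveat that the JSON keys ("mons", "public_addrs", "addrvec") follow current Ceph releases and may differ elsewhere:

import json
import subprocess

def monitor_addresses():
    raw = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(raw)
    addrs = []
    for mon in monmap.get("mons", []):
        for a in mon.get("public_addrs", {}).get("addrvec", []):
            if a.get("type") == "v1":   # the legacy 6789/tcp endpoint
                addrs.append(a["addr"])
    return addrs                         # e.g. ["192.168.122.100:6789/0"]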
Dec 05 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.498 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.808 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 164 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 20 op/s
Dec 05 02:02:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:02:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698465419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.985 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.028 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.036 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1132399297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3698465419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:02:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274756270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.531 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.533 349552 DEBUG nova.objects.instance [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.549 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <uuid>ee0bd3a4-b224-4dad-948c-1362bf56fea1</uuid>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <name>instance-00000005</name>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <memory>524288</memory>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:name>fvt_testing_server</nova:name>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:02:10</nova:creationTime>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:flavor name="fvt_testing_flavor">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:memory>512</nova:memory>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:ephemeral>1</nova:ephemeral>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="2f1298d8-b7d4-43bf-b887-b91409888461"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <nova:ports/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <system>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="serial">ee0bd3a4-b224-4dad-948c-1362bf56fea1</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="uuid">ee0bd3a4-b224-4dad-948c-1362bf56fea1</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </system>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <os>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </os>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <features>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </features>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </source>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </source>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <target dev="vdb" bus="virtio"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </source>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:02:12 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/console.log" append="off"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <video>
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </video>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:02:12 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:02:12 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:02:12 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:02:12 compute-0 nova_compute[349548]: </domain>
Dec 05 02:02:12 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
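The domain XML above is fully network-backed: root (vda), ephemeral (vdb), and config drive (sda) are all type="network" RBD disks in the vms pool, authenticated through the libvirt ceph secret, so only the console log touches the local filesystem. A small stdlib sketch for pulling the target-to-image mapping back out of such XML:

import xml.etree.ElementTree as ET

def rbd_disks(domain_xml):
    # Map target device -> RBD image for every rbd-protocol disk in the XML.
    root = ET.fromstring(domain_xml)
    out = {}
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            out[tgt.get("dev")] = src.get("name")
    return out

# Applied to the XML above this yields:
# {'vda': 'vms/ee0bd3a4-..._disk',
#  'vdb': 'vms/ee0bd3a4-..._disk.eph0',
#  'sda': 'vms/ee0bd3a4-..._disk.config'}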
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.624 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.625 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.626 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.627 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Using config drive
Dec 05 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.664 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.005 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating config drive at /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.016 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnatkm2o0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:13 compute-0 ceph-mon[192914]: pgmap v1585: 321 pgs: 321 active+clean; 164 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 20 op/s
Dec 05 02:02:13 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2274756270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.126 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.168 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnatkm2o0" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.220 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.228 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.462 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.463 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deleting local config drive /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config because it was imported into RBD.
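This is the config-drive sequence for an RBD-backed instance: build an ISO9660 volume labelled config-2 with mkisofs from a staging directory, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A minimal sketch of the ISO step, mirroring the logged flags; "staging" is a hypothetical directory already populated with the openstack/latest metadata tree:

import subprocess

def build_config_drive(iso_path, staging="staging"):
    # Mirrors the logged mkisofs invocation; the log passes the full
    # "OpenStack Compute <version>" string as the publisher.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",
        "-quiet", "-J", "-r",
        "-V", "config-2",      # the volume label cloud-init probes for
        staging])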
Dec 05 02:02:13 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 02:02:13 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 02:02:13 compute-0 podman[432172]: 2025-12-05 02:02:13.595960943 +0000 UTC m=+0.092663627 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:02:13 compute-0 podman[432171]: 2025-12-05 02:02:13.599271546 +0000 UTC m=+0.100410264 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:02:13 compute-0 systemd-machined[138700]: New machine qemu-5-instance-00000005.
Dec 05 02:02:13 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 05 02:02:13 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 02:02:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 02:02:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 186 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.707 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900134.7063084, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.708 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Resumed (Lifecycle Event)
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.716 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.717 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.730 349552 INFO nova.virt.libvirt.driver [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance spawned successfully.
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.736 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.762 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.772 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.801 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.801 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.802 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.803 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.804 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.804 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.808 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.809 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900134.7153697, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.809 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Started (Lifecycle Event)
Dec 05 02:02:14 compute-0 podman[432328]: 2025-12-05 02:02:14.918660538 +0000 UTC m=+0.126343022 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:02:14 compute-0 podman[432329]: 2025-12-05 02:02:14.944243795 +0000 UTC m=+0.128003409 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.952 349552 INFO nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 7.52 seconds to spawn the instance on the hypervisor.
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.952 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.969 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.982 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.023 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] During sync_power_state the instance has a pending task (spawning). Skip.
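Both lifecycle events (Resumed, then Started) hit the same guard: the DB still says power_state 0 (NOSTATE) with task_state "spawning", the hypervisor reports 1 (RUNNING), and while a task is pending the sync is skipped rather than letting the event race the build. A compressed sketch of that decision, with the numeric states taken from the log:

def maybe_sync_power_state(db_power_state, vm_power_state, task_state):
    # 0 = NOSTATE (what the DB holds mid-build), 1 = RUNNING (hypervisor view).
    if task_state is not None:
        return "skip: pending task %s" % task_state
    if db_power_state != vm_power_state:
        return "update DB %s -> %s" % (db_power_state, vm_power_state)
    return "in sync"

print(maybe_sync_power_state(0, 1, "spawning"))  # skip, as logged twice above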
Dec 05 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.058 349552 INFO nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 8.57 seconds to build instance.
Dec 05 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.082 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
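The summary lines (7.52 s to spawn on the hypervisor, 8.57 s to build end to end, build lock held 8.664 s) are the most useful trend signal in a log like this. A throwaway sketch for scraping them out of journal text:

import re

PAT = re.compile(
    r"\[instance: (?P<uuid>[0-9a-f-]+)\] Took (?P<secs>[\d.]+) seconds "
    r"to (?P<what>spawn the instance on the hypervisor|build instance)")

def build_timings(lines):
    for line in lines:
        m = PAT.search(line)
        if m:
            yield m.group("uuid"), m.group("what"), float(m.group("secs"))

# For this section: ('ee0bd3a4-...', 'spawn the instance on the hypervisor', 7.52)
#                   ('ee0bd3a4-...', 'build instance', 8.57)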
Dec 05 02:02:15 compute-0 ceph-mon[192914]: pgmap v1586: 321 pgs: 321 active+clean; 186 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec 05 02:02:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:02:16
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Dec 05 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
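The balancer pass (mode upmap, max misplaced 0.05) walked all eleven pools and prepared 0/10 changes, meaning the PG distribution is already as even as upmap can make it. The same status can be polled by hand; a sketch, with the caveat that the JSON layout of "ceph balancer status" varies across releases:

import json
import subprocess

def balancer_status():
    # "ceph balancer status" is a standard mgr command; run here assuming
    # admin credentials on the node.
    raw = subprocess.check_output(["ceph", "balancer", "status", "--format=json"])
    return json.loads(raw)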
Dec 05 02:02:16 compute-0 nova_compute[349548]: 2025-12-05 02:02:16.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:17 compute-0 ceph-mon[192914]: pgmap v1587: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:02:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec 05 02:02:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:18 compute-0 nova_compute[349548]: 2025-12-05 02:02:18.129 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:18 compute-0 podman[432364]: 2025-12-05 02:02:18.741653824 +0000 UTC m=+0.148974027 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public)
Dec 05 02:02:19 compute-0 ceph-mon[192914]: pgmap v1588: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec 05 02:02:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec 05 02:02:20 compute-0 ceph-mon[192914]: pgmap v1589: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec 05 02:02:21 compute-0 nova_compute[349548]: 2025-12-05 02:02:21.814 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Dec 05 02:02:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:22 compute-0 ceph-mon[192914]: pgmap v1590: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Dec 05 02:02:23 compute-0 nova_compute[349548]: 2025-12-05 02:02:23.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1007 KiB/s wr, 92 op/s
Dec 05 02:02:25 compute-0 ceph-mon[192914]: pgmap v1591: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1007 KiB/s wr, 92 op/s
Dec 05 02:02:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 61 op/s
Dec 05 02:02:26 compute-0 ceph-mon[192914]: pgmap v1592: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 61 op/s
Dec 05 02:02:26 compute-0 nova_compute[349548]: 2025-12-05 02:02:26.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013715879769811811 of space, bias 1.0, pg target 0.41147639309435435 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
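[editor's note] The pg_autoscaler targets above are reproducible from the same lines: each raw pg target is the pool's share of raw space times its bias times the root's PG budget. A budget of 300 PGs fits every line here (plausibly 3 OSDs x mon_target_pg_per_osd=100, an assumption); the daemon then quantizes to a power of two and applies pg_num_min and hysteresis rules, which this sketch omits:

    PG_BUDGET = 300  # assumption, consistent with all logged values

    pools = {  # name: (share of raw space, bias), copied from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0013715879769811811, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)
    # .mgr -> 0.0021557249951162337, vms -> 0.4114763930943543...,
    # cephfs.cephfs.meta -> 0.0006104707950771635: the logged values,
    # up to last-digit float rounding.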
Dec 05 02:02:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:02:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Cumulative writes: 7304 writes, 32K keys, 7304 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                            Cumulative WAL: 7304 writes, 7304 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1331 writes, 6016 keys, 1331 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s
                                            Interval WAL: 1331 writes, 1331 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.3      0.38              0.17        19    0.020       0      0       0.0       0.0
                                              L6      1/0    8.41 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    116.6     94.2      1.37              0.53        18    0.076     86K    10K       0.0       0.0
                                             Sum      1/0    8.41 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3     91.5     96.4      1.75              0.70        37    0.047     86K    10K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4     66.8     69.2      0.57              0.15         8    0.071     22K   2515       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    116.6     94.2      1.37              0.53        18    0.076     86K    10K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.37              0.17        18    0.021       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3000.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.038, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.16 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.7 seconds
                                            Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.6 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 19.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000169 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1275,18.96 MB,6.1552%) FilterBlock(38,245.17 KB,0.0777356%) IndexBlock(38,444.05 KB,0.140792%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 05 02:02:27 compute-0 nova_compute[349548]: 2025-12-05 02:02:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:27 compute-0 nova_compute[349548]: 2025-12-05 02:02:27.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
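[editor's note] The skip above is nova's guard for deferred deletes: _reclaim_queued_deletes only reclaims SOFT_DELETED instances when the operator opts in with a positive interval. The relevant knob, shown with this deployment's effective value (a sketch of the nova.conf stanza, not a verbatim copy of this host's file):

    [DEFAULT]
    # Seconds to keep soft-deleted instances before reclaiming them; a value
    # <= 0 disables the periodic task, hence the "skipping..." line above.
    reclaim_instance_interval = 0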
Dec 05 02:02:27 compute-0 podman[432388]: 2025-12-05 02:02:27.706202233 +0000 UTC m=+0.107710870 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec 05 02:02:27 compute-0 podman[432385]: 2025-12-05 02:02:27.710083732 +0000 UTC m=+0.126455325 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:02:27 compute-0 podman[432386]: 2025-12-05 02:02:27.71605914 +0000 UTC m=+0.125677764 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:02:27 compute-0 podman[432387]: 2025-12-05 02:02:27.74818797 +0000 UTC m=+0.141911418 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 02:02:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 52 op/s
Dec 05 02:02:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.854280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147854308, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 852, "num_deletes": 256, "total_data_size": 1097730, "memory_usage": 1124608, "flush_reason": "Manual Compaction"}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147862054, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1087339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32165, "largest_seqno": 33016, "table_properties": {"data_size": 1083024, "index_size": 1967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9440, "raw_average_key_size": 19, "raw_value_size": 1074294, "raw_average_value_size": 2179, "num_data_blocks": 88, "num_entries": 493, "num_filter_entries": 493, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900076, "oldest_key_time": 1764900076, "file_creation_time": 1764900147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7837 microseconds, and 3483 cpu microseconds.
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.862117) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1087339 bytes OK
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.862131) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863869) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863923) EVENT_LOG_v1 {"time_micros": 1764900147863876, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863939) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1093514, prev total WAL file size 1093514, number of live WAL files 2.
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.864803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1061KB)], [71(8614KB)]
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147864953, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9908207, "oldest_snapshot_seqno": -1}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5303 keys, 9805262 bytes, temperature: kUnknown
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147991377, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9805262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9767668, "index_size": 23212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 133855, "raw_average_key_size": 25, "raw_value_size": 9669658, "raw_average_value_size": 1823, "num_data_blocks": 958, "num_entries": 5303, "num_filter_entries": 5303, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.991687) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9805262 bytes
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.994509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.3 rd, 77.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(18.1) write-amplify(9.0) OK, records in: 5831, records dropped: 528 output_compression: NoCompression
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.994631) EVENT_LOG_v1 {"time_micros": 1764900147994617, "job": 40, "event": "compaction_finished", "compaction_time_micros": 126514, "compaction_time_cpu_micros": 46556, "output_level": 6, "num_output_files": 1, "total_output_size": 9805262, "num_input_records": 5831, "num_output_records": 5303, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147995235, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900148009742, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.864508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
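[editor's note] The amplification figures in the JOB 40 compaction summary follow from file sizes logged in the same sequence: inputs 73 (1,087,339 bytes, the fresh L0 flush from JOB 39) and 71 (logged as 8614KB), output table 74 (9,805,262 bytes). A quick check:

    l0_in = 1087339          # table #73, flushed by JOB 39
    l6_in = 8614 * 1024      # table #71, logged as 8614KB
    out   = 9805262          # table #74, the compaction output

    print(round(out / l0_in, 1))                    # 9.0  -> write-amplify(9.0)
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 18.1 -> read-write-amplify(18.1)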
Dec 05 02:02:28 compute-0 nova_compute[349548]: 2025-12-05 02:02:28.135 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:28 compute-0 ceph-mon[192914]: pgmap v1593: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 52 op/s
Dec 05 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:02:29 compute-0 podman[158197]: time="2025-12-05T02:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:02:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:02:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
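[editor's note] The two GET lines above are libpod REST calls arriving over the podman API socket served by podman[158197]; the Go-http-client agent suggests a metrics collector polling container state. A minimal sketch of issuing the same containers/json query by hand, using only the standard library; the socket path is an assumption (rootful default):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection routed over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")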
Dec 05 02:02:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.288 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.288 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.289 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.289 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.290 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
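[editor's note] The Acquiring/acquired/released triplets above, with their waited/held timings, are oslo.concurrency's standard lock logging. A minimal sketch of the pattern nova is using here, via the same library (the lock name is taken from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("ee0bd3a4-b224-4dad-948c-1362bf56fea1-events")
    def _clear_events():
        pass  # placeholder for the work done while holding the lock

    # The decorator's wrapper emits the "acquired by ... waited" and
    # "released by ... held" pairs (lockutils.py:404/409/423 in the log);
    # the context-manager form, lockutils.lock(name), produces the plain
    # "Acquiring lock" / "Acquired lock" / "Releasing lock" lines seen at
    # lockutils.py:312/315/333.
    _clear_events()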
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.292 349552 INFO nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Terminating instance
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.295 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.296 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.297 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:02:30 compute-0 ceph-mon[192914]: pgmap v1594: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec 05 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.171 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
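[editor's note] The exporter errors above are expected on a compute-only node: the collector probes daemons through their appctl control sockets, but ovn-northd runs on the controllers, and the pmd-rxq/pmd-perf queries only apply to a userspace (dpif-netdev/DPDK) datapath, which this kernel-datapath host does not have. A sketch of the same socket probe; the glob path is an assumption based on the container's /run/ovn volume mount:

    import glob
    import subprocess

    sockets = glob.glob("/var/run/ovn/ovn-northd.*.ctl")
    if not sockets:
        # Same condition as the exporter's error: the daemon isn't running here.
        print("no control socket files found for ovn-northd")
    else:
        subprocess.run(["ovs-appctl", "-t", sockets[0], "version"], check=True)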
Dec 05 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.736 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.760 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.762 349552 DEBUG nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.820 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec 05 02:02:31 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 05 02:02:31 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 18.707s CPU time.
Dec 05 02:02:31 compute-0 systemd-machined[138700]: Machine qemu-5-instance-00000005 terminated.
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.000 349552 INFO nova.virt.libvirt.driver [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance destroyed successfully.
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.001 349552 DEBUG nova.objects.instance [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.174 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.192 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.193 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
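[editor's note] The network_info blob cached above is a nested VIF structure. A sketch pulling out the addresses tracked for instance b69a0e24; the dict literal is trimmed from the log line itself:

    network_info = [{
        "id": "68143c81-65a4-4ed0-8902-dbe0c8d89224",
        "address": "fa:16:3e:0c:12:24",
        "network": {
            "id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183",
            "label": "private",
            "subnets": [{
                "cidr": "192.168.0.0/24",
                "ips": [{
                    "address": "192.168.0.48",
                    "floating_ips": [{"address": "192.168.122.212"}],
                }],
            }],
        },
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], floats)
    # fa:16:3e:0c:12:24 192.168.0.48 ['192.168.122.212']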
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:32 compute-0 ceph-mon[192914]: pgmap v1595: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.138 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.307 349552 INFO nova.virt.libvirt.driver [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deleting instance files /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1_del
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.308 349552 INFO nova.virt.libvirt.driver [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deletion of /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1_del complete
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.532 349552 INFO nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 1.77 seconds to destroy the instance on the hypervisor.
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG oslo.service.loopingcall [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:02:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:02:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862283307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.603 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
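[editor's note] The 0.498 s round trip above is nova's libvirt driver sizing its RBD storage by shelling out to ceph df; the mon's audit lines confirm the dispatch as entity='client.openstack'. A sketch that runs the same command and extracts the cluster totals the resource tracker cares about; the key names follow ceph df --format=json output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.0f} GiB,"
          f" avail {stats['total_avail_bytes'] / gib:.0f} GiB")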
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.712 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.712 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.713 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:02:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 12 op/s
Dec 05 02:02:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/862283307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.047 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.064 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.076 349552 INFO nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 0.54 seconds to deallocate network for instance.
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.121 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.122 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.176 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.178 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.906002044677734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.178 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.272 349552 DEBUG oslo_concurrency.processutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:02:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002993508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.832 349552 DEBUG oslo_concurrency.processutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
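The nova_compute lines above show the resource tracker shelling out to "ceph df" to size the RBD backend before refreshing inventory. Below is a minimal standalone sketch of the same probe, reusing the exact command from the log (client id "openstack", conf "/etc/ceph/ceph.conf") and assuming a reachable cluster; the JSON keys used are the standard "ceph df --format=json" schema on recent Ceph releases:

import json
import subprocess

# Identical command line to the one logged via oslo_concurrency.processutils.
cmd = [
    "ceph", "df", "--format=json",
    "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)

# Cluster-wide totals are reported in bytes under the top-level "stats" key.
total = stats["stats"]["total_bytes"]
avail = stats["stats"]["total_avail_bytes"]
print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")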
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.842 349552 DEBUG nova.compute.provider_tree [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.869 349552 DEBUG nova.scheduler.client.report [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.910 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.914 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.951 349552 INFO nova.scheduler.client.report [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance ee0bd3a4-b224-4dad-948c-1362bf56fea1
Dec 05 02:02:34 compute-0 ceph-mon[192914]: pgmap v1596: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 12 op/s
Dec 05 02:02:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4002993508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.997 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.997 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.998 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.998 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.010 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
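The paired "Acquiring lock" / "acquired ... waited" / ""released" ... held" DEBUG lines around the terminate path are emitted by the inner() wrapper that oslo.concurrency installs around a decorated function (the lockutils.py:404/409/423 call sites named in the log). A minimal sketch of how such a critical section is declared, assuming oslo.concurrency is installed; the function name here is illustrative, not nova's actual code:

from oslo_concurrency import lockutils

# Entering and leaving this function produces the same three DEBUG lines
# seen above, with the waited/held timings measured by the wrapper.
@lockutils.synchronized("compute_resources")
def update_usage():
    ...  # runs while holding the named internal semaphore

update_usage()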
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.048 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:02:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501496586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.584 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.596 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.629 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
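The inventory dict logged twice above fixes what placement will schedule on this node: usable capacity for a resource class is effectively (total - reserved) * allocation_ratio, placement's overcommit rule. A short worked example over the exact figures from the log (rounding shown here is illustrative):

# Inventory as logged for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")

# -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2 -- which is why a node with
#    8 physical vcpus can carry more than 8 allocated VCPUs at ratio 4.0.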
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.689 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.689 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 177 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Dec 05 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 05 02:02:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2501496586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 05 02:02:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 05 02:02:36 compute-0 nova_compute[349548]: 2025-12-05 02:02:36.824 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:36 compute-0 ceph-mon[192914]: pgmap v1597: 321 pgs: 321 active+clean; 177 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Dec 05 02:02:36 compute-0 ceph-mon[192914]: osdmap e128: 3 total, 3 up, 3 in
Dec 05 02:02:37 compute-0 sudo[432562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:37 compute-0 sudo[432562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:37 compute-0 sudo[432562]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:37 compute-0 nova_compute[349548]: 2025-12-05 02:02:37.689 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:37 compute-0 sudo[432587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:02:37 compute-0 sudo[432587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:37 compute-0 sudo[432587]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Dec 05 02:02:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:37 compute-0 sudo[432612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:37 compute-0 sudo[432612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:37 compute-0 sudo[432612]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:38 compute-0 sudo[432637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:02:38 compute-0 sudo[432637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:38 compute-0 nova_compute[349548]: 2025-12-05 02:02:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:02:38 compute-0 nova_compute[349548]: 2025-12-05 02:02:38.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.320 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so this polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.321 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.336 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:02:38.338282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:02:38.342081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.371 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.372 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.372 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.398 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.399 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.400 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:02:38.402744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:02:38.406547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.477 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.477 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.478 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.522 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:02:38.524633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:02:38.527582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:02:38.529589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:02:38.531820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:02:38.534382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.553 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.571 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
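Both instances report power.state volume 1, which matches libvirt's virDomainState enum, where 1 is VIR_DOMAIN_RUNNING. An illustrative lookup (the exact mapping ceilometer applies may differ):

import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_NOSTATE: "nostate",          # 0
    libvirt.VIR_DOMAIN_RUNNING: "running",          # 1, the value logged above
    libvirt.VIR_DOMAIN_BLOCKED: "blocked",          # 2
    libvirt.VIR_DOMAIN_PAUSED: "paused",            # 3
    libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",        # 4
    libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",          # 5
    libvirt.VIR_DOMAIN_CRASHED: "crashed",          # 6
    libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",  # 7
}

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
state, reason = dom.state()  # returns (state, reason)
print(state, STATE_NAMES.get(state, "unknown"))
conn.close()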
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
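The write-latency volumes are cumulative nanoseconds spent in write I/O (7524740776 ns is roughly 7.5 s over the instance's lifetime, not a per-request latency). libvirt exposes this counter through blockStatsFlags(); a sketch, with the device name vda assumed:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("3611d2ae-da33-4e55-aec7-0bec88d3b4e0")
stats = dom.blockStatsFlags("vda")  # dict of typed counters
# "wr_total_times" is total nanoseconds spent on writes, the source of the
# disk.device.write.latency volumes above.
print(stats.get("wr_total_times"), "ns")
conn.close()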
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:02:38.572495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:02:38.576161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:02:38.578846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
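The network pollsters in this cycle all draw on a single libvirt call: interfaceStats() returns the 8-tuple (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop), which covers the incoming/outgoing bytes, packets, drops and errors meters alike. A sketch that resolves the instance's tap device from its domain XML (illustrative only):

import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
iface = ET.fromstring(dom.XMLDesc()).find("devices/interface/target").get("dev")
rx_b, rx_p, rx_e, rx_d, tx_b, tx_p, tx_e, tx_d = dom.interfaceStats(iface)
print(f"incoming packets: {rx_p}, drops: {rx_d}, errors: {rx_e}")
conn.close()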
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:02:38.588319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
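disk.device.allocation reports how much backing storage each virtual disk has actually claimed. libvirt's blockInfo() returns [capacity, allocation, physical] in bytes; the 1073741824 volumes above are exactly 1 GiB. A sketch with a hypothetical device name:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
capacity, allocation, physical = dom.blockInfo("vda")
print(f"allocation: {allocation} bytes")  # e.g. 1073741824 == 1 GiB
conn.close()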
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:02:38.589487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:02:38.592105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:02:38.593111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:02:38.594165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:02:38.595566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
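memory.usage is reported in MB, and the fractional volumes (48.87890625 MB is exactly 50052 KiB / 1024) point at a KiB-granular counter from the balloon driver. A sketch using memoryStats(); which fields a given ceilometer release prefers can vary, so treat this derivation as one plausible reading:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
stats = dom.memoryStats()  # dict of KiB counters
if "available" in stats and "unused" in stats:
    used_kib = stats["available"] - stats["unused"]
else:
    used_kib = stats["rss"]  # fall back to resident set size
print(used_kib / 1024.0, "MB")  # 50052 KiB -> 48.87890625 MB
conn.close()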
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:02:38.597538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:02:38.598820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:02:38.600180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
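The .delta meters are the change in a cumulative counter since the previous poll; the 84-byte delta here implies the previous network.incoming.bytes reading for b69a0e24-1bc4-46a5-92d7-367c1efd53df was 2220 (2304 - 84). The .rate variants further divide by the elapsed interval. In outline:

def delta(prev, cur):
    return max(cur - prev, 0)  # counters restart from zero on instance reboot

def rate(prev, cur, elapsed_s):
    return delta(prev, cur) / elapsed_s

print(delta(2220, 2304))  # 84, matching the logged .delta volume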
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 48060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:02:38.601365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 41650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
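The cpu meter is cumulative guest CPU time in nanoseconds: 48060000000 ns is about 48.06 s since the instance started. Utilisation is conventionally derived from two successive readings; the arithmetic looks like this (the second reading and interval below are made up for illustration):

NS_PER_S = 1e9

def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus):
    """Percent CPU use over the polling interval, across all vCPUs."""
    return 100.0 * (cur_ns - prev_ns) / (interval_s * vcpus * NS_PER_S)

print(cpu_util_percent(48060000000, 48900000000, 300, 1))  # 0.28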
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:02:38.602412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:02:38.603583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
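Taken together, these messages trace one complete polling cycle per meter: discovery, a coordination check, a heartbeat, per-resource samples, then a "Finished processing" record. A heavily simplified, hypothetical model of that loop (the helper names are made up; the real AgentManager is far more involved):

def run_cycle(pollsters, discover, publish, heartbeat):
    resources = discover("local_instances")
    for name, get_samples in pollsters:
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            continue
        heartbeat(name)
        for sample in get_samples(resources):
            publish(name, sample)
        print(f"Finished processing pollster [{name}].")

# Toy usage with stand-ins for the real discovery/publish machinery:
run_cycle(
    pollsters=[("cpu", lambda res: [{"resource": r, "volume": 0} for r in res])],
    discover=lambda method: ["b69a0e24-1bc4-46a5-92d7-367c1efd53df"],
    publish=lambda name, s: print(name, s),
    heartbeat=lambda name: print(f"{name} heartbeat"),
)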
Dec 05 02:02:38 compute-0 sudo[432637]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e108be2e-d007-486b-9489-24368b5bceb2 does not exist
Dec 05 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6637557b-3adf-4b5f-9f11-540fd6899dd8 does not exist
Dec 05 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6b13d99-e44f-45e4-9b62-a361123083b5 does not exist
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
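The mon_command traffic above shows the cephadm mgr module (mgr.compute-0.afshmv) dispatching structured commands to the monitor: "config generate-minimal-conf", "auth get", and "osd tree" with a state filter. Any librados client can issue the same commands as JSON; below is a minimal sketch using the Python rados bindings. The conffile path and client name are assumptions about the deployment; the three commands are taken verbatim from the audit lines.

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "config generate-minimal-conf"},
            {"prefix": "auth get", "entity": "client.bootstrap-osd"},
            {"prefix": "osd tree", "states": ["destroyed"], "format": "json"},
        ):
            # mon_command() takes the command as a JSON string plus an input buffer
            # and returns (retcode, output buffer, status string).
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, outs or outbuf[:80])
    finally:
        cluster.shutdown()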
Dec 05 02:02:38 compute-0 sudo[432693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:38 compute-0 sudo[432693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:38 compute-0 sudo[432693]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:39 compute-0 ceph-mon[192914]: pgmap v1599: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
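The pgmap summaries repeat in a fixed shape throughout this journal: map version, PG count and states, then data/used/avail and client I/O rates. A small parser written against exactly that shape; field layout can differ between Ceph releases, so treat this as a sketch for this log only.

    import re

    LINE = ("pgmap v1599: 321 pgs: 321 active+clean; 139 MiB data, "
            "296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s")

    # Matches the pgmap summaries in this journal; not a stable cross-version parser.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    m = PGMAP_RE.search(LINE)
    if m:
        print(m.groupdict())
        # {'ver': '1599', 'pgs': '321', 'states': '321 active+clean', ...}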
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:02:39 compute-0 sudo[432718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:02:39 compute-0 sudo[432718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:39 compute-0 sudo[432718]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:39 compute-0 sudo[432743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:39 compute-0 sudo[432743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:39 compute-0 sudo[432743]: pam_unix(sudo:session): session closed for user root
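The alternating sudo sessions above (/bin/true, /bin/which python3, /bin/true) are the host probes cephadm runs as ceph-admin before every remote operation: confirm passwordless escalation works, then locate a python3 interpreter to run the copied cephadm script with. A sketch reproducing that probe sequence locally (the real orchestrator runs it over SSH):

    import subprocess

    def probe_host():
        """Reproduce the probe sequence visible in the sudo log lines above."""
        # 1. Can we escalate at all? (the `sudo /bin/true` sessions)
        subprocess.run(["sudo", "/bin/true"], check=True)
        # 2. Where is python3 for the cephadm script? (the `sudo /bin/which python3` sessions)
        out = subprocess.run(["sudo", "/bin/which", "python3"],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    print("python3 at:", probe_host())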
Dec 05 02:02:39 compute-0 sudo[432768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:02:39 compute-0 sudo[432768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.743864951 +0000 UTC m=+0.071658329 container create 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:02:39 compute-0 systemd[1]: Started libpod-conmon-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope.
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.720405934 +0000 UTC m=+0.048199352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Dec 05 02:02:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.871662933 +0000 UTC m=+0.199456331 container init 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.888746662 +0000 UTC m=+0.216540080 container start 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:02:39 compute-0 friendly_dijkstra[432846]: 167 167
Dec 05 02:02:39 compute-0 systemd[1]: libpod-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope: Deactivated successfully.
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.89617314 +0000 UTC m=+0.223966558 container attach 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.896928871 +0000 UTC m=+0.224722249 container died 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-864fea35063f1754387ab12ac1e8cb2b3115f7507bbf948bdbd9033a32811fc7-merged.mount: Deactivated successfully.
Dec 05 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.952198931 +0000 UTC m=+0.279992319 container remove 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:02:39 compute-0 systemd[1]: libpod-conmon-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope: Deactivated successfully.
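The create, init, start, attach, died, remove sequence above is one complete lifecycle of a short-lived cephadm helper container; friendly_dijkstra prints only "167 167" (as elated_bell does later), which looks like cephadm's uid/gid probe of the Ceph image, 167 being the ceph user and group in these images. The same lifecycle can be watched live with podman's event stream; a sketch, with the image filter as an assumption to adjust:

    import subprocess

    # Stream the lifecycle events journald captured above (create, init, start,
    # attach, died, remove); this follows new events until interrupted.
    subprocess.run([
        "podman", "events",
        "--filter", "type=container",
        "--format", "{{.Time}} {{.Status}} {{.Name}}",
    ])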
Dec 05 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.167743142 +0000 UTC m=+0.062604326 container create 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.145797697 +0000 UTC m=+0.040658911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:40 compute-0 systemd[1]: Started libpod-conmon-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope.
Dec 05 02:02:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
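The xfs warnings above fire because the filesystems bind-mounted into the overlay carry only 32-bit timestamps: 0x7fffffff is the largest signed 32-bit time_t, which runs out in January 2038 (xfs with the bigtime feature extends this). The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff = largest signed 32-bit time_t, the limit the kernel warns about.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00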
Dec 05 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.309277359 +0000 UTC m=+0.204138613 container init 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.33749138 +0000 UTC m=+0.232352564 container start 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.342385837 +0000 UTC m=+0.237247151 container attach 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:02:41 compute-0 ceph-mon[192914]: pgmap v1600: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Dec 05 02:02:41 compute-0 great_villani[432885]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:02:41 compute-0 great_villani[432885]: --> relative data size: 1.0
Dec 05 02:02:41 compute-0 great_villani[432885]: --> All data devices are unavailable
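"All data devices are unavailable" from great_villani is the expected outcome here rather than a failure: the three LVs handed to lvm batch already carry OSDs (their ceph.osd_id tags read 0, 1 and 2 in the lvm list output further down), so ceph-volume rejects every candidate and creates nothing. A hedged sketch of that filter using the tag layout visible in this journal; real ceph-volume applies more rejection criteria than this:

    # LV records reduced to the fields that matter here; paths and
    # ceph.osd_id tags are taken from the lvm list output below.
    lvs = [
        {"lv_path": "/dev/ceph_vg0/ceph_lv0", "tags": {"ceph.osd_id": "0"}},
        {"lv_path": "/dev/ceph_vg1/ceph_lv1", "tags": {"ceph.osd_id": "1"}},
        {"lv_path": "/dev/ceph_vg2/ceph_lv2", "tags": {"ceph.osd_id": "2"}},
    ]

    # An LV already tagged with an osd_id is in use and therefore unavailable.
    available = [lv for lv in lvs if "ceph.osd_id" not in lv["tags"]]
    if not available:
        print("--> All data devices are unavailable")  # matches great_villani above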
Dec 05 02:02:41 compute-0 systemd[1]: libpod-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Deactivated successfully.
Dec 05 02:02:41 compute-0 systemd[1]: libpod-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Consumed 1.191s CPU time.
Dec 05 02:02:41 compute-0 podman[432869]: 2025-12-05 02:02:41.606559822 +0000 UTC m=+1.501421036 container died 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876-merged.mount: Deactivated successfully.
Dec 05 02:02:41 compute-0 podman[432869]: 2025-12-05 02:02:41.702697276 +0000 UTC m=+1.597558490 container remove 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:02:41 compute-0 systemd[1]: libpod-conmon-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Deactivated successfully.
Dec 05 02:02:41 compute-0 sudo[432768]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:41 compute-0 nova_compute[349548]: 2025-12-05 02:02:41.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec 05 02:02:41 compute-0 sudo[432928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:41 compute-0 sudo[432928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:41 compute-0 sudo[432928]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:42 compute-0 sudo[432953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:02:42 compute-0 sudo[432953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:42 compute-0 sudo[432953]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:42 compute-0 sudo[432978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:42 compute-0 sudo[432978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:42 compute-0 sudo[432978]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:42 compute-0 sudo[433003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:02:42 compute-0 sudo[433003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.819014125 +0000 UTC m=+0.070700383 container create 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 05 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 05 02:02:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.791304808 +0000 UTC m=+0.042991096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:42 compute-0 systemd[1]: Started libpod-conmon-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope.
Dec 05 02:02:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.971424277 +0000 UTC m=+0.223110605 container init 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.983265809 +0000 UTC m=+0.234952067 container start 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.988245189 +0000 UTC m=+0.239931447 container attach 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:02:42 compute-0 peaceful_elion[433082]: 167 167
Dec 05 02:02:42 compute-0 systemd[1]: libpod-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope: Deactivated successfully.
Dec 05 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.999672989 +0000 UTC m=+0.251359247 container died 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:02:43 compute-0 ceph-mon[192914]: pgmap v1601: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec 05 02:02:43 compute-0 ceph-mon[192914]: osdmap e129: 3 total, 3 up, 3 in
Dec 05 02:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a5d4a0f65caf60d558c24b03fa7d239a93108284567850b63d62aaebb817ec-merged.mount: Deactivated successfully.
Dec 05 02:02:43 compute-0 podman[433067]: 2025-12-05 02:02:43.069111205 +0000 UTC m=+0.320797483 container remove 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:02:43 compute-0 systemd[1]: libpod-conmon-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope: Deactivated successfully.
Dec 05 02:02:43 compute-0 nova_compute[349548]: 2025-12-05 02:02:43.146 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.290153941 +0000 UTC m=+0.061167726 container create b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:02:43 compute-0 systemd[1]: Started libpod-conmon-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope.
Dec 05 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.273176825 +0000 UTC m=+0.044190650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.489787567 +0000 UTC m=+0.260801462 container init b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.506370501 +0000 UTC m=+0.277384336 container start b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.513211453 +0000 UTC m=+0.284225338 container attach b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:02:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.4 KiB/s wr, 62 op/s
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]: {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     "0": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "devices": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "/dev/loop3"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             ],
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_name": "ceph_lv0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_size": "21470642176",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "name": "ceph_lv0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "tags": {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_name": "ceph",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.crush_device_class": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.encrypted": "0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_id": "0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.vdo": "0"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             },
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "vg_name": "ceph_vg0"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         }
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     ],
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     "1": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "devices": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "/dev/loop4"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             ],
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_name": "ceph_lv1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_size": "21470642176",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "name": "ceph_lv1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "tags": {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_name": "ceph",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.crush_device_class": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.encrypted": "0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_id": "1",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.vdo": "0"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             },
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "vg_name": "ceph_vg1"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         }
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     ],
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     "2": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "devices": [
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "/dev/loop5"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             ],
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_name": "ceph_lv2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_size": "21470642176",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "name": "ceph_lv2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "tags": {
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.cluster_name": "ceph",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.crush_device_class": "",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.encrypted": "0",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osd_id": "2",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:                 "ceph.vdo": "0"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             },
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "type": "block",
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:             "vg_name": "ceph_vg2"
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:         }
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]:     ]
Dec 05 02:02:44 compute-0 sharp_maxwell[433121]: }
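The JSON block above is the output of ceph-volume lvm list --format json (the sharp_maxwell container): a map from OSD id to its LV records. A short parser written against exactly that structure, summarizing each OSD's LV path, backing device and OSD fsid; the sample is trimmed to one OSD from the log:

    import json

    SAMPLE = """
    {
      "0": [
        {"devices": ["/dev/loop3"],
         "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186"}}
      ]
    }
    """

    def summarize_lvm_list(raw):
        # Structure taken from the sharp_maxwell output above: {osd_id: [lv, ...]}.
        for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                print("osd.%s: %s on %s (osd_fsid %s)" % (
                    osd_id, lv["lv_path"], ",".join(lv["devices"]),
                    lv["tags"]["ceph.osd_fsid"]))

    summarize_lvm_list(SAMPLE)
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 8c4de221-...)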
Dec 05 02:02:44 compute-0 systemd[1]: libpod-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope: Deactivated successfully.
Dec 05 02:02:44 compute-0 podman[433131]: 2025-12-05 02:02:44.420340439 +0000 UTC m=+0.042645306 container died b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9-merged.mount: Deactivated successfully.
Dec 05 02:02:44 compute-0 podman[433130]: 2025-12-05 02:02:44.478617463 +0000 UTC m=+0.084516050 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 05 02:02:44 compute-0 podman[433131]: 2025-12-05 02:02:44.499017515 +0000 UTC m=+0.121322302 container remove b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:02:44 compute-0 systemd[1]: libpod-conmon-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope: Deactivated successfully.
Dec 05 02:02:44 compute-0 podman[433137]: 2025-12-05 02:02:44.519772246 +0000 UTC m=+0.115506418 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:02:44 compute-0 sudo[433003]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:44 compute-0 sudo[433179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:44 compute-0 sudo[433179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:44 compute-0 sudo[433179]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:44 compute-0 sudo[433204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:02:44 compute-0 sudo[433204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:44 compute-0 sudo[433204]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:44 compute-0 sudo[433229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:44 compute-0 sudo[433229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:44 compute-0 sudo[433229]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:44 compute-0 sudo[433254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:02:44 compute-0 sudo[433254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:45 compute-0 ceph-mon[192914]: pgmap v1603: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.4 KiB/s wr, 62 op/s
Dec 05 02:02:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:02:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:02:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:02:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.436131032 +0000 UTC m=+0.083661836 container create a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.400504933 +0000 UTC m=+0.048035807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:45 compute-0 systemd[1]: Started libpod-conmon-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope.
Dec 05 02:02:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.574396067 +0000 UTC m=+0.221926941 container init a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.59624507 +0000 UTC m=+0.243775884 container start a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.605360025 +0000 UTC m=+0.252890849 container attach a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:02:45 compute-0 elated_bell[433345]: 167 167
Dec 05 02:02:45 compute-0 podman[433334]: 2025-12-05 02:02:45.606023444 +0000 UTC m=+0.097817633 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:02:45 compute-0 systemd[1]: libpod-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope: Deactivated successfully.
Dec 05 02:02:45 compute-0 conmon[433345]: conmon a950c062aa91f17c2d02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope/container/memory.events
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.61053587 +0000 UTC m=+0.258066664 container died a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:02:45 compute-0 podman[433331]: 2025-12-05 02:02:45.627701331 +0000 UTC m=+0.120508859 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
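[annotation] Both health_status events above are podman's healthcheck timer executing the 'test' command carried in config_data ('/openstack/healthcheck ipmi' and '/openstack/healthcheck compute'). A minimal sketch for triggering the same check on demand, assuming the container names from the log:

    import subprocess

    def healthy(name: str) -> bool:
        # `podman healthcheck run` executes the healthcheck test configured on
        # the container and exits 0 when it passes.
        return subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True).returncode == 0

    for ctr in ("ceilometer_agent_ipmi", "ceilometer_agent_compute"):
        print(ctr, healthy(ctr))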
Dec 05 02:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd6cf5406243618e0365e10099b806f0ffc030eb5c99f3e4f1cb226069f1a29-merged.mount: Deactivated successfully.
Dec 05 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.665726727 +0000 UTC m=+0.313257501 container remove a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:02:45 compute-0 systemd[1]: libpod-conmon-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope: Deactivated successfully.
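[annotation] The lines above trace the short-lived cephadm helper container elated_bell through its whole libpod lifecycle: image pull, create, init, start, attach, died, remove, plus the conmon scope teardown. A minimal sketch for watching that event stream live, assuming a podman 4.x that supports `podman events --format json`:

    import json
    import subprocess

    # Stream libpod events as one JSON object per line; interrupt to stop.
    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Status is the event type (create/init/start/attach/died/remove ...).
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])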
Dec 05 02:02:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Dec 05 02:02:45 compute-0 podman[433394]: 2025-12-05 02:02:45.936979079 +0000 UTC m=+0.086362262 container create 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:02:45 compute-0 podman[433394]: 2025-12-05 02:02:45.905054904 +0000 UTC m=+0.054438147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:02:46 compute-0 systemd[1]: Started libpod-conmon-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope.
Dec 05 02:02:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
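[annotation] The xfs messages above are the kernel noting that these inodes carry 32-bit timestamps, which run out at 0x7fffffff seconds after the epoch. The cutoff date is plain arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the limit the kernel prints above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00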
Dec 05 02:02:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:02:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
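[annotation] The two dispatch lines show client.openstack submitting mon commands as JSON payloads. A sketch sending the same payloads through librados (python3-rados); it assumes a readable /etc/ceph/ceph.conf and a keyring for client.openstack:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        # Same JSON the monitor logs as "dispatch" above.
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])
    finally:
        cluster.shutdown()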
Dec 05 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.096785378 +0000 UTC m=+0.246168601 container init 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.111665505 +0000 UTC m=+0.261048688 container start 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.122148289 +0000 UTC m=+0.271531472 container attach 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.829 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.997 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900151.9948099, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.998 349552 INFO nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Stopped (Lifecycle Event)
Dec 05 02:02:47 compute-0 nova_compute[349548]: 2025-12-05 02:02:47.025 349552 DEBUG nova.compute.manager [None req-d6d0c888-7d93-431b-9be0-c4d8510faa09 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
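[annotation] The Stopped lifecycle event and the follow-up _get_power_state check both reduce to a libvirt domain-state lookup for the instance UUID in the log. A hedged sketch with the python3-libvirt binding:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("ee0bd3a4-b224-4dad-948c-1362bf56fea1")
        state, reason = dom.state()
        # VIR_DOMAIN_SHUTOFF corresponds to the "VM Stopped" event nova emitted.
        print(state == libvirt.VIR_DOMAIN_SHUTOFF)
    finally:
        conn.close()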
Dec 05 02:02:47 compute-0 ceph-mon[192914]: pgmap v1604: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]: {
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_id": 0,
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "type": "bluestore"
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     },
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_id": 1,
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "type": "bluestore"
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     },
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_id": 2,
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:         "type": "bluestore"
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]:     }
Dec 05 02:02:47 compute-0 vigorous_murdock[433410]: }
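[annotation] vigorous_murdock printed an OSD inventory keyed by osd_uuid; the shape matches `ceph-volume raw list` output. A sketch reducing it to one line per OSD (the literal below is just the first entry from the listing above):

    import json

    # First entry from the JSON above; the real output has one entry per OSD.
    raw = '''
    {
        "8c4de221-4fda-4bb1-b794-fc4329742186": {
            "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
            "type": "bluestore"
        }
    }
    '''
    for osd_uuid, info in sorted(json.loads(raw).items(),
                                 key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']} {info['device']} ({info['type']})")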
Dec 05 02:02:47 compute-0 systemd[1]: libpod-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Deactivated successfully.
Dec 05 02:02:47 compute-0 systemd[1]: libpod-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Consumed 1.236s CPU time.
Dec 05 02:02:47 compute-0 podman[433443]: 2025-12-05 02:02:47.443144446 +0000 UTC m=+0.064581371 container died 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef-merged.mount: Deactivated successfully.
Dec 05 02:02:47 compute-0 podman[433443]: 2025-12-05 02:02:47.533511819 +0000 UTC m=+0.154948704 container remove 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:02:47 compute-0 systemd[1]: libpod-conmon-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Deactivated successfully.
Dec 05 02:02:47 compute-0 sudo[433254]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:02:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:02:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
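[annotation] The handle_command lines above are cephadm persisting this host's device inventory in the monitor's config-key store. The stored blobs can be read back using the key names the monitor logged:

    import subprocess

    # Key names copied from the mon_command lines above.
    for key in ("mgr/cephadm/host.compute-0.devices.0", "mgr/cephadm/host.compute-0"):
        blob = subprocess.check_output(["ceph", "config-key", "get", key], text=True)
        print(key, "->", len(blob), "bytes")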
Dec 05 02:02:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bb1461a4-9bb6-42e7-bb6f-21ef232d4c34 does not exist
Dec 05 02:02:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 844d226c-bd4b-4d3c-be63-f703d3e81c2a does not exist
Dec 05 02:02:47 compute-0 sudo[433459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:02:47 compute-0 sudo[433459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:47 compute-0 sudo[433459]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec 05 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:47 compute-0 sudo[433484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:02:47 compute-0 sudo[433484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:02:47 compute-0 sudo[433484]: pam_unix(sudo:session): session closed for user root
Dec 05 02:02:48 compute-0 nova_compute[349548]: 2025-12-05 02:02:48.152 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:48 compute-0 sshd-session[431385]: Received disconnect from 38.102.83.179 port 39490:11: disconnected by user
Dec 05 02:02:48 compute-0 sshd-session[431385]: Disconnected from user zuul 38.102.83.179 port 39490
Dec 05 02:02:48 compute-0 sshd-session[431376]: pam_unix(sshd:session): session closed for user zuul
Dec 05 02:02:48 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec 05 02:02:48 compute-0 systemd[1]: session-62.scope: Consumed 1.330s CPU time.
Dec 05 02:02:48 compute-0 systemd-logind[792]: Session 62 logged out. Waiting for processes to exit.
Dec 05 02:02:48 compute-0 systemd-logind[792]: Removed session 62.
Dec 05 02:02:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:02:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:02:48 compute-0 ceph-mon[192914]: pgmap v1605: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec 05 02:02:49 compute-0 podman[433510]: 2025-12-05 02:02:49.733208745 +0000 UTC m=+0.140736546 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 02:02:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec 05 02:02:50 compute-0 ceph-mon[192914]: pgmap v1606: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec 05 02:02:51 compute-0 nova_compute[349548]: 2025-12-05 02:02:51.833 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 05 02:02:52 compute-0 ceph-mon[192914]: pgmap v1607: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 05 02:02:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
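[annotation] The osdmap lines summarize epoch e130 as "3 total, 3 up, 3 in". The same counters are available as JSON (field names as in recent Ceph releases):

    import json
    import subprocess

    s = json.loads(subprocess.check_output(
        ["ceph", "osd", "stat", "--format", "json"], text=True))
    print(s["epoch"], s["num_osds"], s["num_up_osds"], s["num_in_osds"])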
Dec 05 02:02:53 compute-0 nova_compute[349548]: 2025-12-05 02:02:53.154 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 05 02:02:53 compute-0 ceph-mon[192914]: osdmap e130: 3 total, 3 up, 3 in
Dec 05 02:02:54 compute-0 ceph-mon[192914]: pgmap v1609: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 05 02:02:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 05 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:02:56 compute-0 nova_compute[349548]: 2025-12-05 02:02:56.835 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:57 compute-0 ceph-mon[192914]: pgmap v1610: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec 05 02:02:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:02:58 compute-0 nova_compute[349548]: 2025-12-05 02:02:58.156 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:02:58 compute-0 podman[433530]: 2025-12-05 02:02:58.70220351 +0000 UTC m=+0.099061658 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:02:58 compute-0 podman[433529]: 2025-12-05 02:02:58.713542618 +0000 UTC m=+0.115767656 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 02:02:58 compute-0 podman[433532]: 2025-12-05 02:02:58.726160321 +0000 UTC m=+0.107396071 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec 05 02:02:58 compute-0 podman[433531]: 2025-12-05 02:02:58.767119749 +0000 UTC m=+0.161528578 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 02:02:59 compute-0 ceph-mon[192914]: pgmap v1611: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:02:59 compute-0 podman[158197]: time="2025-12-05T02:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:02:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:02:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
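[annotation] The two access-log lines are the podman system service answering libpod REST calls (the Go-http-client user agent is consistent with the podman exporter) over its unix socket. A stdlib-only sketch of the same containers/json request, assuming the default socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket; the host name is unused."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")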
Dec 05 02:02:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:03:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 05 02:03:01 compute-0 ceph-mon[192914]: pgmap v1612: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec 05 02:03:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 05 02:03:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 05 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
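[annotation] All four exporter errors come down to missing appctl control sockets: this node runs ovs-vswitchd and ovn-controller but no ovn-northd, and the dpif-netdev queries need a userspace (PMD) datapath that does not exist here. A sketch of the socket probe, assuming the default rundirs:

    import glob

    # One <daemon>.<pid>.ctl socket appears per running daemon.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control socket files found")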
Dec 05 02:03:01 compute-0 nova_compute[349548]: 2025-12-05 02:03:01.837 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 869 KiB/s wr, 11 op/s
Dec 05 02:03:02 compute-0 ceph-mon[192914]: osdmap e131: 3 total, 3 up, 3 in
Dec 05 02:03:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:03 compute-0 nova_compute[349548]: 2025-12-05 02:03:03.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:03 compute-0 ceph-mon[192914]: pgmap v1614: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 869 KiB/s wr, 11 op/s
Dec 05 02:03:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 774 KiB/s wr, 34 op/s
Dec 05 02:03:04 compute-0 ceph-mon[192914]: pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 774 KiB/s wr, 34 op/s
Dec 05 02:03:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 775 KiB/s wr, 35 op/s
Dec 05 02:03:06 compute-0 nova_compute[349548]: 2025-12-05 02:03:06.840 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:07 compute-0 ceph-mon[192914]: pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 775 KiB/s wr, 35 op/s
Dec 05 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 05 02:03:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 05 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 05 02:03:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 05 02:03:08 compute-0 nova_compute[349548]: 2025-12-05 02:03:08.162 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:09 compute-0 ceph-mon[192914]: pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 05 02:03:09 compute-0 ceph-mon[192914]: osdmap e132: 3 total, 3 up, 3 in
Dec 05 02:03:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 05 02:03:10 compute-0 ceph-mon[192914]: pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 05 02:03:10 compute-0 sshd-session[433614]: Accepted publickey for zuul from 38.102.83.179 port 45734 ssh2: RSA SHA256:TVb6vFiLOEHtrkkdoyIozA4b0isBLmSla+NPtR7bFX8
Dec 05 02:03:10 compute-0 systemd-logind[792]: New session 63 of user zuul.
Dec 05 02:03:10 compute-0 systemd[1]: Started Session 63 of User zuul.
Dec 05 02:03:11 compute-0 sshd-session[433614]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:03:11 compute-0 nova_compute[349548]: 2025-12-05 02:03:11.843 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 05 02:03:11 compute-0 sudo[433791]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcjqkztyregptldkjarpuadgjwnaxxc ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764900191.1490545-61379-155827470029091/AnsiballZ_command.py'
Dec 05 02:03:11 compute-0 sudo[433791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:03:12 compute-0 python3[433793]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
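[annotation] The ansible task above shells out to podman ps piped through grep; the same check without _uses_shell, as a sketch:

    import subprocess

    out = subprocess.check_output(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"], text=True)
    print([line for line in out.splitlines() if "node_exporter" in line])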
Dec 05 02:03:12 compute-0 sudo[433791]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:12 compute-0 ceph-mon[192914]: pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 05 02:03:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:13 compute-0 nova_compute[349548]: 2025-12-05 02:03:13.164 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Dec 05 02:03:14 compute-0 podman[433833]: 2025-12-05 02:03:14.763484328 +0000 UTC m=+0.158051451 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:03:14 compute-0 podman[433834]: 2025-12-05 02:03:14.79103737 +0000 UTC m=+0.181623452 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:03:15 compute-0 ceph-mon[192914]: pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Dec 05 02:03:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:03:16
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups']
Dec 05 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
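[annotation] This balancer pass ran in upmap mode with a 5% misplaced ceiling over the 11 pools listed and prepared no remapping changes. The module's state can be queried directly, e.g.:

    import json
    import subprocess

    st = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True))
    print(st["active"], st["mode"], st.get("optimize_result"))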
Dec 05 02:03:16 compute-0 podman[433875]: 2025-12-05 02:03:16.712872348 +0000 UTC m=+0.117213146 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 02:03:16 compute-0 podman[433874]: 2025-12-05 02:03:16.731727857 +0000 UTC m=+0.142312650 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:03:16 compute-0 nova_compute[349548]: 2025-12-05 02:03:16.846 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:17 compute-0 ceph-mon[192914]: pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
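[annotation] Both rbd_support handlers above (TrashPurgeScheduleHandler, MirrorSnapshotScheduleHandler) reload per-pool schedules for vms, volumes, backups and images. The configured schedules can be listed per pool; a sketch for the trash-purge side:

    import subprocess

    # Pools taken from the load_schedules lines above.
    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool, "--recursive"],
            capture_output=True, text=True)
        print(pool, out.stdout.strip() or "(no schedules)")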
Dec 05 02:03:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:18 compute-0 nova_compute[349548]: 2025-12-05 02:03:18.168 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:19 compute-0 ceph-mon[192914]: pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:19 compute-0 sudo[434083]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntccmkdqjexoaiizikfdlpoyeludawp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764900199.1896935-61541-220943772220799/AnsiballZ_command.py'
Dec 05 02:03:19 compute-0 sudo[434083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:03:20 compute-0 python3[434086]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 02:03:20 compute-0 podman[434085]: 2025-12-05 02:03:20.065762277 +0000 UTC m=+0.141901108 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0)
Dec 05 02:03:20 compute-0 sudo[434083]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:21 compute-0 ceph-mon[192914]: pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:21 compute-0 nova_compute[349548]: 2025-12-05 02:03:21.849 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:23 compute-0 ceph-mon[192914]: pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:23 compute-0 nova_compute[349548]: 2025-12-05 02:03:23.171 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:25 compute-0 ceph-mon[192914]: pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:26 compute-0 nova_compute[349548]: 2025-12-05 02:03:26.854 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:03:27 compute-0 ceph-mon[192914]: pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.174 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:29 compute-0 ceph-mon[192914]: pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:29 compute-0 podman[434268]: 2025-12-05 02:03:29.701202573 +0000 UTC m=+0.112475864 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:03:29 compute-0 podman[434269]: 2025-12-05 02:03:29.713690623 +0000 UTC m=+0.115421007 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:03:29 compute-0 podman[434270]: 2025-12-05 02:03:29.715696379 +0000 UTC m=+0.114186602 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 02:03:29 compute-0 podman[434271]: 2025-12-05 02:03:29.734362982 +0000 UTC m=+0.133951786 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter)
Dec 05 02:03:29 compute-0 sudo[434400]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lopunycjtlyiusvrpvlfkxrqnkppvkzv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764900209.0691879-61696-66850669646950/AnsiballZ_command.py'
Dec 05 02:03:29 compute-0 podman[158197]: time="2025-12-05T02:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:03:29 compute-0 sudo[434400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:03:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:03:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec 05 02:03:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:29 compute-0 python3[434402]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 02:03:30 compute-0 sudo[434400]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:03:31 compute-0 ceph-mon[192914]: pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.383 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.383 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.856 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.856 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.881 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.882 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:03:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.102 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.103 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.179 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:33 compute-0 ceph-mon[192914]: pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:03:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619876228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.596 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.757 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.770 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:03:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/619876228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.353 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.354 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.355 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.355 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.435 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.435 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.436 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.436 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.453 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.471 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.472 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.493 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.542 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.611 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:03:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:03:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000060851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.107 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.123 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:03:35 compute-0 ceph-mon[192914]: pgmap v1631: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1000060851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.361 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:03:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.364 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.365 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.859 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:37 compute-0 nova_compute[349548]: 2025-12-05 02:03:37.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:37 compute-0 ceph-mon[192914]: pgmap v1632: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:38 compute-0 nova_compute[349548]: 2025-12-05 02:03:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:03:38 compute-0 nova_compute[349548]: 2025-12-05 02:03:38.186 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:39 compute-0 ceph-mon[192914]: pgmap v1633: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:41 compute-0 ceph-mon[192914]: pgmap v1634: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:41 compute-0 nova_compute[349548]: 2025-12-05 02:03:41.863 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:43 compute-0 nova_compute[349548]: 2025-12-05 02:03:43.189 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:43 compute-0 ceph-mon[192914]: pgmap v1635: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:03:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:03:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:03:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:03:45 compute-0 ceph-mon[192914]: pgmap v1636: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:03:45 compute-0 sudo[434686]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaykpvnijfutcpttptkbrhahpxadivxp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764900224.8251386-61912-145626047864539/AnsiballZ_command.py'
Dec 05 02:03:45 compute-0 sudo[434686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:03:45 compute-0 podman[434633]: 2025-12-05 02:03:45.61011592 +0000 UTC m=+0.109326726 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:03:45 compute-0 podman[434632]: 2025-12-05 02:03:45.634950946 +0000 UTC m=+0.146367094 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 02:03:45 compute-0 python3[434700]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 05 02:03:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:45 compute-0 sudo[434686]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:03:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:03:46 compute-0 nova_compute[349548]: 2025-12-05 02:03:46.866 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:47 compute-0 ceph-mon[192914]: pgmap v1637: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:47 compute-0 podman[434739]: 2025-12-05 02:03:47.726191402 +0000 UTC m=+0.114223942 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 02:03:47 compute-0 podman[434738]: 2025-12-05 02:03:47.72718475 +0000 UTC m=+0.124886371 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 02:03:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:48 compute-0 sudo[434777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:48 compute-0 sudo[434777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:48 compute-0 sudo[434777]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:48 compute-0 nova_compute[349548]: 2025-12-05 02:03:48.192 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:48 compute-0 sudo[434802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:03:48 compute-0 sudo[434802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:48 compute-0 sudo[434802]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:48 compute-0 ceph-mon[192914]: pgmap v1638: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:48 compute-0 sudo[434827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:48 compute-0 sudo[434827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:48 compute-0 sudo[434827]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:48 compute-0 sudo[434852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:03:48 compute-0 sudo[434852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:49 compute-0 sudo[434852]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3c6d8abe-70c2-42d4-a107-e393386b4789 does not exist
Dec 05 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b20c11bd-4c41-46d5-8d5d-88d98907dfa6 does not exist
Dec 05 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aa147508-a70b-4859-80ed-9d489b05e01b does not exist
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:03:49 compute-0 sudo[434907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:49 compute-0 sudo[434907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:49 compute-0 sudo[434907]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:03:49 compute-0 sudo[434932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:03:49 compute-0 sudo[434932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:49 compute-0 sudo[434932]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:49 compute-0 sudo[434957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:49 compute-0 sudo[434957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:49 compute-0 sudo[434957]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:49 compute-0 sudo[434983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:03:49 compute-0 sudo[434983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.288585514 +0000 UTC m=+0.095385655 container create 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.244288342 +0000 UTC m=+0.051088503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:50 compute-0 systemd[1]: Started libpod-conmon-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope.
Dec 05 02:03:50 compute-0 ceph-mon[192914]: pgmap v1639: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.416066467 +0000 UTC m=+0.222866668 container init 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.430116831 +0000 UTC m=+0.236916942 container start 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.43541545 +0000 UTC m=+0.242215591 container attach 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:03:50 compute-0 nifty_bassi[435061]: 167 167
Dec 05 02:03:50 compute-0 systemd[1]: libpod-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope: Deactivated successfully.
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.441025337 +0000 UTC m=+0.247825478 container died 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b54d0539e13e3171f31aab36886e24d71df4a8c1f8be6f57f492211058eb111-merged.mount: Deactivated successfully.
Dec 05 02:03:50 compute-0 podman[435059]: 2025-12-05 02:03:50.509065044 +0000 UTC m=+0.152965479 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, release=1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.515230667 +0000 UTC m=+0.322030778 container remove 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:03:50 compute-0 systemd[1]: libpod-conmon-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope: Deactivated successfully.
Dec 05 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.800599616 +0000 UTC m=+0.087462673 container create 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.766992524 +0000 UTC m=+0.053855661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:50 compute-0 systemd[1]: Started libpod-conmon-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope.
Dec 05 02:03:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.938976984 +0000 UTC m=+0.225840101 container init 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.973197203 +0000 UTC m=+0.260060280 container start 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.981060134 +0000 UTC m=+0.267923261 container attach 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:03:51 compute-0 nova_compute[349548]: 2025-12-05 02:03:51.870 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:52 compute-0 unruffled_keldysh[435120]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:03:52 compute-0 unruffled_keldysh[435120]: --> relative data size: 1.0
Dec 05 02:03:52 compute-0 unruffled_keldysh[435120]: --> All data devices are unavailable
Dec 05 02:03:52 compute-0 systemd[1]: libpod-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Deactivated successfully.
Dec 05 02:03:52 compute-0 systemd[1]: libpod-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Consumed 1.132s CPU time.
Dec 05 02:03:52 compute-0 podman[435104]: 2025-12-05 02:03:52.182321384 +0000 UTC m=+1.469184491 container died 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9-merged.mount: Deactivated successfully.
Dec 05 02:03:52 compute-0 podman[435104]: 2025-12-05 02:03:52.61636278 +0000 UTC m=+1.903225867 container remove 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:52 compute-0 systemd[1]: libpod-conmon-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Deactivated successfully.
Dec 05 02:03:52 compute-0 sudo[434983]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:52 compute-0 sudo[435160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:52 compute-0 sudo[435160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:52 compute-0 sudo[435160]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:52 compute-0 sudo[435185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:03:52 compute-0 sudo[435185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:52 compute-0 sudo[435185]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.969053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232969103, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1009, "num_deletes": 251, "total_data_size": 1362392, "memory_usage": 1388152, "flush_reason": "Manual Compaction"}
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232979637, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 861818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33017, "largest_seqno": 34025, "table_properties": {"data_size": 857750, "index_size": 1656, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10744, "raw_average_key_size": 20, "raw_value_size": 848983, "raw_average_value_size": 1654, "num_data_blocks": 74, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900147, "oldest_key_time": 1764900147, "file_creation_time": 1764900232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10709 microseconds, and 4226 cpu microseconds.
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.979759) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 861818 bytes OK
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.979791) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983225) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983250) EVENT_LOG_v1 {"time_micros": 1764900232983242, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1357586, prev total WAL file size 1357586, number of live WAL files 2.
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.984775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(841KB)], [74(9575KB)]
Dec 05 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232984864, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10667080, "oldest_snapshot_seqno": -1}
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5336 keys, 7853840 bytes, temperature: kUnknown
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233055563, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7853840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7819600, "index_size": 19794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 134791, "raw_average_key_size": 25, "raw_value_size": 7724482, "raw_average_value_size": 1447, "num_data_blocks": 818, "num_entries": 5336, "num_filter_entries": 5336, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.055981) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7853840 bytes
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.059195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.7 rd, 110.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(21.5) write-amplify(9.1) OK, records in: 5816, records dropped: 480 output_compression: NoCompression
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.059220) EVENT_LOG_v1 {"time_micros": 1764900233059208, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70798, "compaction_time_cpu_micros": 38914, "output_level": 6, "num_output_files": 1, "total_output_size": 7853840, "num_input_records": 5816, "num_output_records": 5336, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233059624, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233062394, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.984535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.063196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.063201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:03:53 compute-0 ceph-mon[192914]: pgmap v1640: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:53 compute-0 sudo[435210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:53 compute-0 sudo[435210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:53 compute-0 sudo[435210]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:53 compute-0 sudo[435235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:03:53 compute-0 sudo[435235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:53 compute-0 nova_compute[349548]: 2025-12-05 02:03:53.195 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.789774009 +0000 UTC m=+0.087106801 container create 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.752044203 +0000 UTC m=+0.049377045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:53 compute-0 systemd[1]: Started libpod-conmon-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope.
Dec 05 02:03:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.953616232 +0000 UTC m=+0.250949014 container init 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.971505083 +0000 UTC m=+0.268837885 container start 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.978556341 +0000 UTC m=+0.275889103 container attach 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 02:03:53 compute-0 practical_engelbart[435316]: 167 167
Dec 05 02:03:53 compute-0 systemd[1]: libpod-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope: Deactivated successfully.
Dec 05 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.985754512 +0000 UTC m=+0.283087314 container died 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 05 02:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0800ec38cb51fe045abc6144b085b243d26d797ef22e79d6deecd3d13a776926-merged.mount: Deactivated successfully.
Dec 05 02:03:54 compute-0 podman[435300]: 2025-12-05 02:03:54.063495351 +0000 UTC m=+0.360828123 container remove 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:54 compute-0 systemd[1]: libpod-conmon-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope: Deactivated successfully.
Dec 05 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.331567965 +0000 UTC m=+0.087039340 container create ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.298830248 +0000 UTC m=+0.054301673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:54 compute-0 systemd[1]: Started libpod-conmon-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope.
Dec 05 02:03:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.459291585 +0000 UTC m=+0.214763010 container init ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.484743289 +0000 UTC m=+0.240214644 container start ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.489151272 +0000 UTC m=+0.244622857 container attach ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:55 compute-0 ceph-mon[192914]: pgmap v1641: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]: {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     "0": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "devices": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "/dev/loop3"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             ],
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_name": "ceph_lv0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_size": "21470642176",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "name": "ceph_lv0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "tags": {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_name": "ceph",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.crush_device_class": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.encrypted": "0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_id": "0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.vdo": "0"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             },
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "vg_name": "ceph_vg0"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         }
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     ],
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     "1": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "devices": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "/dev/loop4"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             ],
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_name": "ceph_lv1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_size": "21470642176",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "name": "ceph_lv1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "tags": {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_name": "ceph",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.crush_device_class": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.encrypted": "0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_id": "1",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.vdo": "0"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             },
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "vg_name": "ceph_vg1"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         }
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     ],
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     "2": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "devices": [
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "/dev/loop5"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             ],
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_name": "ceph_lv2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_size": "21470642176",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "name": "ceph_lv2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "tags": {
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.cluster_name": "ceph",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.crush_device_class": "",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.encrypted": "0",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osd_id": "2",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:                 "ceph.vdo": "0"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             },
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "type": "block",
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:             "vg_name": "ceph_vg2"
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:         }
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]:     ]
Dec 05 02:03:55 compute-0 compassionate_williamson[435355]: }
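[annotation] The JSON blob above, emitted by the one-shot cephadm container compassionate_williamson, has the shape of "ceph-volume lvm list --format json" output: a map of OSD id to the logical volumes backing that OSD, with the ceph.* metadata duplicated as a flat lv_tags string and as a parsed tags object. A minimal sketch of consuming it (the capture file name is hypothetical):

    import json

    # Hypothetical capture of the listing printed above.
    with open("ceph_volume_lvm_list.json") as f:
        lvm_list = json.load(f)  # {"0": [ {...} ], "1": [ {...} ], "2": [ {...} ]}

    # Map each OSD id to the path of its "block" LV, e.g. "/dev/ceph_vg0/ceph_lv0".
    osd_block = {
        osd_id: lv["lv_path"]
        for osd_id, lvs in lvm_list.items()
        for lv in lvs
        if lv["tags"].get("ceph.type") == "block"
    }
    print(osd_block)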
Dec 05 02:03:55 compute-0 systemd[1]: libpod-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope: Deactivated successfully.
Dec 05 02:03:55 compute-0 podman[435339]: 2025-12-05 02:03:55.317328706 +0000 UTC m=+1.072800091 container died ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132-merged.mount: Deactivated successfully.
Dec 05 02:03:55 compute-0 podman[435339]: 2025-12-05 02:03:55.442135504 +0000 UTC m=+1.197606859 container remove ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:03:55 compute-0 systemd[1]: libpod-conmon-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope: Deactivated successfully.
Dec 05 02:03:55 compute-0 sudo[435235]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:55 compute-0 sudo[435375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:55 compute-0 sudo[435375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:55 compute-0 sudo[435375]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:55 compute-0 sudo[435400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:03:55 compute-0 sudo[435400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:55 compute-0 sudo[435400]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:55 compute-0 sudo[435425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:55 compute-0 sudo[435425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:55 compute-0 sudo[435425]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:56 compute-0 sudo[435450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:03:56 compute-0 sudo[435450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.197 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.198 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.567618701 +0000 UTC m=+0.082901865 container create 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.532574879 +0000 UTC m=+0.047858153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:56 compute-0 systemd[1]: Started libpod-conmon-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope.
Dec 05 02:03:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.710062763 +0000 UTC m=+0.225345937 container init 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.726315699 +0000 UTC m=+0.241598893 container start 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.733388827 +0000 UTC m=+0.248672011 container attach 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 02:03:56 compute-0 jovial_maxwell[435526]: 167 167
Dec 05 02:03:56 compute-0 systemd[1]: libpod-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope: Deactivated successfully.
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.741414822 +0000 UTC m=+0.256698016 container died 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b47f1683717c53ebbd1ec33ea359394fd5cf61583369240b8adc3c6b3bb426d-merged.mount: Deactivated successfully.
Dec 05 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.813524263 +0000 UTC m=+0.328807447 container remove 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 02:03:56 compute-0 systemd[1]: libpod-conmon-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope: Deactivated successfully.
Dec 05 02:03:56 compute-0 nova_compute[349548]: 2025-12-05 02:03:56.872 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:57 compute-0 ceph-mon[192914]: pgmap v1642: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.112896905 +0000 UTC m=+0.097441042 container create 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.077802481 +0000 UTC m=+0.062346678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:03:57 compute-0 systemd[1]: Started libpod-conmon-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope.
Dec 05 02:03:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.297569721 +0000 UTC m=+0.282113828 container init 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.313839317 +0000 UTC m=+0.298383444 container start 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.320493904 +0000 UTC m=+0.305038021 container attach 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:03:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:03:58 compute-0 nova_compute[349548]: 2025-12-05 02:03:58.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:03:58 compute-0 cool_mclean[435568]: {
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_id": 0,
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "type": "bluestore"
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     },
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_id": 1,
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "type": "bluestore"
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     },
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_id": 2,
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:03:58 compute-0 cool_mclean[435568]:         "type": "bluestore"
Dec 05 02:03:58 compute-0 cool_mclean[435568]:     }
Dec 05 02:03:58 compute-0 cool_mclean[435568]: }
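[annotation] This second blob is the result of the cephadm wrapper invocation logged at 02:03:56 ("ceph-volume --fsid ... -- raw list --format json"), run inside the cool_mclean container. Unlike the lvm listing, it is keyed by osd_uuid rather than osd_id, and it reports the device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN) of the same logical volumes. A sketch, assuming both listings were captured to hypothetical files, of cross-checking that the two views agree:

    import json

    lvm_list = json.load(open("ceph_volume_lvm_list.json"))  # keyed by osd_id
    raw_list = json.load(open("ceph_volume_raw_list.json"))  # keyed by osd_uuid

    # Every OSD fsid in the raw listing should appear as a ceph.osd_fsid tag in
    # the lvm listing, and the osd_id fields should agree.
    lvm_by_fsid = {
        lv["tags"]["ceph.osd_fsid"]: int(lv["tags"]["ceph.osd_id"])
        for lvs in lvm_list.values()
        for lv in lvs
    }
    for osd_uuid, info in raw_list.items():
        assert info["type"] == "bluestore"
        assert lvm_by_fsid[osd_uuid] == info["osd_id"]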
Dec 05 02:03:58 compute-0 systemd[1]: libpod-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Deactivated successfully.
Dec 05 02:03:58 compute-0 systemd[1]: libpod-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Consumed 1.132s CPU time.
Dec 05 02:03:58 compute-0 podman[435552]: 2025-12-05 02:03:58.455031953 +0000 UTC m=+1.439576100 container died 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9-merged.mount: Deactivated successfully.
Dec 05 02:03:58 compute-0 podman[435552]: 2025-12-05 02:03:58.538623966 +0000 UTC m=+1.523168073 container remove 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:03:58 compute-0 systemd[1]: libpod-conmon-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Deactivated successfully.
Dec 05 02:03:58 compute-0 sudo[435450]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:03:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:03:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47d7979e-1056-42c7-9555-d34f85cd4943 does not exist
Dec 05 02:03:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4815d197-d3a4-4139-a64f-4a77ac3c0993 does not exist
Dec 05 02:03:58 compute-0 sudo[435612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:03:58 compute-0 sudo[435612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:58 compute-0 sudo[435612]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:58 compute-0 sudo[435637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:03:58 compute-0 sudo[435637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:03:58 compute-0 sudo[435637]: pam_unix(sudo:session): session closed for user root
Dec 05 02:03:59 compute-0 ceph-mon[192914]: pgmap v1643: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:03:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:03:59 compute-0 podman[158197]: time="2025-12-05T02:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:03:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:03:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec 05 02:03:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:00 compute-0 podman[435663]: 2025-12-05 02:04:00.711784889 +0000 UTC m=+0.110575610 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:04:00 compute-0 podman[435669]: 2025-12-05 02:04:00.736371908 +0000 UTC m=+0.129700896 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 05 02:04:00 compute-0 podman[435662]: 2025-12-05 02:04:00.737184181 +0000 UTC m=+0.143414461 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:04:00 compute-0 podman[435664]: 2025-12-05 02:04:00.763985923 +0000 UTC m=+0.155499540 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 02:04:01 compute-0 ceph-mon[192914]: pgmap v1644: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:04:01 compute-0 nova_compute[349548]: 2025-12-05 02:04:01.875 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:03 compute-0 ceph-mon[192914]: pgmap v1645: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:03 compute-0 nova_compute[349548]: 2025-12-05 02:04:03.202 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:05 compute-0 ceph-mon[192914]: pgmap v1646: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:06 compute-0 nova_compute[349548]: 2025-12-05 02:04:06.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:07 compute-0 ceph-mon[192914]: pgmap v1647: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:08 compute-0 nova_compute[349548]: 2025-12-05 02:04:08.204 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:09 compute-0 ceph-mon[192914]: pgmap v1648: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:11 compute-0 ceph-mon[192914]: pgmap v1649: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:11 compute-0 nova_compute[349548]: 2025-12-05 02:04:11.883 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:13 compute-0 nova_compute[349548]: 2025-12-05 02:04:13.208 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:13 compute-0 ceph-mon[192914]: pgmap v1650: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 02:04:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:15 compute-0 ceph-mon[192914]: pgmap v1651: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:04:16
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Dec 05 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:04:16 compute-0 podman[435748]: 2025-12-05 02:04:16.708660761 +0000 UTC m=+0.113424950 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 02:04:16 compute-0 podman[435749]: 2025-12-05 02:04:16.735180734 +0000 UTC m=+0.135015475 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:04:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 02:04:16 compute-0 nova_compute[349548]: 2025-12-05 02:04:16.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:17 compute-0 ceph-mon[192914]: pgmap v1652: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:04:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:18 compute-0 nova_compute[349548]: 2025-12-05 02:04:18.211 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:18 compute-0 ceph-mon[192914]: pgmap v1653: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:18 compute-0 podman[435790]: 2025-12-05 02:04:18.708157807 +0000 UTC m=+0.098573994 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:04:18 compute-0 podman[435789]: 2025-12-05 02:04:18.713654311 +0000 UTC m=+0.122322420 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:04:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:20 compute-0 podman[435828]: 2025-12-05 02:04:20.694563155 +0000 UTC m=+0.090974881 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 02:04:20 compute-0 ceph-mon[192914]: pgmap v1654: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:21 compute-0 nova_compute[349548]: 2025-12-05 02:04:21.890 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:04:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7369 writes, 28K keys, 7369 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7369 writes, 1658 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 796 writes, 1996 keys, 796 commit groups, 1.0 writes per commit group, ingest: 1.23 MB, 0.00 MB/s
                                            Interval WAL: 796 writes, 362 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
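These periodic RocksDB dumps (one per ceph-osd, every 600 s per the interval line) are internally consistent: writes-per-sync is cumulative writes divided by WAL syncs, and the MB/s figures are ingest divided by uptime. A quick check against the cumulative numbers above:

    # Figures copied from the DB Stats block above (ceph-osd[206647]).
    writes, syncs = 7369, 1658
    ingest_gb, uptime_s = 0.02, 3000.1

    print(round(writes / syncs, 2))               # 4.44 writes per sync
    print(round(ingest_gb * 1024 / uptime_s, 2))  # 0.01 MB/s ingest rate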
Dec 05 02:04:22 compute-0 ceph-mon[192914]: pgmap v1655: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:23 compute-0 nova_compute[349548]: 2025-12-05 02:04:23.214 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:25 compute-0 ceph-mon[192914]: pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:26 compute-0 nova_compute[349548]: 2025-12-05 02:04:26.893 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
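Each pg_autoscaler pass prints, per pool, a raw PG target of usage_ratio x bias x (OSD count x mon_target_pg_per_osd); the trailing 64411926528 in the effective_target_ratio lines is the ~60 GiB root capacity the usage ratios are measured against. With this host's three OSDs and the default mon_target_pg_per_osd of 100 (both assumptions, but the resulting multiplier of 300 reproduces every figure above to within float rounding), the arithmetic is:

    # Usage ratios and biases copied from the autoscaler lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.00110425264130364,    1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }
    OSDS, TARGET_PG_PER_OSD = 3, 100  # assumed deployment defaults

    for name, (usage, bias) in pools.items():
        raw = usage * bias * OSDS * TARGET_PG_PER_OSD
        print(f"{name}: pg target {raw}")
    # The module then quantizes to a power of two and applies per-pool
    # minimums and hysteresis, which is why e.g. 'vms' stays at 32.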
Dec 05 02:04:27 compute-0 ceph-mon[192914]: pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:28 compute-0 nova_compute[349548]: 2025-12-05 02:04:28.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:29 compute-0 ceph-mon[192914]: pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:04:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.2 total, 600.0 interval
                                            Cumulative writes: 8925 writes, 35K keys, 8925 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8925 writes, 2023 syncs, 4.41 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 851 writes, 2760 keys, 851 commit groups, 1.0 writes per commit group, ingest: 1.82 MB, 0.00 MB/s
                                            Interval WAL: 851 writes, 368 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:04:29 compute-0 podman[158197]: time="2025-12-05T02:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:04:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:04:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
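These access-log lines come from the podman system service (pid 158197) answering libpod REST calls over its unix socket; the "@" in the client field is how it renders a unix-socket peer, and the caller here is evidently a metrics collector listing containers and pulling stats. A minimal sketch of issuing the same containers/json request from Python; the rootful socket path is an assumption:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed rootful default
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])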
Dec 05 02:04:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:30 compute-0 nova_compute[349548]: 2025-12-05 02:04:30.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:30 compute-0 nova_compute[349548]: 2025-12-05 02:04:30.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:04:31 compute-0 ceph-mon[192914]: pgmap v1659: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:31 compute-0 nova_compute[349548]: 2025-12-05 02:04:31.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:04:31 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:04:31 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:04:31 compute-0 openstack_network_exporter[366555]: 
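The exporter errors above are the expected noise when openstack_network_exporter probes daemons this compute node does not run: ovn-northd lives on the controller, and the dpif-netdev/* appctl calls only answer for userspace (DPDK) datapaths, not the kernel datapath in use here. The exporter locates daemons through their .ctl control sockets, so a simple existence check reproduces its failure mode; the OVS rundir is the standard one, and the OVN path is inferred from the /var/lib/openvswitch/ovn:/run/ovn mount in the ovn_controller config a few lines below:

    import glob

    # Control-socket patterns the exporter depends on (paths assumed from
    # the standard OVS rundir and this host's OVN volume mount).
    patterns = [
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
    ]
    for pattern in patterns:
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")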
Dec 05 02:04:31 compute-0 podman[435849]: 2025-12-05 02:04:31.734066361 +0000 UTC m=+0.140455187 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:04:31 compute-0 podman[435859]: 2025-12-05 02:04:31.739455782 +0000 UTC m=+0.132893745 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public)
Dec 05 02:04:31 compute-0 podman[435848]: 2025-12-05 02:04:31.746046287 +0000 UTC m=+0.159062559 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 02:04:31 compute-0 podman[435850]: 2025-12-05 02:04:31.772503169 +0000 UTC m=+0.161847508 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 05 02:04:31 compute-0 nova_compute[349548]: 2025-12-05 02:04:31.896 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:32 compute-0 nova_compute[349548]: 2025-12-05 02:04:32.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:32 compute-0 nova_compute[349548]: 2025-12-05 02:04:32.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:04:33 compute-0 ceph-mon[192914]: pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.128 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.129 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.223 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:04:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667686660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.612 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
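The resource tracker gets its storage view by shelling out to the ceph CLI (the Running cmd line above, executed through oslo_concurrency.processutils) and parsing the JSON it returns. A minimal sketch of the same call and the headline fields; the command line is copied from the log, the field handling is illustrative:

    import json
    import subprocess

    # Same command the resource tracker runs (see the log line above).
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"] / 2**30, "GiB total")
    print(stats["total_avail_bytes"] / 2**30, "GiB avail")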
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.725 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.735 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.735 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.737 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:04:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3667686660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.318 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3622MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.411 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.480 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:04:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:04:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921157223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.028 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.042 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.071 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.074 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
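The audit that just released "compute_resources" reported the raw view (8 vCPUs, 7680 MB RAM, 59 GB disk) and confirmed the placement inventory unchanged. Placement turns such an inventory into schedulable capacity as (total - reserved) x allocation_ratio per resource class, so this host can overcommit to 32 vCPUs but offers only ~52 GB of disk. A worked check with the numbers from the set_inventory_for_provider line:

    # Inventory copied from the nova.scheduler.client.report line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2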
Dec 05 02:04:35 compute-0 ceph-mon[192914]: pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1921157223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:04:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:04:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7411 writes, 29K keys, 7411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7411 writes, 1632 syncs, 4.54 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 761 writes, 2337 keys, 761 commit groups, 1.0 writes per commit group, ingest: 1.65 MB, 0.00 MB/s
                                            Interval WAL: 761 writes, 334 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:04:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:36 compute-0 nova_compute[349548]: 2025-12-05 02:04:36.900 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:37 compute-0 ceph-mon[192914]: pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 02:04:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.048 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.049 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.227 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.338 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:04:38.343731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:04:38.346314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.373 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.374 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.374 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.406 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.407 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.408 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
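Editor's note: the capacity samples above can be cross-checked against the discovery record at the top of this cycle. 1073741824 bytes is exactly 1 GiB, matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks, so each instance exposes two 1 GiB devices; the much smaller third device (485376 or 583680 bytes) is presumably a config-drive-sized volume, though that attribution is an assumption, not something the log states.

    # Quick arithmetic check on the disk.device.capacity values above.
    assert 1073741824 == 1 * 1024**3        # 1 GiB root disk, 1 GiB ephemeral disk
    print(485376 / 1024, 583680 / 1024)     # third device: 474.0 and 570.0 KiB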
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:04:38.410427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:04:38.413541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.504 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.506 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.507 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.579 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.579 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.582 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:04:38.581009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:04:38.583457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
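Editor's note: disk.device.read.latency is a cumulative counter in nanoseconds and disk.device.read.requests a cumulative request count, so dividing one by the other gives the mean latency per read. Pairing the three values per device assumes the device ordering is the same in both meters, which the log itself does not guarantee:

    # Mean read latency per device for instance b69a0e24-..., values copied from the log.
    latency_ns = [2043636416, 325714825, 190759187]   # disk.device.read.latency
    requests = [840, 173, 109]                        # disk.device.read.requests
    for dev, (ns, n) in enumerate(zip(latency_ns, requests)):
        print(f"device {dev}: {ns / n / 1e6:.2f} ms mean read latency")
    # -> about 2.43 ms, 1.88 ms and 1.75 ms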
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:04:38.586096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:04:38.588175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:04:38.590310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.615 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.637 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
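Editor's note: power.state reports the libvirt domain state code, so volume 1 means a running domain, consistent with the 'OS-EXT-STS:vm_state': 'running' field in the discovery payload. The virDomainState enum for reference:

    # libvirt virDomainState codes; power.state volume 1 above maps to "running".
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATE[1])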
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:04:38.638552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.640 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.640 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.643 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.643 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:04:38.641699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.646 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:04:38.645745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.650 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.655 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.657 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:04:38.656715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:04:38.659310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.661 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.661 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:04:38.663324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.666 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:04:38.665584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.666 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:04:38.668332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.669 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
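Editor's note: the .delta variant of a network meter is derived by subtracting the previous cycle's cumulative reading from the current one, which is why it can be 0 for one instance and 70 for the other within the same cycle. A minimal sketch; the previous reading of 2356 below is an assumed cached value, not taken from this log:

    def bytes_delta(curr_total: int, prev_total: int) -> int:
        # Clamp at zero in case the cumulative counter reset between cycles.
        return max(curr_total - prev_total, 0)

    print(bytes_delta(2426, 2356))  # -> 70, as reported for instance 3611d2ae-...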
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.672 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:04:38.671241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.675 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:04:38.674627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:04:38.676797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.679 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:04:38.678784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 49980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 43650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:04:38.680721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:04:38.682553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:04:38.684381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
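The block above is one complete polling iteration: for each meter the agent logs "Polling pollster <name>" at INFO, per-instance sample volumes at DEBUG, and "Finished polling pollster <name>" at INFO. A minimal sketch that pairs those INFO lines on stdin to estimate per-pollster latency; it assumes journal lines in exactly the format shown here, and the script and regexes are illustrative, not part of ceilometer:

    import re
    import sys
    from datetime import datetime

    # Match the INFO lines emitted by ceilometer.polling.manager, as seen above.
    START = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO '
                       r'ceilometer\.polling\.manager \[-\] Polling pollster (\S+)')
    END = re.compile(r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO '
                     r'ceilometer\.polling\.manager \[-\] Finished polling pollster (\S+)')

    def ts(s):
        return datetime.strptime(s, '%Y-%m-%d %H:%M:%S.%f')

    started = {}
    for line in sys.stdin:
        if m := START.search(line):
            started[m.group(2)] = ts(m.group(1))
        elif (m := END.search(line)) and m.group(2) in started:
            delta = ts(m.group(1)) - started.pop(m.group(2))
            print(f'{m.group(2)}: {delta.total_seconds() * 1000:.1f} ms')

Fed this section, it would report e.g. memory.usage at roughly 3 ms (started 02:04:38.670, finished 02:04:38.673).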
Dec 05 02:04:39 compute-0 ceph-mon[192914]: pgmap v1663: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:41 compute-0 ceph-mon[192914]: pgmap v1664: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:41 compute-0 nova_compute[349548]: 2025-12-05 02:04:41.903 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:43 compute-0 ceph-mon[192914]: pgmap v1665: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:43 compute-0 nova_compute[349548]: 2025-12-05 02:04:43.230 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:45 compute-0 ceph-mon[192914]: pgmap v1666: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:04:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:04:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:04:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:04:45 compute-0 sshd-session[433617]: Received disconnect from 38.102.83.179 port 45734:11: disconnected by user
Dec 05 02:04:45 compute-0 sshd-session[433617]: Disconnected from user zuul 38.102.83.179 port 45734
Dec 05 02:04:45 compute-0 sshd-session[433614]: pam_unix(sshd:session): session closed for user zuul
Dec 05 02:04:45 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Dec 05 02:04:45 compute-0 systemd[1]: session-63.scope: Consumed 5.079s CPU time.
Dec 05 02:04:45 compute-0 systemd-logind[792]: Session 63 logged out. Waiting for processes to exit.
Dec 05 02:04:45 compute-0 systemd-logind[792]: Removed session 63.
Dec 05 02:04:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:04:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
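The audit entries above show client.openstack (connecting from 192.168.122.10) polling cluster capacity with the "df" and "osd pool get-quota" mon commands. The same commands can be sent programmatically with the python3-rados bindings; a sketch, where the conffile path and keyring setup are assumptions about the deployment, while the client name and the pool name "volumes" come from the log:

    import json
    import rados  # python3-rados, shipped with Ceph

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        # Mirrors mon_command({"prefix":"df","format":"json"}) from the audit log.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        print(json.loads(out)['stats'])
        # Mirrors mon_command({"prefix":"osd pool get-quota", ...}).
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd pool get-quota', 'pool': 'volumes',
                        'format': 'json'}), b'')
        print(json.loads(out))
    finally:
        cluster.shutdown()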
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:04:46 compute-0 nova_compute[349548]: 2025-12-05 02:04:46.905 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:47 compute-0 ceph-mon[192914]: pgmap v1667: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:47 compute-0 podman[435979]: 2025-12-05 02:04:47.727649288 +0000 UTC m=+0.128413571 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:04:47 compute-0 podman[435980]: 2025-12-05 02:04:47.730393575 +0000 UTC m=+0.125518890 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:04:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:48 compute-0 nova_compute[349548]: 2025-12-05 02:04:48.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:49 compute-0 ceph-mon[192914]: pgmap v1668: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:49 compute-0 podman[436020]: 2025-12-05 02:04:49.685811004 +0000 UTC m=+0.101395733 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 05 02:04:49 compute-0 podman[436021]: 2025-12-05 02:04:49.699357944 +0000 UTC m=+0.096559698 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 05 02:04:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:51 compute-0 ceph-mon[192914]: pgmap v1669: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:51 compute-0 podman[436058]: 2025-12-05 02:04:51.711160152 +0000 UTC m=+0.115207890 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 02:04:51 compute-0 nova_compute[349548]: 2025-12-05 02:04:51.908 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:53 compute-0 nova_compute[349548]: 2025-12-05 02:04:53.238 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:53 compute-0 ceph-mon[192914]: pgmap v1670: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.308014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293308073, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 717, "num_deletes": 251, "total_data_size": 899822, "memory_usage": 912984, "flush_reason": "Manual Compaction"}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293319833, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 891479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34026, "largest_seqno": 34742, "table_properties": {"data_size": 887751, "index_size": 1572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8318, "raw_average_key_size": 19, "raw_value_size": 880280, "raw_average_value_size": 2042, "num_data_blocks": 70, "num_entries": 431, "num_filter_entries": 431, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900233, "oldest_key_time": 1764900233, "file_creation_time": 1764900293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12216 microseconds, and 6073 cpu microseconds.
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.320236) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 891479 bytes OK
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.320262) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323133) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323156) EVENT_LOG_v1 {"time_micros": 1764900293323149, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 896136, prev total WAL file size 896136, number of live WAL files 2.
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.324155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(870KB)], [77(7669KB)]
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293324217, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 8745319, "oldest_snapshot_seqno": -1}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5254 keys, 6998817 bytes, temperature: kUnknown
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293375607, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 6998817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6965915, "index_size": 18648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 133733, "raw_average_key_size": 25, "raw_value_size": 6872953, "raw_average_value_size": 1308, "num_data_blocks": 763, "num_entries": 5254, "num_filter_entries": 5254, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.376138) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 6998817 bytes
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.379111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.7 rd, 135.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.5 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(17.7) write-amplify(7.9) OK, records in: 5767, records dropped: 513 output_compression: NoCompression
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.379150) EVENT_LOG_v1 {"time_micros": 1764900293379132, "job": 44, "event": "compaction_finished", "compaction_time_micros": 51536, "compaction_time_cpu_micros": 29906, "output_level": 6, "num_output_files": 1, "total_output_size": 6998817, "num_input_records": 5767, "num_output_records": 5254, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293379719, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293383326, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
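Each rocksdb EVENT_LOG_v1 entry above carries a JSON payload after the marker, so the mon store's flush and compaction activity can be summarized mechanically. A sketch that reads journal lines on stdin; the regex covers the two prefix variants seen here (with and without "(Original Log Time ...)"):

    import json
    import re
    import sys

    EVENT = re.compile(r'rocksdb: (?:\(Original Log Time [^)]*\) )?EVENT_LOG_v1 (\{.*\})')

    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev['event'] == 'flush_started':
            print(f"job {ev['job']}: flush of {ev['num_entries']} entries, "
                  f"{ev['total_data_size']} bytes ({ev['flush_reason']})")
        elif ev['event'] == 'compaction_finished':
            print(f"job {ev['job']}: compacted to {ev['total_output_size']} bytes "
                  f"in {ev['compaction_time_micros'] / 1e6:.3f}s")

On this section it would report job 43's flush of 717 entries (899822 bytes, Manual Compaction) and job 44's compaction to 6998817 bytes in 0.052s.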
Dec 05 02:04:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:55 compute-0 ceph-mon[192914]: pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.198 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.199 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:04:56 compute-0 nova_compute[349548]: 2025-12-05 02:04:56.912 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:57 compute-0 ceph-mon[192914]: pgmap v1672: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:04:58 compute-0 nova_compute[349548]: 2025-12-05 02:04:58.241 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:04:58 compute-0 sudo[436077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:04:58 compute-0 sudo[436077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:04:58 compute-0 sudo[436077]: pam_unix(sudo:session): session closed for user root
Dec 05 02:04:59 compute-0 sudo[436102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:04:59 compute-0 sudo[436102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:04:59 compute-0 sudo[436102]: pam_unix(sudo:session): session closed for user root
Dec 05 02:04:59 compute-0 sudo[436127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:04:59 compute-0 sudo[436127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:04:59 compute-0 sudo[436127]: pam_unix(sudo:session): session closed for user root
Dec 05 02:04:59 compute-0 ceph-mon[192914]: pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:04:59 compute-0 sudo[436152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:04:59 compute-0 sudo[436152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:04:59 compute-0 podman[158197]: time="2025-12-05T02:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:04:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:04:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec 05 02:04:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:00 compute-0 sudo[436152]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 98babc16-3d08-46ab-99e5-3fdfd00febbc does not exist
Dec 05 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81e1dbda-15d2-4075-a9aa-20f0dbfcd99b does not exist
Dec 05 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47c69e09-0302-47c5-b4bc-f37689004c16 does not exist
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:05:00 compute-0 sudo[436207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:00 compute-0 sudo[436207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:00 compute-0 sudo[436207]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
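Each handle_command/audit pair above is the mgr dispatching a JSON-encoded mon command to the monitor. The same commands can be issued from Python through the librados bindings; a minimal sketch, assuming python3-rados and a local client.admin keyring:

import json
import rados

# Connect using the local ceph.conf and default admin credentials.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same command the mgr dispatched above: list destroyed OSDs as JSON.
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret == 0:
        print(json.loads(outbuf or b"{}"))
    else:
        print(f"mon_command failed: {ret} {outs}")
finally:
    cluster.shutdown()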
Dec 05 02:05:00 compute-0 sudo[436232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:05:00 compute-0 sudo[436232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:00 compute-0 sudo[436232]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:00 compute-0 sudo[436257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:00 compute-0 sudo[436257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:00 compute-0 sudo[436257]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:00 compute-0 sudo[436282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:05:00 compute-0 sudo[436282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.226374524 +0000 UTC m=+0.089965323 container create f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.192181985 +0000 UTC m=+0.055772854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:01 compute-0 systemd[1]: Started libpod-conmon-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope.
Dec 05 02:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:01 compute-0 ceph-mon[192914]: pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.370518074 +0000 UTC m=+0.234108883 container init f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.384607549 +0000 UTC m=+0.248198328 container start f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.390390641 +0000 UTC m=+0.253981520 container attach f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:05:01 compute-0 heuristic_williamson[436359]: 167 167
Dec 05 02:05:01 compute-0 systemd[1]: libpod-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope: Deactivated successfully.
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.395143304 +0000 UTC m=+0.258734113 container died f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 05 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a8e9179686cb910cbba9e50c77d271ba87690b083f7f38178aee67fcf87b0a-merged.mount: Deactivated successfully.
Dec 05 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.467557594 +0000 UTC m=+0.331148373 container remove f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:05:01 compute-0 systemd[1]: libpod-conmon-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope: Deactivated successfully.
Dec 05 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.673793515 +0000 UTC m=+0.057507433 container create b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 02:05:01 compute-0 systemd[1]: Started libpod-conmon-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope.
Dec 05 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.64971315 +0000 UTC m=+0.033427078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:01 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
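The kernel lines above flag that these XFS-backed overlay mounts lack the bigtime feature, so their inode timestamps cap at 2038-01-19 (0x7fffffff seconds since the epoch). A quick check for the feature on a given XFS mount point; the path used here is illustrative:

import re
import subprocess

# xfs_info prints the filesystem geometry, including "bigtime=0" or
# "bigtime=1" on recent xfsprogs. The mount point is an assumption.
info = subprocess.run(
    ["xfs_info", "/var/lib/containers"],
    check=True, capture_output=True, text=True,
).stdout
m = re.search(r"bigtime=(\d)", info)
print("bigtime enabled" if m and m.group(1) == "1"
      else "timestamps capped at 2038")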
Dec 05 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.818095198 +0000 UTC m=+0.201809126 container init b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.842177943 +0000 UTC m=+0.225891851 container start b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.849162829 +0000 UTC m=+0.232876737 container attach b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:05:01 compute-0 nova_compute[349548]: 2025-12-05 02:05:01.916 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:01 compute-0 podman[436399]: 2025-12-05 02:05:01.922380041 +0000 UTC m=+0.090349373 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:05:01 compute-0 podman[436398]: 2025-12-05 02:05:01.930991533 +0000 UTC m=+0.117206096 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:05:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:01 compute-0 podman[436400]: 2025-12-05 02:05:01.953260607 +0000 UTC m=+0.127643019 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:05:01 compute-0 podman[436401]: 2025-12-05 02:05:01.988047692 +0000 UTC m=+0.156639281 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
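The health_status=healthy events above come from podman's periodic healthcheck timers running the test command embedded in each container's config_data. The same check can be triggered on demand; exit code 0 means healthy:

import subprocess

# Run the configured healthcheck for the node_exporter container once.
result = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
print("healthy" if result.returncode == 0
      else f"unhealthy (rc={result.returncode})")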
Dec 05 02:05:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:03 compute-0 quizzical_antonelli[436395]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:05:03 compute-0 quizzical_antonelli[436395]: --> relative data size: 1.0
Dec 05 02:05:03 compute-0 quizzical_antonelli[436395]: --> All data devices are unavailable
Dec 05 02:05:03 compute-0 systemd[1]: libpod-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Deactivated successfully.
Dec 05 02:05:03 compute-0 podman[436381]: 2025-12-05 02:05:03.124964069 +0000 UTC m=+1.508677987 container died b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:05:03 compute-0 systemd[1]: libpod-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Consumed 1.204s CPU time.
Dec 05 02:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79-merged.mount: Deactivated successfully.
Dec 05 02:05:03 compute-0 podman[436381]: 2025-12-05 02:05:03.212612296 +0000 UTC m=+1.596326214 container remove b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 02:05:03 compute-0 systemd[1]: libpod-conmon-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Deactivated successfully.
Dec 05 02:05:03 compute-0 nova_compute[349548]: 2025-12-05 02:05:03.244 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:03 compute-0 sudo[436282]: pam_unix(sudo:session): session closed for user root
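The `lvm batch` run above exits after "All data devices are unavailable" because all three logical volumes already carry prepared OSDs, so there is nothing left to create. ceph-volume's `--report` flag shows that decision without touching the devices; a sketch reusing the logged invocation (the orchestrator's `--config-json -` stdin and CEPH_VOLUME_OSDSPEC_AFFINITY env flag are omitted here):

import subprocess

CEPHADM = ("/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
LVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

# Same ceph-volume passthrough as the log, with --report added so it only
# prints the planned OSD layout instead of creating anything.
subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--",
     "lvm", "batch", "--no-auto", *LVS, "--report"],
    check=True,
)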
Dec 05 02:05:03 compute-0 sudo[436521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:03 compute-0 ceph-mon[192914]: pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:03 compute-0 sudo[436521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:03 compute-0 sudo[436521]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:03 compute-0 sudo[436546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:05:03 compute-0 sudo[436546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:03 compute-0 sudo[436546]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:03 compute-0 sudo[436571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:03 compute-0 sudo[436571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:03 compute-0 sudo[436571]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:03 compute-0 sudo[436596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:05:03 compute-0 sudo[436596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.346039996 +0000 UTC m=+0.094892331 container create 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:05:04 compute-0 ceph-mon[192914]: pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.296860907 +0000 UTC m=+0.045713242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:04 compute-0 systemd[1]: Started libpod-conmon-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope.
Dec 05 02:05:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.481236075 +0000 UTC m=+0.230088460 container init 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.498368795 +0000 UTC m=+0.247221120 container start 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:05:04 compute-0 agitated_swirles[436677]: 167 167
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.508274313 +0000 UTC m=+0.257126648 container attach 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:05:04 compute-0 systemd[1]: libpod-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope: Deactivated successfully.
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.510352771 +0000 UTC m=+0.259205146 container died 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 05 02:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66d77fff5873d2bc35c7624a1c5debf9e7dfacd3384e04675cfba20e03838fe-merged.mount: Deactivated successfully.
Dec 05 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.582138763 +0000 UTC m=+0.330991098 container remove 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 02:05:04 compute-0 systemd[1]: libpod-conmon-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope: Deactivated successfully.
Dec 05 02:05:04 compute-0 podman[436700]: 2025-12-05 02:05:04.864988931 +0000 UTC m=+0.077575995 container create 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:05:04 compute-0 podman[436700]: 2025-12-05 02:05:04.836575725 +0000 UTC m=+0.049162789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:04 compute-0 systemd[1]: Started libpod-conmon-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope.
Dec 05 02:05:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.042002903 +0000 UTC m=+0.254590017 container init 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.070019478 +0000 UTC m=+0.282606542 container start 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.076799898 +0000 UTC m=+0.289386962 container attach 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]: {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     "0": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "devices": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "/dev/loop3"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             ],
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_name": "ceph_lv0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_size": "21470642176",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "name": "ceph_lv0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "tags": {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_name": "ceph",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.crush_device_class": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.encrypted": "0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_id": "0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.vdo": "0"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             },
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "vg_name": "ceph_vg0"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         }
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     ],
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     "1": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "devices": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "/dev/loop4"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             ],
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_name": "ceph_lv1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_size": "21470642176",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "name": "ceph_lv1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "tags": {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_name": "ceph",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.crush_device_class": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.encrypted": "0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_id": "1",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.vdo": "0"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             },
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "vg_name": "ceph_vg1"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         }
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     ],
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     "2": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "devices": [
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "/dev/loop5"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             ],
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_name": "ceph_lv2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_size": "21470642176",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "name": "ceph_lv2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "tags": {
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.cluster_name": "ceph",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.crush_device_class": "",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.encrypted": "0",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osd_id": "2",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:                 "ceph.vdo": "0"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             },
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "type": "block",
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:             "vg_name": "ceph_vg2"
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:         }
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]:     ]
Dec 05 02:05:05 compute-0 suspicious_torvalds[436716]: }
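The JSON document printed above (from `ceph-volume lvm list --format json`) keys each OSD id to its logical volume, backing device, and ceph.* tags. A short sketch that reduces it to one line per OSD, assuming the output was captured to a local file named lvm_list.json:

import json

# lvm_list.json is a hypothetical capture of the container output above.
with open("lvm_list.json") as fh:
    lvm_list = json.load(fh)

# Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LVs.
for osd_id, entries in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for lv in entries:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"encrypted={tags['ceph.encrypted']})")

Run against the document above, this prints osd.0 through osd.2 mapped to /dev/ceph_vg0..2 on /dev/loop3..5.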
Dec 05 02:05:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:05 compute-0 systemd[1]: libpod-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope: Deactivated successfully.
Dec 05 02:05:06 compute-0 podman[436725]: 2025-12-05 02:05:06.034279775 +0000 UTC m=+0.042101851 container died 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 02:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30-merged.mount: Deactivated successfully.
Dec 05 02:05:06 compute-0 podman[436725]: 2025-12-05 02:05:06.145451701 +0000 UTC m=+0.153273727 container remove 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:05:06 compute-0 systemd[1]: libpod-conmon-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope: Deactivated successfully.
Dec 05 02:05:06 compute-0 sudo[436596]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:06 compute-0 sudo[436739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:06 compute-0 sudo[436739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:06 compute-0 sudo[436739]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:06 compute-0 sudo[436764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:05:06 compute-0 sudo[436764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:06 compute-0 sudo[436764]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:06 compute-0 sudo[436789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:06 compute-0 sudo[436789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:06 compute-0 sudo[436789]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:06 compute-0 sudo[436814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:05:06 compute-0 sudo[436814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:06 compute-0 nova_compute[349548]: 2025-12-05 02:05:06.919 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:07 compute-0 ceph-mon[192914]: pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.212573522 +0000 UTC m=+0.061737331 container create 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 02:05:07 compute-0 systemd[1]: Started libpod-conmon-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope.
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.197440828 +0000 UTC m=+0.046604657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.352780622 +0000 UTC m=+0.201944521 container init 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.369777638 +0000 UTC m=+0.218941487 container start 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.376593689 +0000 UTC m=+0.225757538 container attach 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:05:07 compute-0 crazy_yonath[436891]: 167 167
Dec 05 02:05:07 compute-0 systemd[1]: libpod-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope: Deactivated successfully.
Dec 05 02:05:07 compute-0 conmon[436891]: conmon 3136e18b461bea9a650b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope/container/memory.events
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.38266771 +0000 UTC m=+0.231831569 container died 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-901af356a2aa863c0a3c299ec45d1255f3591dcdace5a6d7c9ae5bd9718f0d73-merged.mount: Deactivated successfully.
Dec 05 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.4533041 +0000 UTC m=+0.302467929 container remove 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:05:07 compute-0 systemd[1]: libpod-conmon-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope: Deactivated successfully.
Dec 05 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.684305974 +0000 UTC m=+0.073998695 container create cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.653540782 +0000 UTC m=+0.043233523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:05:07 compute-0 systemd[1]: Started libpod-conmon-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope.
Dec 05 02:05:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.847057836 +0000 UTC m=+0.236750577 container init cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.879731282 +0000 UTC m=+0.269424013 container start cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.886300236 +0000 UTC m=+0.275992967 container attach cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:05:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:08 compute-0 nova_compute[349548]: 2025-12-05 02:05:08.247 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:08 compute-0 pedantic_ride[436931]: {
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_id": 0,
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "type": "bluestore"
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     },
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_id": 1,
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "type": "bluestore"
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     },
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_id": 2,
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:         "type": "bluestore"
Dec 05 02:05:08 compute-0 pedantic_ride[436931]:     }
Dec 05 02:05:08 compute-0 pedantic_ride[436931]: }
Dec 05 02:05:08 compute-0 systemd[1]: libpod-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Deactivated successfully.
Dec 05 02:05:08 compute-0 podman[436916]: 2025-12-05 02:05:08.985122715 +0000 UTC m=+1.374815456 container died cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 02:05:08 compute-0 systemd[1]: libpod-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Consumed 1.118s CPU time.
Dec 05 02:05:09 compute-0 ceph-mon[192914]: pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180-merged.mount: Deactivated successfully.
Dec 05 02:05:09 compute-0 podman[436916]: 2025-12-05 02:05:09.075491468 +0000 UTC m=+1.465184179 container remove cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:05:09 compute-0 systemd[1]: libpod-conmon-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Deactivated successfully.
Dec 05 02:05:09 compute-0 sudo[436814]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:05:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:05:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ccb55c4c-4c89-4278-b08b-f0c7389963b2 does not exist
Dec 05 02:05:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8edc4e7b-c24b-4218-85a2-5d95efca1f29 does not exist
Dec 05 02:05:09 compute-0 sudo[436978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:05:09 compute-0 sudo[436978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:09 compute-0 sudo[436978]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:09 compute-0 sudo[437003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:05:09 compute-0 sudo[437003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:05:09 compute-0 sudo[437003]: pam_unix(sudo:session): session closed for user root
Dec 05 02:05:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:05:11 compute-0 ceph-mon[192914]: pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:11 compute-0 nova_compute[349548]: 2025-12-05 02:05:11.922 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:13 compute-0 ceph-mon[192914]: pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:13 compute-0 nova_compute[349548]: 2025-12-05 02:05:13.250 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:15 compute-0 ceph-mon[192914]: pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:05:16
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'vms']
Dec 05 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:05:16 compute-0 nova_compute[349548]: 2025-12-05 02:05:16.926 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:17 compute-0 ceph-mon[192914]: pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:05:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:18 compute-0 nova_compute[349548]: 2025-12-05 02:05:18.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:18 compute-0 podman[437028]: 2025-12-05 02:05:18.709734471 +0000 UTC m=+0.115736515 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:05:18 compute-0 podman[437029]: 2025-12-05 02:05:18.719477904 +0000 UTC m=+0.129813580 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:05:19 compute-0 ceph-mon[192914]: pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:20 compute-0 podman[437069]: 2025-12-05 02:05:20.714440601 +0000 UTC m=+0.118696268 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:05:20 compute-0 podman[437068]: 2025-12-05 02:05:20.729623937 +0000 UTC m=+0.131988951 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:05:21 compute-0 ceph-mon[192914]: pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:21 compute-0 nova_compute[349548]: 2025-12-05 02:05:21.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:22 compute-0 podman[437108]: 2025-12-05 02:05:22.744838022 +0000 UTC m=+0.149494291 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec 05 02:05:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:23 compute-0 nova_compute[349548]: 2025-12-05 02:05:23.256 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:23 compute-0 ceph-mon[192914]: pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:25 compute-0 ceph-mon[192914]: pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.085 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.932 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:05:27 compute-0 ceph-mon[192914]: pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:28 compute-0 nova_compute[349548]: 2025-12-05 02:05:28.259 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:29 compute-0 ceph-mon[192914]: pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:29 compute-0 podman[158197]: time="2025-12-05T02:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:05:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:05:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Dec 05 02:05:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:31 compute-0 nova_compute[349548]: 2025-12-05 02:05:31.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:31 compute-0 ceph-mon[192914]: pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:05:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:05:31 compute-0 nova_compute[349548]: 2025-12-05 02:05:31.936 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:32 compute-0 nova_compute[349548]: 2025-12-05 02:05:32.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:32 compute-0 nova_compute[349548]: 2025-12-05 02:05:32.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:05:32 compute-0 podman[437130]: 2025-12-05 02:05:32.71285748 +0000 UTC m=+0.114330205 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:05:32 compute-0 podman[437129]: 2025-12-05 02:05:32.727188522 +0000 UTC m=+0.133898424 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:05:32 compute-0 podman[437132]: 2025-12-05 02:05:32.731954706 +0000 UTC m=+0.130875980 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Dec 05 02:05:32 compute-0 podman[437131]: 2025-12-05 02:05:32.757511762 +0000 UTC m=+0.156638742 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 02:05:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:33 compute-0 nova_compute[349548]: 2025-12-05 02:05:33.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:33 compute-0 nova_compute[349548]: 2025-12-05 02:05:33.263 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:33 compute-0 ceph-mon[192914]: pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.552 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.552 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.553 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.553 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:05:35 compute-0 ceph-mon[192914]: pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.920 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.940 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.941 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.942 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.979 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.981 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:05:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:05:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034647949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.517 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.645 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.646 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.646 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.658 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.659 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.659 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.261 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.264 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.265 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.265 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:37 compute-0 ceph-mon[192914]: pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4034647949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.422 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.642 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:05:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:05:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311608690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.174 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.187 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.203 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.206 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.207 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.208 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.265 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1311608690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:39 compute-0 ceph-mon[192914]: pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.371 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.372 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.372 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:40 compute-0 ceph-mon[192914]: pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:41 compute-0 nova_compute[349548]: 2025-12-05 02:05:41.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:41 compute-0 nova_compute[349548]: 2025-12-05 02:05:41.943 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:43 compute-0 ceph-mon[192914]: pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:43 compute-0 nova_compute[349548]: 2025-12-05 02:05:43.269 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:45 compute-0 ceph-mon[192914]: pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:05:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:05:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:05:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:05:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:05:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.947 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:47 compute-0 ceph-mon[192914]: pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:48 compute-0 nova_compute[349548]: 2025-12-05 02:05:48.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:49 compute-0 ceph-mon[192914]: pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:49 compute-0 podman[437258]: 2025-12-05 02:05:49.713341147 +0000 UTC m=+0.118010179 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:05:49 compute-0 podman[437257]: 2025-12-05 02:05:49.718759459 +0000 UTC m=+0.125971782 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 02:05:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:51 compute-0 ceph-mon[192914]: pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:51 compute-0 podman[437297]: 2025-12-05 02:05:51.721173255 +0000 UTC m=+0.121616569 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 02:05:51 compute-0 podman[437296]: 2025-12-05 02:05:51.723630814 +0000 UTC m=+0.131669201 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 02:05:51 compute-0 nova_compute[349548]: 2025-12-05 02:05:51.951 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:53 compute-0 ceph-mon[192914]: pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:53 compute-0 nova_compute[349548]: 2025-12-05 02:05:53.275 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:53 compute-0 podman[437334]: 2025-12-05 02:05:53.737820181 +0000 UTC m=+0.148204075 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, architecture=x86_64, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc.)
Dec 05 02:05:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.307 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.309 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.311 349552 INFO nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Terminating instance
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.313 349552 DEBUG nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:05:54 compute-0 kernel: tap2799035c-b9 (unregistering): left promiscuous mode
Dec 05 02:05:54 compute-0 NetworkManager[49092]: <info>  [1764900354.4894] device (tap2799035c-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.504 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00058|binding|INFO|Releasing lport 2799035c-b9e1-4c24-b031-9824b684480c from this chassis (sb_readonly=0)
Dec 05 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00059|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c down in Southbound
Dec 05 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00060|binding|INFO|Removing iface tap2799035c-b9 ovn-installed in OVS
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.516 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:64:51 192.168.0.169'], port_security=['fa:16:3e:10:64:51 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2799035c-b9e1-4c24-b031-9824b684480c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.518 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2799035c-b9e1-4c24-b031-9824b684480c in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.520 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.532 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.547 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[050476c8-763c-4808-9373-d125d6da87ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.593 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bb6124-2012-4379-b3cb-249280840c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.597 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[4563ec15-0fd5-4ff4-b110-ba3975947be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 05 02:05:54 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 13.002s CPU time.
Dec 05 02:05:54 compute-0 systemd-machined[138700]: Machine qemu-4-instance-00000004 terminated.
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.632 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[87921158-9638-4a5b-9835-3b1f8c503688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfc2e8d-ddd4-4eb0-9437-1682beb9a374]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 38410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437364, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.682 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb390b9-9986-4a89-b249-d5408274a9d0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437365, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437365, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.684 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.698 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.699 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.700 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.700 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.774 349552 INFO nova.virt.libvirt.driver [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance destroyed successfully.
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.775 349552 DEBUG nova.objects.instance [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.790 349552 DEBUG nova.virt.libvirt.vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:55:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:55:46Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 05 02:05:54 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=3611d2ae-da33-4e55-aec7-0bec88d3b4e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.792 349552 DEBUG nova.network.os_vif_util [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.793 349552 DEBUG nova.network.os_vif_util [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.794 349552 DEBUG os_vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.797 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2799035c-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.807 349552 INFO os_vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9')
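The DelPortCommand in the transaction above is ovsdbapp's wrapper around removing the tap interface from br-int, and the same call can be issued standalone. A minimal sketch, assuming a local ovsdb-server socket at /run/openvswitch/db.sock (socket path illustrative; port name taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the OVSDB IDL to the local switch database (assumed socket path).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation as the logged txn: drop the port if it still exists.
    api.del_port('tap2799035c-b9', bridge='br-int',
                 if_exists=True).execute(check_error=True)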
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.891 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.891 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.892 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.892 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.893 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.895 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:05:54 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 02:05:54.790 349552 DEBUG nova.virt.libvirt.vif [None req-7a53523e-9a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.934 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.934 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.936 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:05:55 compute-0 ceph-mon[192914]: pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.876 349552 DEBUG nova.compute.manager [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-changed-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.877 349552 DEBUG nova.compute.manager [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing instance network info cache due to event network-changed-2799035c-b9e1-4c24-b031-9824b684480c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.877 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.878 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.878 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:05:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 120 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 341 B/s wr, 11 op/s
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.160 349552 INFO nova.virt.libvirt.driver [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deleting instance files /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_del
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.162 349552 INFO nova.virt.libvirt.driver [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deletion of /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_del complete
Dec 05 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.199 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.231 349552 INFO nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 1.92 seconds to destroy the instance on the hypervisor.
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.232 349552 DEBUG oslo.service.loopingcall [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.234 349552 DEBUG nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.234 349552 DEBUG nova.network.neutron [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.898 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated VIF entry in instance network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.898 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.918 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.954 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.976 349552 DEBUG nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.977 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.978 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.978 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.979 349552 DEBUG nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.979 349552 WARNING nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received unexpected event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with vm_state active and task_state deleting.
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.091 349552 DEBUG nova.network.neutron [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.109 349552 INFO nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 0.87 seconds to deallocate network for instance.
Dec 05 02:05:57 compute-0 ceph-mon[192914]: pgmap v1702: 321 pgs: 321 active+clean; 120 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 341 B/s wr, 11 op/s
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.154 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.155 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.275 349552 DEBUG oslo_concurrency.processutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:05:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:05:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858799124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.807 349552 DEBUG oslo_concurrency.processutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
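The ceph df call the resource tracker runs above returns JSON, which makes it easy to reproduce when checking what Nova will report as DISK_GB inventory. A minimal sketch using the identical command line from the log (same client id and conf; "stats" field names per recent Ceph releases):

    import json
    import subprocess

    # Same invocation nova_compute logs above; needs the openstack keyring.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])

    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('cluster: %.1f GiB used of %.1f GiB' %
          (stats['total_used_bytes'] / gib, stats['total_bytes'] / gib))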
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.821 349552 DEBUG nova.compute.provider_tree [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.843 349552 DEBUG nova.scheduler.client.report [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.879 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.916 349552 INFO nova.scheduler.client.report [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0
Dec 05 02:05:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 05 02:05:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:05:58 compute-0 nova_compute[349548]: 2025-12-05 02:05:58.017 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:05:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1858799124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:05:59 compute-0 ceph-mon[192914]: pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 05 02:05:59 compute-0 podman[158197]: time="2025-12-05T02:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:05:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:05:59 compute-0 nova_compute[349548]: 2025-12-05 02:05:59.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:05:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
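The two GET requests above are a client talking to podman's REST service over its UNIX socket; the endpoint and query string are exactly what the access log records. A minimal sketch of the same "list containers" query, assuming the rootful default socket at /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    # Same libpod endpoint as the logged request (socket path assumed).
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')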
Dec 05 02:05:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 05 02:06:01 compute-0 ceph-mon[192914]: pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 05 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:06:01 compute-0 nova_compute[349548]: 2025-12-05 02:06:01.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:03 compute-0 ceph-mon[192914]: pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:03 compute-0 podman[437420]: 2025-12-05 02:06:03.698037579 +0000 UTC m=+0.099332815 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:06:03 compute-0 podman[437419]: 2025-12-05 02:06:03.72768597 +0000 UTC m=+0.129666505 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 02:06:03 compute-0 podman[437422]: 2025-12-05 02:06:03.742552557 +0000 UTC m=+0.128509253 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 02:06:03 compute-0 podman[437421]: 2025-12-05 02:06:03.781655963 +0000 UTC m=+0.169895493 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 02:06:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:04 compute-0 nova_compute[349548]: 2025-12-05 02:06:04.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:04.939 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:06:05 compute-0 ceph-mon[192914]: pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 05 02:06:06 compute-0 nova_compute[349548]: 2025-12-05 02:06:06.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:07 compute-0 ceph-mon[192914]: pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 05 02:06:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.4 KiB/s wr, 75 op/s
Dec 05 02:06:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:09 compute-0 ceph-mon[192914]: pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.4 KiB/s wr, 75 op/s
Dec 05 02:06:09 compute-0 sudo[437503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:09 compute-0 sudo[437503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:09 compute-0 sudo[437503]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:09 compute-0 sudo[437528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:06:09 compute-0 sudo[437528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:09 compute-0 sudo[437528]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.767 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900354.7657616, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.768 349552 INFO nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Stopped (Lifecycle Event)
Dec 05 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.791 349552 DEBUG nova.compute.manager [None req-8182f6bc-714f-49f8-bb72-79f902194f8d - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:09 compute-0 sudo[437553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:09 compute-0 sudo[437553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:09 compute-0 sudo[437553]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 56 op/s
Dec 05 02:06:10 compute-0 sudo[437578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:06:10 compute-0 sudo[437578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:10 compute-0 sudo[437578]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fba79e18-a11d-4ce1-b759-cb95035c4441 does not exist
Dec 05 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3376d10d-caeb-4635-8b55-037038ef4944 does not exist
Dec 05 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 694c1e9d-dbcc-4e1a-a5c3-fa7620fc210d does not exist
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:06:11 compute-0 sudo[437634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:11 compute-0 sudo[437634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:11 compute-0 sudo[437634]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:11 compute-0 sudo[437659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:06:11 compute-0 sudo[437659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:11 compute-0 sudo[437659]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:11 compute-0 ceph-mon[192914]: pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 56 op/s
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:06:11 compute-0 sudo[437684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:11 compute-0 sudo[437684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:11 compute-0 sudo[437684]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:11 compute-0 sudo[437709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:06:11 compute-0 sudo[437709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:11 compute-0 nova_compute[349548]: 2025-12-05 02:06:11.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 69 op/s
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.046190818 +0000 UTC m=+0.093785150 container create 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:06:12 compute-0 systemd[1]: Started libpod-conmon-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope.
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.012105512 +0000 UTC m=+0.059699884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.160494942 +0000 UTC m=+0.208089304 container init 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.177828307 +0000 UTC m=+0.225422609 container start 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.184373641 +0000 UTC m=+0.231967963 container attach 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:06:12 compute-0 cool_wozniak[437787]: 167 167
Dec 05 02:06:12 compute-0 systemd[1]: libpod-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope: Deactivated successfully.
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.191766098 +0000 UTC m=+0.239360430 container died 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c51af9e59d23edadf7908376ba673978c790a72db7629ef8c9215c66457a3e3-merged.mount: Deactivated successfully.
Dec 05 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.277017408 +0000 UTC m=+0.324611710 container remove 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:06:12 compute-0 systemd[1]: libpod-conmon-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope: Deactivated successfully.
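cool_wozniak above is one of cephadm's disposable helper containers: the whole create -> init -> start -> attach -> died -> remove cycle completes in roughly 0.2 s, and its only stdout is the "167 167" pair (by all appearances the ceph uid/gid probe). A minimal sketch for watching this churn live, assuming `podman events --format json` and its `image=` filter behave as on this host:

#!/usr/bin/env python3
# Stream podman lifecycle events for the ceph image seen in this log.
# Assumes `podman events --format json` emits one JSON object per line with
# Status/Name/ID fields; the filter narrows output to quay.io/ceph/ceph.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "image=quay.io/ceph/ceph"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # Status cycles through the same states logged here:
    # create, init, start, attach, died, remove.
    print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])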
Dec 05 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.578709334 +0000 UTC m=+0.111660851 container create 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.54074542 +0000 UTC m=+0.073696987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:12 compute-0 systemd[1]: Started libpod-conmon-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope.
Dec 05 02:06:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.724993144 +0000 UTC m=+0.257944671 container init 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.746018103 +0000 UTC m=+0.278969580 container start 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.750657464 +0000 UTC m=+0.283608941 container attach 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:06:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:13 compute-0 ceph-mon[192914]: pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 69 op/s
Dec 05 02:06:13 compute-0 admiring_poitras[437826]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:06:13 compute-0 admiring_poitras[437826]: --> relative data size: 1.0
Dec 05 02:06:13 compute-0 admiring_poitras[437826]: --> All data devices are unavailable
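The admiring_poitras run is ceph-volume probing for new OSD devices: three LVM data devices are passed in and all are reported unavailable, presumably because each already backs a deployed OSD (see the lvm list output from jolly_lewin further down). A sketch of the same availability check, assuming this host's cephadm wrapper and the usual ceph-volume inventory JSON fields (path, available, rejected_reasons):

#!/usr/bin/env python3
# Sketch: ask ceph-volume which devices it would still accept. Both the
# cephadm wrapper invocation and the exact field names are assumptions,
# based on the Reef-era `ceph-volume inventory --format json` output.
import json
import subprocess

def inventory():
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

for dev in inventory():
    verdict = "available" if dev.get("available") else "unavailable"
    reasons = ", ".join(dev.get("rejected_reasons", [])) or "-"
    print(f"{dev.get('path', '?'):<16} {verdict:<12} {reasons}")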
Dec 05 02:06:13 compute-0 systemd[1]: libpod-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Deactivated successfully.
Dec 05 02:06:13 compute-0 podman[437810]: 2025-12-05 02:06:13.950137543 +0000 UTC m=+1.483089060 container died 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:06:13 compute-0 systemd[1]: libpod-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Consumed 1.154s CPU time.
Dec 05 02:06:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1-merged.mount: Deactivated successfully.
Dec 05 02:06:14 compute-0 podman[437810]: 2025-12-05 02:06:14.039322973 +0000 UTC m=+1.572274460 container remove 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:06:14 compute-0 systemd[1]: libpod-conmon-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Deactivated successfully.
Dec 05 02:06:14 compute-0 sudo[437709]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:14 compute-0 sudo[437866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:14 compute-0 sudo[437866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:14 compute-0 sudo[437866]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:14 compute-0 sudo[437891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:06:14 compute-0 sudo[437891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:14 compute-0 sudo[437891]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:14 compute-0 sudo[437916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:14 compute-0 sudo[437916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:14 compute-0 sudo[437916]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:14 compute-0 sudo[437941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:06:14 compute-0 sudo[437941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
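The sudo line above is how cephadm enumerates existing OSDs: ceph-volume's `lvm list --format json`, run as root inside a throwaway container. Its output is the JSON that jolly_lewin prints at the end of this excerpt. A small parser sketch, assuming only the shape visible there (OSD ids as top-level keys, lv_tags packed as comma-separated key=value pairs):

#!/usr/bin/env python3
# Sketch: reduce `ceph-volume lvm list --format json` to an osd_id -> device
# map. The assumed shape matches the jolly_lewin output below.
import json
import sys

def parse_lv_tags(tags: str) -> dict:
    # "ceph.osd_id=0,ceph.type=block,..." -> {"ceph.osd_id": "0", ...}
    return dict(kv.split("=", 1) for kv in tags.split(",") if "=" in kv)

def summarize(lvm_list: dict) -> dict:
    out = {}
    for osd_id, entries in lvm_list.items():
        for entry in entries:
            tags = parse_lv_tags(entry.get("lv_tags", ""))
            if tags.get("ceph.type") != "block":
                continue  # skip db/wal entries, keep the data device
            out[osd_id] = {
                "lv_path": entry.get("lv_path"),
                "devices": entry.get("devices", []),
                "osd_fsid": tags.get("ceph.osd_fsid"),
            }
    return out

if __name__ == "__main__":
    # e.g. pipe the captured JSON in: ... lvm list --format json | this_script
    print(json.dumps(summarize(json.load(sys.stdin)), indent=2))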
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.776 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.778 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.779 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.780 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.781 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.783 349552 INFO nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Terminating instance
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.785 349552 DEBUG nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:14 compute-0 kernel: tap68143c81-65 (unregistering): left promiscuous mode
Dec 05 02:06:14 compute-0 NetworkManager[49092]: <info>  [1764900374.9393] device (tap68143c81-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00061|binding|INFO|Releasing lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 from this chassis (sb_readonly=0)
Dec 05 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00062|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 down in Southbound
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00063|binding|INFO|Removing iface tap68143c81-65 ovn-installed in OVS
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.965 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:12:24 192.168.0.48'], port_security=['fa:16:3e:0c:12:24 192.168.0.48'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.48/24', 'neutron:device_id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=68143c81-65a4-4ed0-8902-dbe0c8d89224) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.967 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 68143c81-65a4-4ed0-8902-dbe0c8d89224 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis
Dec 05 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.968 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.970 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[67eb8688-d472-4d5a-89a4-6a0e875d438a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.971 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 namespace which is not needed anymore
Dec 05 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.987 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 05 02:06:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 28.101s CPU time.
Dec 05 02:06:15 compute-0 systemd-machined[138700]: Machine qemu-1-instance-00000001 terminated.
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.144869131 +0000 UTC m=+0.055282971 container create fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:06:15 compute-0 systemd[1]: Started libpod-conmon-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope.
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : haproxy version is 2.8.14-c23fe91
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : path to executable is /usr/sbin/haproxy
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : Exiting Master process...
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : Exiting Master process...
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [ALERT]    (412842) : Current worker (412844) exited with code 143 (Terminated)
Dec 05 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : All workers exited. Exiting... (0)
Dec 05 02:06:15 compute-0 systemd[1]: libpod-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope: Deactivated successfully.
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.123270186 +0000 UTC m=+0.033684046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:15 compute-0 podman[438034]: 2025-12-05 02:06:15.221113358 +0000 UTC m=+0.106349922 container died 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.236 349552 INFO nova.virt.libvirt.driver [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance destroyed successfully.
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.237 349552 DEBUG nova.objects.instance [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:06:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.262 349552 DEBUG nova.virt.libvirt.vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:48:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:48:05Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.263 349552 DEBUG nova.network.os_vif_util [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.264 349552 DEBUG nova.network.os_vif_util [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.264 349552 DEBUG os_vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.266 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.266 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68143c81-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.271 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.274 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe-userdata-shm.mount: Deactivated successfully.
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.278 349552 INFO os_vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65')
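The whole VIF unplug reduces to the single OVSDB transaction logged above, DelPortCommand(port=tap68143c81-65, bridge=br-int, if_exists=True); `ovs-vsctl --if-exists del-port br-int tap68143c81-65` is the shell equivalent. A stand-alone sketch through ovsdbapp, the library nova is driving here (the db.sock path is an assumption):

#!/usr/bin/env python3
# Sketch: issue the same del-port transaction via ovsdbapp. The local
# ovsdb-server socket path is an assumption; if_exists=True makes deleting
# an already-absent port a no-op, matching the logged command.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
api.del_port("tap68143c81-65", bridge="br-int", if_exists=True).execute(
    check_error=True)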
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.281336556 +0000 UTC m=+0.191750446 container init fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a820a613b1e07df1e33c546156b70839ccd983fd42dcef40eb3db4bae4f3e023-merged.mount: Deactivated successfully.
Dec 05 02:06:15 compute-0 ceph-mon[192914]: pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.292831158 +0000 UTC m=+0.203245008 container start fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 02:06:15 compute-0 podman[438034]: 2025-12-05 02:06:15.296961764 +0000 UTC m=+0.182198338 container cleanup 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:06:15 compute-0 nifty_meitner[438062]: 167 167
Dec 05 02:06:15 compute-0 systemd[1]: libpod-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope: Deactivated successfully.
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.307229012 +0000 UTC m=+0.217642902 container attach fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.308099976 +0000 UTC m=+0.218513866 container died fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:06:15 compute-0 systemd[1]: libpod-conmon-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope: Deactivated successfully.
Dec 05 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-908a593687eb4325e9ae8dbda8dec95f6b74b689b11673cfdbd2bf02b8835806-merged.mount: Deactivated successfully.
Dec 05 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.370640659 +0000 UTC m=+0.281054499 container remove fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:06:15 compute-0 systemd[1]: libpod-conmon-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope: Deactivated successfully.
Dec 05 02:06:15 compute-0 podman[438102]: 2025-12-05 02:06:15.402991336 +0000 UTC m=+0.067058541 container remove 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.419 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cf358dd8-1fed-4f68-9042-b835be654403]: (4, ('Fri Dec  5 02:06:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 (70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe)\n70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe\nFri Dec  5 02:06:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 (70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe)\n70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.423 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b1a6ec-5738-43ef-a7cd-44a04beffbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.424 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:15 compute-0 kernel: tap49f7d2f1-f0: left promiscuous mode
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.440 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.445 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8f047bc5-54cf-4cce-b870-2d954975ea78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.467 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3a064bb4-f095-4902-b891-81dfe8d81e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.469 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ccf76d-7797-4e8b-a6cb-4b0056d6b618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.489 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b68dc1cf-6563-41e1-ad8a-1e39d0fefd2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537500, 'reachable_time': 37315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438131, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
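The privsep reply above is a pyroute2 link dump from inside ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183: the only RTM_NEWLINK left is lo, which is what lets the agent conclude the namespace is empty and delete it. The same check, sketched with pyroute2 (root required; the namespace name is taken verbatim from this log):

#!/usr/bin/env python3
# Sketch: list what is left in a metadata namespace before tearing it down.
# NetNS opens the named network namespace; get_links() returns netlink
# messages whose IFLA_IFNAME attribute is the interface name.
from pyroute2 import NetNS

NS = "ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183"

with NetNS(NS) as ns:
    names = [link.get_attr("IFLA_IFNAME") for link in ns.get_links()]

print(names)  # only ['lo'] left -> no VIFs, safe to remove the namespace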
Dec 05 02:06:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d49f7d2f1\x2df1ff\x2d4dcc\x2d94db\x2dd088dc8d3183.mount: Deactivated successfully.
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.504 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.504 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[97fd04d5-1d29-4e92-a5dc-c514efe1f125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.640359949 +0000 UTC m=+0.099154110 container create c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.587401475 +0000 UTC m=+0.046195676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:15 compute-0 systemd[1]: Started libpod-conmon-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope.
Dec 05 02:06:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.800982622 +0000 UTC m=+0.259776813 container init c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.814 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.815 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.820422616 +0000 UTC m=+0.279216747 container start c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.825506199 +0000 UTC m=+0.284300340 container attach c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:06:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:06:16
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images']
Dec 05 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.441 349552 INFO nova.virt.libvirt.driver [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deleting instance files /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df_del
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.442 349552 INFO nova.virt.libvirt.driver [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deletion of /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df_del complete
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.522 349552 INFO nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 1.74 seconds to destroy the instance on the hypervisor.
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.523 349552 DEBUG oslo.service.loopingcall [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.523 349552 DEBUG nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.524 349552 DEBUG nova.network.neutron [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:06:16 compute-0 jolly_lewin[438155]: {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     "0": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "devices": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "/dev/loop3"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             ],
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_name": "ceph_lv0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_size": "21470642176",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "name": "ceph_lv0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "tags": {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_name": "ceph",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.crush_device_class": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.encrypted": "0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_id": "0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.vdo": "0"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             },
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "vg_name": "ceph_vg0"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         }
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     ],
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     "1": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "devices": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "/dev/loop4"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             ],
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_name": "ceph_lv1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_size": "21470642176",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "name": "ceph_lv1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "tags": {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_name": "ceph",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.crush_device_class": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.encrypted": "0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_id": "1",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.vdo": "0"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             },
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "vg_name": "ceph_vg1"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         }
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     ],
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     "2": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "devices": [
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "/dev/loop5"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             ],
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_name": "ceph_lv2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_size": "21470642176",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "name": "ceph_lv2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "tags": {
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.cluster_name": "ceph",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.crush_device_class": "",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.encrypted": "0",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osd_id": "2",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:                 "ceph.vdo": "0"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             },
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "type": "block",
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:             "vg_name": "ceph_vg2"
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:         }
Dec 05 02:06:16 compute-0 jolly_lewin[438155]:     ]
Dec 05 02:06:16 compute-0 jolly_lewin[438155]: }
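The JSON block above, emitted by the short-lived jolly_lewin container, has the shape of `ceph-volume lvm list --format json` output collected by cephadm: one array per OSD id, each entry carrying the LVM tags that bind the logical volume to its OSD. A minimal parsing sketch, assuming the dump has been saved to a local file named lvm_list.json (an assumed name, not from the log):

    import json

    # Flatten the ceph-volume lvm list JSON (saved as lvm_list.json, an
    # assumed file name) into one line per OSD, pulling the binding data
    # out of the LVM tags shown in the log above.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, volumes in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"backing={','.join(vol['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

Against the dump above this prints three lines, mapping osd.0/1/2 to the ceph_vg0-2/ceph_lv0-2 logical volumes backed by /dev/loop3-5.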
Dec 05 02:06:16 compute-0 systemd[1]: libpod-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope: Deactivated successfully.
Dec 05 02:06:16 compute-0 podman[438164]: 2025-12-05 02:06:16.769946111 +0000 UTC m=+0.047259356 container died c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94-merged.mount: Deactivated successfully.
Dec 05 02:06:16 compute-0 podman[438164]: 2025-12-05 02:06:16.889762069 +0000 UTC m=+0.167075304 container remove c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:06:16 compute-0 systemd[1]: libpod-conmon-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope: Deactivated successfully.
Dec 05 02:06:16 compute-0 sudo[437941]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.967 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:17 compute-0 sudo[438179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:17 compute-0 sudo[438179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:17 compute-0 sudo[438179]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:17 compute-0 sudo[438204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:06:17 compute-0 sudo[438204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:17 compute-0 sudo[438204]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:17 compute-0 ceph-mon[192914]: pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:06:17 compute-0 sudo[438229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:17 compute-0 sudo[438229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:17 compute-0 sudo[438229]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:17 compute-0 sudo[438254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:06:17 compute-0 sudo[438254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.883 349552 DEBUG nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.884 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.886 349552 WARNING nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received unexpected event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with vm_state active and task_state deleting.
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:06:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 64 op/s
Dec 05 02:06:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.077374197 +0000 UTC m=+0.083611055 container create 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.047647883 +0000 UTC m=+0.053884811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:18 compute-0 systemd[1]: Started libpod-conmon-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope.
Dec 05 02:06:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.215103007 +0000 UTC m=+0.221339945 container init 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.23054889 +0000 UTC m=+0.236785768 container start 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.237033582 +0000 UTC m=+0.243270500 container attach 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:06:18 compute-0 pedantic_black[438332]: 167 167
Dec 05 02:06:18 compute-0 systemd[1]: libpod-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope: Deactivated successfully.
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.242291339 +0000 UTC m=+0.248528217 container died 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 02:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c294a0b072976971260a31bc302fdb8dfae908c69cad5b3016c960a7000c7a84-merged.mount: Deactivated successfully.
Dec 05 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.319102262 +0000 UTC m=+0.325339140 container remove 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:06:18 compute-0 systemd[1]: libpod-conmon-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope: Deactivated successfully.
Dec 05 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.574196522 +0000 UTC m=+0.078473100 container create 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.532159114 +0000 UTC m=+0.036435722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:06:18 compute-0 systemd[1]: Started libpod-conmon-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope.
Dec 05 02:06:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.732131729 +0000 UTC m=+0.236408387 container init 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.764328702 +0000 UTC m=+0.268605270 container start 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.772859491 +0000 UTC m=+0.277136139 container attach 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:06:18 compute-0 nova_compute[349548]: 2025-12-05 02:06:18.945 349552 DEBUG nova.network.neutron [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.009 349552 INFO nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 2.49 seconds to deallocate network for instance.
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.079 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.079 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.084 349552 DEBUG nova.compute.manager [req-f6cb3adb-4e28-40f7-884b-5c4fb47d8647 req-b7a550a6-7352-4a20-b5b4-ef53bd625e42 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-deleted-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.158 349552 DEBUG oslo_concurrency.processutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:06:19 compute-0 ceph-mon[192914]: pgmap v1713: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 64 op/s
Dec 05 02:06:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:06:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535300410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.700 349552 DEBUG oslo_concurrency.processutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.712 349552 DEBUG nova.compute.provider_tree [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.743 349552 DEBUG nova.scheduler.client.report [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.778 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.811 349552 INFO nova.scheduler.client.report [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance b69a0e24-1bc4-46a5-92d7-367c1efd53df
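The inventory the resource tracker reports as unchanged at 02:06:19.743 determines the capacity Placement can schedule against: for each resource class the usable amount is (total - reserved) * allocation_ratio. A quick sketch reproducing the figures from that log line (values copied verbatim):

    # Usable capacity implied by the reported inventory:
    # usable = (total - reserved) * allocation_ratio.
    # Totals, reserved amounts, and ratios are from the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2

So this 8-vCPU, 7.5 GiB host advertises 32 schedulable vCPUs under the 4.0 overcommit ratio, and about 52 GB of the 59 GB disk after the 1 GB reservation and 0.9 ratio.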
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]: {
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_id": 0,
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "type": "bluestore"
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     },
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_id": 1,
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "type": "bluestore"
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     },
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_id": 2,
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:         "type": "bluestore"
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]:     }
Dec 05 02:06:19 compute-0 wizardly_hamilton[438371]: }
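This second JSON dump is the result of the cephadm call logged at 02:06:17 (sudo running `ceph-volume ... raw list --format json`), keyed by OSD fsid rather than OSD id. A small cross-check sketch, assuming both dumps were saved locally as raw_list.json and lvm_list.json (assumed names): every raw bluestore entry should resolve to the LVM volume whose ceph.osd_fsid tag matches its osd_uuid.

    import json

    # Join the raw listing (keyed by OSD fsid) with the earlier lvm
    # listing (keyed by OSD id); file names are assumptions, contents
    # are the two JSON blocks shown in this log.
    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))

    fsid_to_lv = {vol["tags"]["ceph.osd_fsid"]: vol["lv_path"]
                  for vols in lvm.values() for vol in vols}

    for osd_uuid, entry in raw.items():
        lv = fsid_to_lv.get(osd_uuid, "<no lvm match>")
        print(f"osd.{entry['osd_id']} ({entry['type']}) "
              f"{entry['device']} -> {lv}")

For the data above all three OSDs match: the /dev/mapper/ceph_vgN-ceph_lvN devices are the same logical volumes tagged in the earlier listing.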
Dec 05 02:06:19 compute-0 systemd[1]: libpod-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Deactivated successfully.
Dec 05 02:06:19 compute-0 systemd[1]: libpod-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Consumed 1.136s CPU time.
Dec 05 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.949 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:19 compute-0 podman[438427]: 2025-12-05 02:06:19.978317199 +0000 UTC m=+0.054595681 container died 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:06:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Dec 05 02:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8-merged.mount: Deactivated successfully.
Dec 05 02:06:20 compute-0 podman[438427]: 2025-12-05 02:06:20.066651645 +0000 UTC m=+0.142930107 container remove 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:06:20 compute-0 podman[438434]: 2025-12-05 02:06:20.068326702 +0000 UTC m=+0.111293290 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:06:20 compute-0 systemd[1]: libpod-conmon-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Deactivated successfully.
Dec 05 02:06:20 compute-0 podman[438428]: 2025-12-05 02:06:20.08608029 +0000 UTC m=+0.126112526 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:06:20 compute-0 sudo[438254]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:06:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:06:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 18ab9890-cf2e-4cc1-bbf7-cc94564ca0dd does not exist
Dec 05 02:06:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dff142e8-b791-4128-9069-bbe51b722a36 does not exist
Dec 05 02:06:20 compute-0 sudo[438476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:06:20 compute-0 sudo[438476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:20 compute-0 sudo[438476]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:20 compute-0 nova_compute[349548]: 2025-12-05 02:06:20.271 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:20 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1535300410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:06:20 compute-0 sudo[438501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:06:20 compute-0 sudo[438501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:06:20 compute-0 sudo[438501]: pam_unix(sudo:session): session closed for user root
Dec 05 02:06:21 compute-0 ceph-mon[192914]: pgmap v1714: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Dec 05 02:06:21 compute-0 nova_compute[349548]: 2025-12-05 02:06:21.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Dec 05 02:06:22 compute-0 nova_compute[349548]: 2025-12-05 02:06:22.106 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:22 compute-0 podman[438527]: 2025-12-05 02:06:22.735620364 +0000 UTC m=+0.130640972 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:06:22 compute-0 podman[438526]: 2025-12-05 02:06:22.753558787 +0000 UTC m=+0.155263072 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:06:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:23 compute-0 ceph-mon[192914]: pgmap v1715: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Dec 05 02:06:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:24 compute-0 podman[438563]: 2025-12-05 02:06:24.724668845 +0000 UTC m=+0.127702750 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, distribution-scope=public, version=9.4, name=ubi9)
Dec 05 02:06:25 compute-0 nova_compute[349548]: 2025-12-05 02:06:25.276 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:25 compute-0 ceph-mon[192914]: pgmap v1716: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:26 compute-0 nova_compute[349548]: 2025-12-05 02:06:26.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
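The pg_autoscaler lines above all follow one formula: each pool's share of the root's capacity, times its bias, times the cluster's PG budget, then quantized against the current pg_num. A minimal sketch of that arithmetic, assuming the 300-PG budget comes from the default mon_target_pg_per_osd (100) times the 3 OSDs behind the 60 GiB root; the autoscaler additionally clamps to powers of two and only acts when the target is off by more than 3x, which is why tiny targets still show "quantized to 32 (current 32)".

```python
# Assumed PG budget: 100 PGs per OSD (default) * 3 OSDs on this root.
POOL_PG_BUDGET = 300

def pg_target(usage_ratio: float, bias: float) -> float:
    """Raw (pre-quantization) PG target as logged by pg_autoscaler."""
    return usage_ratio * bias * POOL_PG_BUDGET

# Reproduces the '.mgr' line: 7.185749983720779e-06 of space, bias 1.0
print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557249951162337
# Reproduces 'cephfs.cephfs.meta': 5.087256625643029e-07 of space, bias 4.0
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006104707950771635
```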
Dec 05 02:06:27 compute-0 ceph-mon[192914]: pgmap v1717: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 05 02:06:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 05 02:06:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:28 compute-0 sshd-session[438581]: Invalid user admin from 123.253.22.45 port 54172
Dec 05 02:06:28 compute-0 sshd-session[438581]: Connection closed by invalid user admin 123.253.22.45 port 54172 [preauth]
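The two sshd lines above are a typical preauth brute-force probe against a nonexistent user. A small, hedged helper for tallying such attempts from journal output piped on stdin (the regex matches the exact message shape logged here; field names are mine):

```python
import re
import sys
from collections import Counter

# Count "Invalid user <name> from <ip> port <n>" occurrences on stdin,
# e.g.: journalctl -u sshd | python3 count_invalid.py
pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
hits = Counter(m.groups() for line in sys.stdin for m in pat.finditer(line))
for (user, ip), count in hits.most_common(10):
    print(f"{count:5d}  {user}@{ip}")
```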
Dec 05 02:06:29 compute-0 ceph-mon[192914]: pgmap v1718: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec 05 02:06:29 compute-0 podman[158197]: time="2025-12-05T02:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:06:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:06:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8161 "" "Go-http-client/1.1"
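The two access-log lines above are the libpod REST API being scraped over podman's Unix socket (the `@` is the anonymous peer). A self-contained sketch of the same `GET /v4.9.3/libpod/containers/json` call using only the standard library, assuming the default rootful socket path `/run/podman/podman.sock`; the `Names`/`State` keys are from the libpod listing schema.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP client over a Unix domain socket (podman's API endpoint)."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed default path
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for container in json.loads(resp.read()):
    print(container["Names"], container["State"])
```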
Dec 05 02:06:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.227 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900375.2254682, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.227 349552 INFO nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Stopped (Lifecycle Event)
Dec 05 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.261 349552 DEBUG nova.compute.manager [None req-429f1c2f-f6a4-4448-a308-2d815af6be9c - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
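After emitting the `Stopped` lifecycle event, the compute manager re-checks the domain's power state (`_get_power_state`, logged above). A hedged sketch of that check using the python3-libvirt binding; the UUID is the instance from the log, and the state mapping shown is illustrative rather than nova's exact table.

```python
import libvirt  # requires the python3-libvirt binding

# Look up the domain for the instance seen in the lifecycle event above
# and read its current state, as the nova libvirt driver does.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("b69a0e24-1bc4-46a5-92d7-367c1efd53df")
state, reason = dom.state()  # returns (state, reason) codes
print("running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running")
conn.close()
```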
Dec 05 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.280 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:30 compute-0 ceph-mon[192914]: pgmap v1719: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 02:06:31 compute-0 nova_compute[349548]: 2025-12-05 02:06:31.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 05 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:06:31 compute-0 nova_compute[349548]: 2025-12-05 02:06:31.977 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 02:06:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:33 compute-0 ceph-mon[192914]: pgmap v1720: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec 05 02:06:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:06:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:06:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954609985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.641 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
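The resource audit shells out to `ceph df --format=json` (logged above, returning 0 in 0.536s) to learn how much RBD capacity is left. A minimal standalone equivalent of that probe, with the client id and conf path taken straight from the logged command line; the `stats.total_avail_bytes` key is present in current Ceph releases, but treat the exact schema as an assumption.

```python
import json
import subprocess

# Same capacity probe nova_compute runs via oslo_concurrency.processutils.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
)
stats = json.loads(out)
avail_gib = stats["stats"]["total_avail_bytes"] / 1024**3
print(f"available: {avail_gib:.1f} GiB")  # ~60 GiB per the pgmap lines
```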
Dec 05 02:06:34 compute-0 podman[438604]: 2025-12-05 02:06:34.715500341 +0000 UTC m=+0.111008123 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:06:34 compute-0 podman[438603]: 2025-12-05 02:06:34.744559485 +0000 UTC m=+0.144981464 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:06:34 compute-0 podman[438606]: 2025-12-05 02:06:34.765535023 +0000 UTC m=+0.145552500 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Dec 05 02:06:34 compute-0 podman[438605]: 2025-12-05 02:06:34.786527792 +0000 UTC m=+0.171502888 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 02:06:35 compute-0 ceph-mon[192914]: pgmap v1721: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/954609985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.106 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.108 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4096MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
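The `pci_devices` payload in the hypervisor resource view above is plain JSON. Decoding one entry makes the picture clear: vendor `8086` is Intel and `1af4` is Red Hat (virtio), so every device on this guest-hosted compute node is emulated or virtio, and none reports NUMA affinity (`numa_node: null`). A small illustrative decode (the vendor table below is a hand-picked subset, not a full PCI ID database):

```python
import json

dev = json.loads(
    '{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", '
    '"product_id": "1002", "vendor_id": "1af4", "numa_node": null, '
    '"dev_type": "type-PCI"}'
)
VENDORS = {"8086": "Intel", "1af4": "Red Hat (virtio)"}  # subset, assumed
print(dev["address"], VENDORS.get(dev["vendor_id"], "unknown"),
      "numa:", dev["numa_node"])
```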
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.110 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.174 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.175 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.205 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.284 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:06:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159397416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.681 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.696 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.734 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
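The inventory dict above is what placement schedules against, and the usable capacity per resource class follows one rule: subtract `reserved` from `total`, then scale by `allocation_ratio`. A worked example with the exact numbers from the log:

```python
# Inventory as reported above by the scheduler report client.
inv = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    schedulable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(f"{rc}: {schedulable:g} schedulable")
# VCPU: 32  MEMORY_MB: 7168  DISK_GB: 52.2
```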
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.754 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.755 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2159397416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.755 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.755 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.781 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.781 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.980 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:37 compute-0 ceph-mon[192914]: pgmap v1722: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:38 compute-0 nova_compute[349548]: 2025-12-05 02:06:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:38 compute-0 nova_compute[349548]: 2025-12-05 02:06:38.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.321 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.322 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
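The two manager lines above warn that more pollsters were loaded than worker threads ([1] here), so polls serialize and a cycle can run longer than any single pollster's nominal interval. A toy illustration of that effect with `concurrent.futures` (pollster names and the 10 ms workload are hypothetical):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name: str) -> str:
    time.sleep(0.01)  # stand-in for one pollster's real work
    return name

pollsters = [f"pollster-{i}" for i in range(10)]  # more tasks than workers
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
    list(pool.map(poll, pollsters))
print(f"cycle took {time.monotonic() - start:.2f}s with 1 worker")
```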
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
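[Annotation] The paired "Executing discovery..." / "Skip pollster..." entries above, followed by the burst of "Finished processing pollster" lines, are one complete ceilometer polling cycle on a compute node that currently hosts no instances: the local_instances discovery returns an empty list, so every instance-bound pollster short-circuits. A minimal Python sketch of that control flow (all names below are hypothetical; this is not the actual ceilometer source):

    def discover_local_instances():
        # On an idle compute node libvirt reports no domains, so discovery
        # yields nothing and every pollster below is skipped.
        return []

    def poll(resource):
        # Hypothetical per-resource sample collection.
        return []

    def run_pollster(name, discover=discover_local_instances):
        resources = discover()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [sample for r in resources for sample in poll(r)]

    for meter in ("cpu", "memory.usage", "network.incoming.bytes"):
        run_pollster(meter)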
Dec 05 02:06:39 compute-0 ceph-mon[192914]: pgmap v1723: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:40 compute-0 nova_compute[349548]: 2025-12-05 02:06:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:06:40 compute-0 nova_compute[349548]: 2025-12-05 02:06:40.289 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:41 compute-0 ceph-mon[192914]: pgmap v1724: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:41 compute-0 nova_compute[349548]: 2025-12-05 02:06:41.982 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:43 compute-0 ceph-mon[192914]: pgmap v1725: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:45 compute-0 ceph-mon[192914]: pgmap v1726: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:45 compute-0 nova_compute[349548]: 2025-12-05 02:06:45.292 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:06:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:06:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:06:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:06:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
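[Annotation] The audited mon commands above ({"prefix":"df"} and {"prefix":"osd pool get-quota"} from entity client.openstack) are characteristic of an OpenStack service polling cluster capacity, most likely the Cinder RBD driver. A hedged sketch of issuing the same two commands through the python-rados bindings; the conffile path and keyring setup are assumptions:

    import json

    import rados  # python3-rados bindings shipped with Ceph

    # Connect as the same entity seen in the audit log above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        # {"prefix":"df","format":"json"} -- cluster-wide usage.
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        print(json.loads(out)["stats"])
        # {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                        "format": "json"}), b'')
        print(json.loads(out))
    finally:
        cluster.shutdown()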
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:06:46 compute-0 nova_compute[349548]: 2025-12-05 02:06:46.985 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:47 compute-0 ceph-mon[192914]: pgmap v1727: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:48 compute-0 ovn_controller[89286]: 2025-12-05T02:06:48Z|00064|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec 05 02:06:49 compute-0 ceph-mon[192914]: pgmap v1728: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:50 compute-0 nova_compute[349548]: 2025-12-05 02:06:50.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:50 compute-0 podman[438713]: 2025-12-05 02:06:50.717066821 +0000 UTC m=+0.116470665 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 02:06:50 compute-0 podman[438714]: 2025-12-05 02:06:50.735684313 +0000 UTC m=+0.132242077 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:06:51 compute-0 ceph-mon[192914]: pgmap v1729: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:51 compute-0 nova_compute[349548]: 2025-12-05 02:06:51.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:53 compute-0 ceph-mon[192914]: pgmap v1730: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:53 compute-0 podman[438756]: 2025-12-05 02:06:53.681781021 +0000 UTC m=+0.098401619 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:06:53 compute-0 podman[438755]: 2025-12-05 02:06:53.696558745 +0000 UTC m=+0.118207544 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 02:06:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:55 compute-0 nova_compute[349548]: 2025-12-05 02:06:55.300 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:55 compute-0 ceph-mon[192914]: pgmap v1731: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:55 compute-0 podman[438792]: 2025-12-05 02:06:55.744588801 +0000 UTC m=+0.155750857 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec 05 02:06:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:06:56 compute-0 nova_compute[349548]: 2025-12-05 02:06:56.991 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:06:57 compute-0 ceph-mon[192914]: pgmap v1732: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:06:58 compute-0 ceph-mon[192914]: pgmap v1733: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:06:59 compute-0 podman[158197]: time="2025-12-05T02:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:06:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:06:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8180 "" "Go-http-client/1.1"
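[Annotation] The two GET requests above are the podman_exporter container scraping the libpod REST API over the Podman socket (its CONTAINER_HOST=unix:///run/podman/podman.sock setting appears in the health_status entries elsewhere in this log). The same query can be reproduced with only the Python standard library; the socket path is taken from that exporter config:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    # Same endpoint and query string as the access-log line above.
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), 'containers')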
Dec 05 02:07:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:00 compute-0 nova_compute[349548]: 2025-12-05 02:07:00.304 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:01 compute-0 ceph-mon[192914]: pgmap v1734: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
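[Annotation] The openstack_network_exporter errors above are expected on this node: ovn-northd only runs on control-plane hosts, the ovsdb-server control socket is evidently not where this exporter looks, and the dpif-netdev calls fail most likely because the node uses the kernel datapath, so no userspace PMD datapath exists. A hedged sketch of the probe pattern such a collector appears to use, locating a daemon's control socket before invoking ovs-appctl (the rundir and glob pattern are assumptions):

    import glob
    import subprocess

    def appctl(daemon, *args, rundir='/run/openvswitch'):
        # OVS/OVN daemons expose a control socket named <daemon>.<pid>.ctl.
        sockets = glob.glob(f'{rundir}/{daemon}.*.ctl')
        if not sockets:
            raise FileNotFoundError(f'no control socket files found for {daemon}')
        return subprocess.run(['ovs-appctl', '-t', sockets[0], *args],
                              capture_output=True, text=True, check=True).stdout

    # On a compute node this raises, matching the log line above:
    # appctl('ovn-northd', 'status')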
Dec 05 02:07:01 compute-0 nova_compute[349548]: 2025-12-05 02:07:01.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:03 compute-0 ceph-mon[192914]: pgmap v1735: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:05 compute-0 ceph-mon[192914]: pgmap v1736: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:05 compute-0 nova_compute[349548]: 2025-12-05 02:07:05.309 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:05 compute-0 podman[438813]: 2025-12-05 02:07:05.712071512 +0000 UTC m=+0.109670465 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:07:05 compute-0 podman[438814]: 2025-12-05 02:07:05.731792034 +0000 UTC m=+0.126353152 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:07:05 compute-0 podman[438816]: 2025-12-05 02:07:05.737531325 +0000 UTC m=+0.111709132 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git)
Dec 05 02:07:05 compute-0 podman[438815]: 2025-12-05 02:07:05.769170302 +0000 UTC m=+0.152559207 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 02:07:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:06 compute-0 nova_compute[349548]: 2025-12-05 02:07:06.996 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:07 compute-0 ceph-mon[192914]: pgmap v1737: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:09 compute-0 ceph-mon[192914]: pgmap v1738: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:10 compute-0 nova_compute[349548]: 2025-12-05 02:07:10.312 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:11 compute-0 ceph-mon[192914]: pgmap v1739: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:12 compute-0 nova_compute[349548]: 2025-12-05 02:07:11.999 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:13 compute-0 ceph-mon[192914]: pgmap v1740: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:15 compute-0 ceph-mon[192914]: pgmap v1741: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:15 compute-0 nova_compute[349548]: 2025-12-05 02:07:15.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:07:16
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Dec 05 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:07:17 compute-0 nova_compute[349548]: 2025-12-05 02:07:17.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:17 compute-0 ceph-mon[192914]: pgmap v1742: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:07:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:19 compute-0 ceph-mon[192914]: pgmap v1743: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:20 compute-0 nova_compute[349548]: 2025-12-05 02:07:20.317 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:20 compute-0 sudo[438902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:20 compute-0 sudo[438902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:20 compute-0 sudo[438902]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:20 compute-0 sudo[438927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:07:20 compute-0 sudo[438927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:20 compute-0 sudo[438927]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:20 compute-0 sudo[438952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:20 compute-0 sudo[438952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:20 compute-0 sudo[438952]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:20 compute-0 podman[438976]: 2025-12-05 02:07:20.897555406 +0000 UTC m=+0.098924784 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 05 02:07:20 compute-0 sudo[438983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 02:07:20 compute-0 sudo[438983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:20 compute-0 podman[438977]: 2025-12-05 02:07:20.925999164 +0000 UTC m=+0.126954430 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:07:21 compute-0 ceph-mon[192914]: pgmap v1744: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:21 compute-0 sudo[438983]: pam_unix(sudo:session): session closed for user root
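[editor's note] The ceph-admin sudo bursts above (/bin/true, /bin/which python3, then a copied cephadm binary) are cephadm's per-host probe cycle: it first confirms passwordless sudo and a usable python3 over SSH, then runs check-host from the copy it shipped under /var/lib/ceph/<fsid>/. A minimal sketch of replaying that check by hand, with the command line taken verbatim from the sudo audit entry above (assumes root on compute-0; the long suffix on the cephadm filename is the checksum of the shipped binary):

    import subprocess

    # check-host invocation copied verbatim from the sudo audit line above.
    subprocess.run([
        "/bin/python3",
        "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--timeout", "895",
        "check-host",
    ], check=True)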
Dec 05 02:07:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:07:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:07:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:21 compute-0 sudo[439063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:21 compute-0 sudo[439063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:21 compute-0 sudo[439063]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:21 compute-0 sudo[439088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:07:21 compute-0 sudo[439088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:21 compute-0 sudo[439088]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:21 compute-0 sudo[439113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:21 compute-0 sudo[439113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:21 compute-0 sudo[439113]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:21 compute-0 sudo[439138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:07:21 compute-0 sudo[439138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:22 compute-0 nova_compute[349548]: 2025-12-05 02:07:22.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:22 compute-0 sudo[439138]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0fb62f1-eee1-46fa-8310-59dda40c1384 does not exist
Dec 05 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 11e363de-0c9a-45c6-bad3-e01f4241a3fa does not exist
Dec 05 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 48958a0f-ec99-43da-adc0-893522cc56ad does not exist
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
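[editor's note] Each handle_command/audit pair above is the mgr's cephadm module driving the monitor through its internal session. The same requests correspond to ordinary ceph CLI calls; a hedged sketch of the equivalents (assumes an admin keyring is available on the host):

    import subprocess

    # CLI equivalents of the mon_commands audited above.
    for args in (
        ["config", "generate-minimal-conf"],
        ["auth", "get", "client.admin"],
        ["osd", "tree", "destroyed", "--format", "json"],
        ["auth", "get", "client.bootstrap-osd"],
    ):
        subprocess.run(["ceph", *args], check=True)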
Dec 05 02:07:22 compute-0 sudo[439193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:22 compute-0 sudo[439193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:22 compute-0 sudo[439193]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:22 compute-0 sudo[439218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:07:22 compute-0 sudo[439218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:22 compute-0 sudo[439218]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:22 compute-0 sudo[439243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:22 compute-0 sudo[439243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:22 compute-0 sudo[439243]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:23 compute-0 sudo[439268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:07:23 compute-0 sudo[439268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:23 compute-0 ceph-mon[192914]: pgmap v1745: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.655078757 +0000 UTC m=+0.093742768 container create 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.619785078 +0000 UTC m=+0.058449149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:23 compute-0 systemd[1]: Started libpod-conmon-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope.
Dec 05 02:07:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.819465215 +0000 UTC m=+0.258129286 container init 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.831744269 +0000 UTC m=+0.270408280 container start 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.838796727 +0000 UTC m=+0.277460728 container attach 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:07:23 compute-0 admiring_mahavira[439349]: 167 167
Dec 05 02:07:23 compute-0 systemd[1]: libpod-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope: Deactivated successfully.
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.843026295 +0000 UTC m=+0.281690306 container died 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-844f66852ba57d0e1401f8f387cc10fc72ea5a1fff22d6a3f455816e3ae451ca-merged.mount: Deactivated successfully.
Dec 05 02:07:23 compute-0 podman[439352]: 2025-12-05 02:07:23.906967668 +0000 UTC m=+0.135766077 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.918037008 +0000 UTC m=+0.356700999 container remove 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:07:23 compute-0 podman[439350]: 2025-12-05 02:07:23.931713851 +0000 UTC m=+0.163258697 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
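[editor's note] The container health_status=healthy events in this stretch are podman's per-container healthcheck timers firing the test configured in config_data (the /openstack/healthcheck mount). The same check can be run on demand; a sketch assuming the container names shown in these entries:

    import subprocess

    # 'podman healthcheck run' exits 0 when the configured test passes.
    for name in ("ovn_metadata_agent", "ceilometer_agent_compute"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")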
Dec 05 02:07:23 compute-0 systemd[1]: libpod-conmon-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope: Deactivated successfully.
Dec 05 02:07:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.182251614 +0000 UTC m=+0.097261338 container create 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.154163706 +0000 UTC m=+0.069173410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:24 compute-0 systemd[1]: Started libpod-conmon-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope.
Dec 05 02:07:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.371141188 +0000 UTC m=+0.286150932 container init 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.391500739 +0000 UTC m=+0.306510433 container start 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.397492747 +0000 UTC m=+0.312502491 container attach 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:25 compute-0 sshd-session[439274]: Invalid user demo from 45.140.17.124 port 57294
Dec 05 02:07:25 compute-0 nova_compute[349548]: 2025-12-05 02:07:25.321 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:25 compute-0 ceph-mon[192914]: pgmap v1746: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:25 compute-0 sshd-session[439274]: Connection reset by invalid user demo 45.140.17.124 port 57294 [preauth]
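[editor's note] The two sshd-session lines above are background noise unrelated to the deployment: an Internet host probing for a 'demo' account and dropping the connection before authenticating. A hedged triage sketch, assuming the journal is queried on this same host and that sshd-session messages fall under sshd.service:

    import subprocess

    # Count invalid-user probes from the source IP seen in the log.
    out = subprocess.run(
        ["journalctl", "-u", "sshd", "--grep", "45.140.17.124", "--no-pager"],
        capture_output=True, text=True,
    ).stdout
    print(out.count("Invalid user"), "invalid-user attempts from 45.140.17.124")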
Dec 05 02:07:25 compute-0 optimistic_moser[439421]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:07:25 compute-0 optimistic_moser[439421]: --> relative data size: 1.0
Dec 05 02:07:25 compute-0 optimistic_moser[439421]: --> All data devices are unavailable
Dec 05 02:07:25 compute-0 systemd[1]: libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Deactivated successfully.
Dec 05 02:07:25 compute-0 systemd[1]: libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Consumed 1.300s CPU time.
Dec 05 02:07:25 compute-0 conmon[439421]: conmon 1993b5864afd01f3e895 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope/container/memory.events
Dec 05 02:07:25 compute-0 podman[439452]: 2025-12-05 02:07:25.823207438 +0000 UTC m=+0.050671732 container died 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e-merged.mount: Deactivated successfully.
Dec 05 02:07:25 compute-0 podman[439452]: 2025-12-05 02:07:25.925100914 +0000 UTC m=+0.152565188 container remove 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 02:07:25 compute-0 systemd[1]: libpod-conmon-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Deactivated successfully.
Dec 05 02:07:25 compute-0 sudo[439268]: pam_unix(sudo:session): session closed for user root
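[editor's note] This closes the sudo session opened at 02:07:23, which wrapped a cephadm ceph-volume run inside the short-lived ceph containers above (admiring_mahavira, optimistic_moser). The report from optimistic_moser ('0 physical, 3 LVM' data devices, 'All data devices are unavailable') indicates the lvm batch was a no-op, most plausibly because the three LVs already carry OSDs, as the lvm list output further down confirms (osd_id 0-2 on exactly these LVs). A sketch of the core invocation, arguments copied from the sudo audit entry; the original additionally passed --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group, --image, and a --config-json payload on stdin, omitted here:

    import subprocess

    # Replay of the audited ceph-volume call; '--' separates cephadm options
    # from the ceph-volume arguments. Assumes root on compute-0.
    subprocess.run([
        "/bin/python3",
        "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--timeout", "895",
        "ceph-volume", "--fsid", "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "--", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",
    ], check=True)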
Dec 05 02:07:26 compute-0 podman[439466]: 2025-12-05 02:07:26.019448608 +0000 UTC m=+0.113343558 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec 05 02:07:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:26 compute-0 sudo[439485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:26 compute-0 sudo[439485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:26 compute-0 sudo[439485]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:26 compute-0 sudo[439512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:07:26 compute-0 sudo[439512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:26 compute-0 sudo[439512]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:26 compute-0 sudo[439537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:26 compute-0 sudo[439537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:26 compute-0 sudo[439537]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:26 compute-0 sudo[439562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:07:26 compute-0 sudo[439562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
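[editor's note] The command audited here lists existing ceph-volume LVs as JSON; the hardcore_goldwasser container below prints the result, mapping each OSD id to its backing LV and loop device. A sketch that runs the same call and parses that output shape (assumption: the shipped cephadm passes the container's stdout through unchanged):

    import json, subprocess

    # Same lvm list call as audited above, parsed per OSD id.
    out = subprocess.run([
        "/bin/python3",
        "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "--", "lvm", "list", "--format", "json",
    ], capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"])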
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:07:27 compute-0 nova_compute[349548]: 2025-12-05 02:07:27.008 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
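[editor's note] This pg_autoscaler block is one evaluation pass: for each pool it logs the pool's share of raw capacity, a bias, and a fractional 'pg target', then quantizes to a power of two (held at the pool's current or minimum pg_num where the fraction is tiny, hence the 32s, the 16 for the biased cephfs metadata pool, and the 1 for .mgr). The logged targets are consistent with ratio * bias * 300, where 300 would be 3 OSDs times the default mon_target_pg_per_osd of 100; a worked check (the factor is inferred from the logged numbers, not read from configuration):

    # Reproduce two 'pg target' values from the log above.
    for pool, ratio, bias in (
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ):
        print(pool, ratio * bias * 300)  # matches the logged pg targets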
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.159805972 +0000 UTC m=+0.082499983 container create 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.124459461 +0000 UTC m=+0.047153542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:27 compute-0 systemd[1]: Started libpod-conmon-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope.
Dec 05 02:07:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.30815691 +0000 UTC m=+0.230850961 container init 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.326770222 +0000 UTC m=+0.249464243 container start 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.333607494 +0000 UTC m=+0.256301565 container attach 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 02:07:27 compute-0 vigilant_nash[439638]: 167 167
Dec 05 02:07:27 compute-0 systemd[1]: libpod-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope: Deactivated successfully.
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.339616532 +0000 UTC m=+0.262310553 container died 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec 05 02:07:27 compute-0 ceph-mon[192914]: pgmap v1747: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a4067de49258979567a0deef701ec512c00fe301ac30f4132f698124c3b72b-merged.mount: Deactivated successfully.
Dec 05 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.417406163 +0000 UTC m=+0.340100184 container remove 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:07:27 compute-0 systemd[1]: libpod-conmon-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope: Deactivated successfully.
Dec 05 02:07:27 compute-0 sshd-session[439451]: Connection reset by authenticating user root 45.140.17.124 port 57302 [preauth]
Dec 05 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.722454363 +0000 UTC m=+0.098975825 container create 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.686166726 +0000 UTC m=+0.062688258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:27 compute-0 systemd[1]: Started libpod-conmon-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope.
Dec 05 02:07:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.908686813 +0000 UTC m=+0.285208325 container init 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.934019013 +0000 UTC m=+0.310540475 container start 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.940163905 +0000 UTC m=+0.316685407 container attach 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 05 02:07:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]: {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     "0": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "devices": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "/dev/loop3"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             ],
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_name": "ceph_lv0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_size": "21470642176",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "name": "ceph_lv0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "tags": {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_name": "ceph",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.crush_device_class": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.encrypted": "0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_id": "0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.vdo": "0"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             },
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "vg_name": "ceph_vg0"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         }
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     ],
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     "1": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "devices": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "/dev/loop4"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             ],
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_name": "ceph_lv1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_size": "21470642176",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "name": "ceph_lv1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "tags": {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_name": "ceph",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.crush_device_class": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.encrypted": "0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_id": "1",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.vdo": "0"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             },
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "vg_name": "ceph_vg1"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         }
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     ],
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     "2": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "devices": [
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "/dev/loop5"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             ],
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_name": "ceph_lv2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_size": "21470642176",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "name": "ceph_lv2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "tags": {
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.cluster_name": "ceph",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.crush_device_class": "",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.encrypted": "0",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osd_id": "2",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:                 "ceph.vdo": "0"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             },
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "type": "block",
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:             "vg_name": "ceph_vg2"
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:         }
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]:     ]
Dec 05 02:07:28 compute-0 hardcore_goldwasser[439676]: }
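The JSON document that closes above is a ceph-volume lvm list --format json report emitted by the one-shot hardcore_goldwasser container: the top-level keys are OSD ids, each holding the logical volume backing that OSD, with the LVM tags present both flattened in lv_tags and parsed in tags. A minimal Python sketch of consuming such a report; reading it from a capture file is an assumption, and the field names are exactly those logged:

    import json

    # Assumption: the report above was saved to lvm_list.json, e.g. by
    # redirecting the wrapped ceph-volume call's stdout.
    with open("lvm_list.json") as f:
        lvm_report = json.load(f)  # top-level keys are OSD ids: "0", "1", "2"

    for osd_id, lvs in lvm_report.items():
        for lv in lvs:
            tags = lv["tags"]
            # lv_tags is the same data flattened as key=value pairs; for the
            # values logged here (no embedded commas) it round-trips cleanly:
            assert tags == dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"])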
Dec 05 02:07:28 compute-0 systemd[1]: libpod-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope: Deactivated successfully.
Dec 05 02:07:28 compute-0 podman[439660]: 2025-12-05 02:07:28.796833836 +0000 UTC m=+1.173355278 container died 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 02:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97-merged.mount: Deactivated successfully.
Dec 05 02:07:28 compute-0 podman[439660]: 2025-12-05 02:07:28.901237833 +0000 UTC m=+1.277759295 container remove 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 02:07:28 compute-0 systemd[1]: libpod-conmon-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope: Deactivated successfully.
Dec 05 02:07:28 compute-0 sudo[439562]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:29 compute-0 sudo[439699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:29 compute-0 sudo[439699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:29 compute-0 sudo[439699]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:29 compute-0 sudo[439724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:07:29 compute-0 sudo[439724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:29 compute-0 sudo[439724]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:29 compute-0 sudo[439749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:29 compute-0 sudo[439749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:29 compute-0 sudo[439749]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:29 compute-0 ceph-mon[192914]: pgmap v1748: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:29 compute-0 sudo[439774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:07:29 compute-0 sudo[439774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
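The COMMAND recorded at 02:07:29 shows how the cephadm mgr module drives ceph-volume on this host: a copied-in cephadm binary under /var/lib/ceph/<fsid>/, pinned to an exact image digest, given a timeout, with everything after "--" passed through to ceph-volume inside a throwaway container. A sketch of issuing the same call from Python; all values are copied from the log line, and passwordless sudo for ceph-admin is assumed, as the surrounding pam_unix sessions indicate:

    import json
    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    raw_report = json.loads(out)  # keyed by osd_uuid; logged below at 02:07:31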
Dec 05 02:07:29 compute-0 podman[158197]: time="2025-12-05T02:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:07:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:07:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8177 "" "Go-http-client/1.1"
Dec 05 02:07:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:30 compute-0 sshd-session[439678]: Connection reset by authenticating user root 45.140.17.124 port 57314 [preauth]
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.131321211 +0000 UTC m=+0.079072067 container create 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.107238316 +0000 UTC m=+0.054989192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:30 compute-0 systemd[1]: Started libpod-conmon-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope.
Dec 05 02:07:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.274038272 +0000 UTC m=+0.221789178 container init 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.291537902 +0000 UTC m=+0.239288768 container start 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:07:30 compute-0 crazy_hugle[439853]: 167 167
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.300084702 +0000 UTC m=+0.247835558 container attach 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:07:30 compute-0 systemd[1]: libpod-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope: Deactivated successfully.
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.302434367 +0000 UTC m=+0.250185263 container died 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:07:30 compute-0 nova_compute[349548]: 2025-12-05 02:07:30.327 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-164a0784eb7ea087632fcc88a4bd3e053ba1b0a94051bc20c3e871c7db1d22de-merged.mount: Deactivated successfully.
Dec 05 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.37851713 +0000 UTC m=+0.326267976 container remove 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:07:30 compute-0 systemd[1]: libpod-conmon-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope: Deactivated successfully.
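The single line crazy_hugle printed ("167 167") is cephadm discovering the uid and gid of the ceph user inside the pinned image before it touches any host directories. Upstream cephadm derives this by running stat inside a throwaway container; probing /var/lib/ceph is stated here as an assumption about the exact path. The same check by hand:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Assumption: /var/lib/ceph is the probed path; prints "167 167" here.
    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip())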
Dec 05 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.624851565 +0000 UTC m=+0.089502420 container create 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.593317521 +0000 UTC m=+0.057968386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:07:30 compute-0 systemd[1]: Started libpod-conmon-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope.
Dec 05 02:07:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.806795694 +0000 UTC m=+0.271446609 container init 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.824666985 +0000 UTC m=+0.289317850 container start 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.831542268 +0000 UTC m=+0.296193183 container attach 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:31 compute-0 ceph-mon[192914]: pgmap v1749: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:07:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:31.590 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:07:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:31.592 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:07:31 compute-0 nova_compute[349548]: 2025-12-05 02:07:31.593 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]: {
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_id": 0,
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "type": "bluestore"
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     },
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_id": 1,
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "type": "bluestore"
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     },
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_id": 2,
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:         "type": "bluestore"
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]:     }
Dec 05 02:07:31 compute-0 vigorous_mendeleev[439895]: }
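This raw list report describes the same three OSDs as the lvm list output at 02:07:28, keyed differently: its top-level keys are the osd_uuid values that appear as ceph.osd_fsid tags there, and each device is the device-mapper node for the same logical volume. A small consistency check joining the two documents; lvm_report and raw_report are assumed to hold the two JSON payloads logged above:

    def join_reports(lvm_report: dict, raw_report: dict) -> dict:
        """Return {osd_id: dm_device}, asserting the two reports agree."""
        devices = {}
        for osd_id, lvs in lvm_report.items():
            fsid = lvs[0]["tags"]["ceph.osd_fsid"]
            raw = raw_report[fsid]              # e.g. "8c4de221-..." for OSD 0
            assert raw["osd_id"] == int(osd_id)
            assert raw["type"] == "bluestore"
            devices[osd_id] = raw["device"]     # /dev/mapper/ceph_vg0-ceph_lv0, ...
        return devices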
Dec 05 02:07:32 compute-0 systemd[1]: libpod-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Deactivated successfully.
Dec 05 02:07:32 compute-0 podman[439879]: 2025-12-05 02:07:32.009550907 +0000 UTC m=+1.474201772 container died 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:32 compute-0 systemd[1]: libpod-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Consumed 1.186s CPU time.
Dec 05 02:07:32 compute-0 nova_compute[349548]: 2025-12-05 02:07:32.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840-merged.mount: Deactivated successfully.
Dec 05 02:07:32 compute-0 podman[439879]: 2025-12-05 02:07:32.103215702 +0000 UTC m=+1.567866537 container remove 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:07:32 compute-0 systemd[1]: libpod-conmon-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Deactivated successfully.
Dec 05 02:07:32 compute-0 sudo[439774]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:07:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:07:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 371e2a94-8120-4590-9854-89f00f8cb64f does not exist
Dec 05 02:07:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a3dc9336-5824-4bed-953e-eb43ed5e065f does not exist
Dec 05 02:07:32 compute-0 sudo[439940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:07:32 compute-0 sudo[439940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:32 compute-0 sudo[439940]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:32 compute-0 sudo[439965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:07:32 compute-0 sudo[439965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:07:32 compute-0 sudo[439965]: pam_unix(sudo:session): session closed for user root
Dec 05 02:07:32 compute-0 sshd-session[439856]: Connection reset by authenticating user root 45.140.17.124 port 57322 [preauth]
Dec 05 02:07:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:33 compute-0 nova_compute[349548]: 2025-12-05 02:07:33.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:33 compute-0 ceph-mon[192914]: pgmap v1750: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:07:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:34 compute-0 nova_compute[349548]: 2025-12-05 02:07:34.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:34 compute-0 sshd-session[439990]: Invalid user ubuntu from 45.140.17.124 port 25926
Dec 05 02:07:34 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:34.594 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:07:35 compute-0 sshd-session[439990]: Connection reset by invalid user ubuntu 45.140.17.124 port 25926 [preauth]
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.098 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:07:35 compute-0 ceph-mon[192914]: pgmap v1751: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:07:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272371931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.613 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
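The ceph df round trip above is nova's resource tracker sizing its RBD-backed storage: it shells out with the openstack client id, parses the JSON, and reports cluster-wide free space as the hypervisor's free disk. A sketch of that computation; the command mirrors the logged CMD, and stats.total_avail_bytes is ceph's df JSON field name (an assumption worth checking against your ceph release):

    import json
    import subprocess

    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout)

    free_gb = df["stats"]["total_avail_bytes"] / 1024**3
    print(f"free_disk={free_gb}GB")  # ~59.99 here, matching the resource view below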
Dec 05 02:07:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.138 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4098MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:07:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3272371931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.225 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.226 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.248 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:07:36 compute-0 podman[440035]: 2025-12-05 02:07:36.741516469 +0000 UTC m=+0.142158075 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:07:36 compute-0 podman[440037]: 2025-12-05 02:07:36.745643275 +0000 UTC m=+0.131066735 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git)
Dec 05 02:07:36 compute-0 podman[440034]: 2025-12-05 02:07:36.75366277 +0000 UTC m=+0.159384489 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 05 02:07:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:07:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842685520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:07:36 compute-0 podman[440036]: 2025-12-05 02:07:36.776111039 +0000 UTC m=+0.170008746 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
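The four health_status events above are podman's timer-driven healthchecks for the edpm-managed containers, each executing the /openstack/healthcheck script named in its config_data. The same checks can be run on demand; container names are taken from the log, and podman healthcheck run exits 0 when the check passes:

    import subprocess

    for name in ("node_exporter", "openstack_network_exporter",
                 "multipathd", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")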
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.787 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.796 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.817 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
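The inventory nova reports above is what placement schedules against; effective capacity per resource class follows placement's usual formula, (total - reserved) * allocation_ratio, so this host offers 32 VCPU, 7168 MB of RAM, and 52 GB of disk. Worked through with the logged values (the formula is standard placement behavior, sketched here rather than quoted from its source):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # VCPU 32, MEMORY_MB 7168, DISK_GB 52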
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.820 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.820 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.015 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 05 02:07:37 compute-0 ceph-mon[192914]: pgmap v1752: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3842685520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:07:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 05 02:07:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.992 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.992 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.993 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.994 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:07:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 1.2 MiB/s wr, 1 op/s
Dec 05 02:07:38 compute-0 ceph-mon[192914]: osdmap e133: 3 total, 3 up, 3 in
Dec 05 02:07:39 compute-0 nova_compute[349548]: 2025-12-05 02:07:39.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:39 compute-0 nova_compute[349548]: 2025-12-05 02:07:39.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 05 02:07:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 05 02:07:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 05 02:07:39 compute-0 ceph-mon[192914]: pgmap v1754: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 1.2 MiB/s wr, 1 op/s
Dec 05 02:07:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.6 MiB/s wr, 2 op/s
Dec 05 02:07:40 compute-0 ceph-mon[192914]: osdmap e134: 3 total, 3 up, 3 in
Dec 05 02:07:40 compute-0 nova_compute[349548]: 2025-12-05 02:07:40.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:41 compute-0 nova_compute[349548]: 2025-12-05 02:07:41.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:41 compute-0 ceph-mon[192914]: pgmap v1756: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.6 MiB/s wr, 2 op/s
Dec 05 02:07:42 compute-0 nova_compute[349548]: 2025-12-05 02:07:42.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 49 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 05 02:07:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:43 compute-0 ceph-mon[192914]: pgmap v1757: 321 pgs: 321 active+clean; 49 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec 05 02:07:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 05 02:07:44 compute-0 nova_compute[349548]: 2025-12-05 02:07:44.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:07:45 compute-0 ceph-mon[192914]: pgmap v1758: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 05 02:07:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:07:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:07:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:07:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:07:45 compute-0 nova_compute[349548]: 2025-12-05 02:07:45.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 41 op/s
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:07:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:07:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:07:47 compute-0 nova_compute[349548]: 2025-12-05 02:07:47.021 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:47 compute-0 ceph-mon[192914]: pgmap v1759: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 41 op/s
Dec 05 02:07:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 36 op/s
Dec 05 02:07:49 compute-0 ceph-mon[192914]: pgmap v1760: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 36 op/s
Dec 05 02:07:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec 05 02:07:50 compute-0 nova_compute[349548]: 2025-12-05 02:07:50.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:51 compute-0 ceph-mon[192914]: pgmap v1761: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec 05 02:07:51 compute-0 podman[440123]: 2025-12-05 02:07:51.739773018 +0000 UTC m=+0.147411543 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 02:07:51 compute-0 podman[440124]: 2025-12-05 02:07:51.756190508 +0000 UTC m=+0.158180995 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:07:52 compute-0 nova_compute[349548]: 2025-12-05 02:07:52.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.4 MiB/s wr, 30 op/s
Dec 05 02:07:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:53 compute-0 ceph-mon[192914]: pgmap v1762: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.4 MiB/s wr, 30 op/s
Dec 05 02:07:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 683 KiB/s wr, 10 op/s
Dec 05 02:07:54 compute-0 podman[440167]: 2025-12-05 02:07:54.723806228 +0000 UTC m=+0.131330492 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 02:07:54 compute-0 podman[440166]: 2025-12-05 02:07:54.736415441 +0000 UTC m=+0.139366927 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 02:07:55 compute-0 nova_compute[349548]: 2025-12-05 02:07:55.347 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:55 compute-0 ceph-mon[192914]: pgmap v1763: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 683 KiB/s wr, 10 op/s
Dec 05 02:07:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.202 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.203 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:07:56 compute-0 podman[440205]: 2025-12-05 02:07:56.710810134 +0000 UTC m=+0.117702381 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 02:07:57 compute-0 nova_compute[349548]: 2025-12-05 02:07:57.029 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:07:57 compute-0 ceph-mon[192914]: pgmap v1764: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:07:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:59 compute-0 ceph-mon[192914]: pgmap v1765: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:07:59 compute-0 podman[158197]: time="2025-12-05T02:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8174 "" "Go-http-client/1.1"
Dec 05 02:08:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:00 compute-0 nova_compute[349548]: 2025-12-05 02:08:00.350 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:00 compute-0 ceph-mon[192914]: pgmap v1766: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:08:01 compute-0 ovn_controller[89286]: 2025-12-05T02:08:01Z|00065|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 02:08:02 compute-0 nova_compute[349548]: 2025-12-05 02:08:02.031 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:03 compute-0 ceph-mon[192914]: pgmap v1767: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:05 compute-0 ceph-mon[192914]: pgmap v1768: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:05 compute-0 nova_compute[349548]: 2025-12-05 02:08:05.354 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:07 compute-0 nova_compute[349548]: 2025-12-05 02:08:07.034 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:07 compute-0 ceph-mon[192914]: pgmap v1769: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:07 compute-0 podman[440225]: 2025-12-05 02:08:07.747385624 +0000 UTC m=+0.145600862 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:08:07 compute-0 podman[440224]: 2025-12-05 02:08:07.752089586 +0000 UTC m=+0.160056278 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 05 02:08:07 compute-0 podman[440227]: 2025-12-05 02:08:07.767687363 +0000 UTC m=+0.155355206 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 05 02:08:07 compute-0 podman[440226]: 2025-12-05 02:08:07.775315467 +0000 UTC m=+0.167034333 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 05 02:08:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:09 compute-0 ceph-mon[192914]: pgmap v1770: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:10 compute-0 nova_compute[349548]: 2025-12-05 02:08:10.358 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:11 compute-0 nova_compute[349548]: 2025-12-05 02:08:11.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:11 compute-0 ceph-mon[192914]: pgmap v1771: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:12 compute-0 nova_compute[349548]: 2025-12-05 02:08:12.037 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:13 compute-0 ceph-mon[192914]: pgmap v1772: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:14 compute-0 nova_compute[349548]: 2025-12-05 02:08:14.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:15 compute-0 nova_compute[349548]: 2025-12-05 02:08:15.137 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:15 compute-0 ceph-mon[192914]: pgmap v1773: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:15 compute-0 nova_compute[349548]: 2025-12-05 02:08:15.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:16 compute-0 nova_compute[349548]: 2025-12-05 02:08:16.065 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:08:16
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Dec 05 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:08:17 compute-0 nova_compute[349548]: 2025-12-05 02:08:17.040 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:17 compute-0 ceph-mon[192914]: pgmap v1774: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:08:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:18 compute-0 nova_compute[349548]: 2025-12-05 02:08:18.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:19 compute-0 ceph-mon[192914]: pgmap v1775: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:20 compute-0 nova_compute[349548]: 2025-12-05 02:08:20.363 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:21 compute-0 ceph-mon[192914]: pgmap v1776: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:22 compute-0 nova_compute[349548]: 2025-12-05 02:08:22.044 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:22 compute-0 nova_compute[349548]: 2025-12-05 02:08:22.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:22 compute-0 podman[440313]: 2025-12-05 02:08:22.672526162 +0000 UTC m=+0.089886221 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 02:08:22 compute-0 podman[440314]: 2025-12-05 02:08:22.691404821 +0000 UTC m=+0.103249085 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:08:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:23 compute-0 ceph-mon[192914]: pgmap v1777: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:23 compute-0 nova_compute[349548]: 2025-12-05 02:08:23.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:23 compute-0 nova_compute[349548]: 2025-12-05 02:08:23.833 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:24 compute-0 nova_compute[349548]: 2025-12-05 02:08:24.290 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:25 compute-0 ceph-mon[192914]: pgmap v1778: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:25 compute-0 nova_compute[349548]: 2025-12-05 02:08:25.367 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:25 compute-0 podman[440355]: 2025-12-05 02:08:25.706202783 +0000 UTC m=+0.113502683 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:08:25 compute-0 podman[440354]: 2025-12-05 02:08:25.748129688 +0000 UTC m=+0.152907667 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:08:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:08:27 compute-0 nova_compute[349548]: 2025-12-05 02:08:27.047 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:27 compute-0 ceph-mon[192914]: pgmap v1779: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:27 compute-0 podman[440393]: 2025-12-05 02:08:27.723575546 +0000 UTC m=+0.131087705 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 02:08:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:29 compute-0 nova_compute[349548]: 2025-12-05 02:08:29.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:29 compute-0 ceph-mon[192914]: pgmap v1780: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:29 compute-0 podman[158197]: time="2025-12-05T02:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8167 "" "Go-http-client/1.1"
Dec 05 02:08:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:30 compute-0 nova_compute[349548]: 2025-12-05 02:08:30.370 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:31 compute-0 ceph-mon[192914]: pgmap v1781: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:08:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:32.011 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:08:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:32.013 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:08:32 compute-0 nova_compute[349548]: 2025-12-05 02:08:32.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:32 compute-0 nova_compute[349548]: 2025-12-05 02:08:32.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:32 compute-0 sudo[440413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:32 compute-0 sudo[440413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:32 compute-0 sudo[440413]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:32 compute-0 sudo[440438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:08:32 compute-0 sudo[440438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:32 compute-0 sudo[440438]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:32 compute-0 sudo[440463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:32 compute-0 sudo[440463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:32 compute-0 sudo[440463]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:33 compute-0 sudo[440488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:08:33 compute-0 sudo[440488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.379 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.381 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.401 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:08:33 compute-0 ceph-mon[192914]: pgmap v1782: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.604 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.605 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.619 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.620 349552 INFO nova.compute.claims [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:08:33 compute-0 sudo[440488]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a1545e1c-6890-4c10-9bbe-0866a3184cdb does not exist
Dec 05 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 23201496-1d82-464d-b3ed-874c6c9cae29 does not exist
Dec 05 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2fa80860-0310-4b1a-8aab-6910291d19ac does not exist
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.767 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:33 compute-0 sudo[440543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:33 compute-0 sudo[440543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:33 compute-0 sudo[440543]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:33 compute-0 sudo[440569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:08:33 compute-0 sudo[440569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:33 compute-0 sudo[440569]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:34 compute-0 sudo[440613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:34 compute-0 sudo[440613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:34 compute-0 sudo[440613]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:34 compute-0 sudo[440638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:08:34 compute-0 sudo[440638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829770048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.288 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.300 349552 DEBUG nova.compute.provider_tree [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.329 349552 DEBUG nova.scheduler.client.report [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.375 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.376 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:08:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/829770048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.455 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.456 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.489 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.522 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.620 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.622 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.623 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating image(s)
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.672 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.693463337 +0000 UTC m=+0.097341500 container create 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.637656403 +0000 UTC m=+0.041534526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.747 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:34 compute-0 systemd[1]: Started libpod-conmon-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope.
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.793 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.806 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.808 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.835673503 +0000 UTC m=+0.239551796 container init 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.857545516 +0000 UTC m=+0.261423679 container start 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.863124633 +0000 UTC m=+0.267002806 container attach 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 05 02:08:34 compute-0 angry_mirzakhani[440759]: 167 167
Dec 05 02:08:34 compute-0 systemd[1]: libpod-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope: Deactivated successfully.
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.869377168 +0000 UTC m=+0.273255301 container died 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd0ee023f3266ff9c1677ef13b94e6a1afa0fe71d93b06f8e6dd9daa9a86c59-merged.mount: Deactivated successfully.
Dec 05 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.945171522 +0000 UTC m=+0.349049655 container remove 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:08:34 compute-0 systemd[1]: libpod-conmon-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope: Deactivated successfully.
Dec 05 02:08:35 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:35.015 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.185 349552 DEBUG nova.policy [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e8484f22ce84af99708d2e728179b92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '159039e5ad4a46a7be912cd9756c76c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.206046124 +0000 UTC m=+0.076006681 container create f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.179544462 +0000 UTC m=+0.049505019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:35 compute-0 systemd[1]: Started libpod-conmon-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope.
Dec 05 02:08:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.347944332 +0000 UTC m=+0.217904869 container init f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.372262893 +0000 UTC m=+0.242223410 container start f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.373 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.377683525 +0000 UTC m=+0.247644082 container attach f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:08:35 compute-0 ceph-mon[192914]: pgmap v1783: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.443590) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515443686, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3366005, "memory_usage": 3427384, "flush_reason": "Manual Compaction"}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.465 349552 DEBUG nova.virt.libvirt.imagebackend [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/e9091bfb-b431-47c9-a284-79372046956b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/e9091bfb-b431-47c9-a284-79372046956b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515469531, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3299961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34743, "largest_seqno": 36794, "table_properties": {"data_size": 3290606, "index_size": 5913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18747, "raw_average_key_size": 20, "raw_value_size": 3271973, "raw_average_value_size": 3506, "num_data_blocks": 262, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900294, "oldest_key_time": 1764900294, "file_creation_time": 1764900515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 26005 microseconds, and 10259 cpu microseconds.
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.469602) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3299961 bytes OK
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.469626) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471607) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471623) EVENT_LOG_v1 {"time_micros": 1764900515471618, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3357438, prev total WAL file size 3357438, number of live WAL files 2.
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.473043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3222KB)], [80(6834KB)]
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515473122, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10298778, "oldest_snapshot_seqno": -1}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5669 keys, 8588057 bytes, temperature: kUnknown
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515543525, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8588057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8550812, "index_size": 21967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142945, "raw_average_key_size": 25, "raw_value_size": 8448933, "raw_average_value_size": 1490, "num_data_blocks": 902, "num_entries": 5669, "num_filter_entries": 5669, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.544562) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8588057 bytes
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.547082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.2 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6187, records dropped: 518 output_compression: NoCompression
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.547104) EVENT_LOG_v1 {"time_micros": 1764900515547094, "job": 46, "event": "compaction_finished", "compaction_time_micros": 70466, "compaction_time_cpu_micros": 42322, "output_level": 6, "num_output_files": 1, "total_output_size": 8588057, "num_input_records": 6187, "num_output_records": 5669, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515548232, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515549874, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.472727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.764 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.765 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.807 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.881 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.882 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.894 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.895 349552 INFO nova.compute.claims [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.058 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.080 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.081 349552 DEBUG nova.compute.provider_tree [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.098 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.126 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.237 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 05 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 05 02:08:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.631 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Successfully created port: 1eebaade-abb1-412c-95f2-2b7240026f85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:08:36 compute-0 naughty_ride[440809]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:08:36 compute-0 naughty_ride[440809]: --> relative data size: 1.0
Dec 05 02:08:36 compute-0 naughty_ride[440809]: --> All data devices are unavailable
Dec 05 02:08:36 compute-0 systemd[1]: libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Deactivated successfully.
Dec 05 02:08:36 compute-0 systemd[1]: libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Consumed 1.246s CPU time.
Dec 05 02:08:36 compute-0 conmon[440809]: conmon f03cd81a17d2c9550617 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope/container/memory.events
Dec 05 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028471828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.787 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.795 349552 DEBUG nova.compute.provider_tree [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:08:36 compute-0 podman[440858]: 2025-12-05 02:08:36.799637071 +0000 UTC m=+0.053617664 container died f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.812 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968-merged.mount: Deactivated successfully.
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.846 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.847 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:08:36 compute-0 podman[440858]: 2025-12-05 02:08:36.877193775 +0000 UTC m=+0.131174348 container remove f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:08:36 compute-0 systemd[1]: libpod-conmon-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Deactivated successfully.
Dec 05 02:08:36 compute-0 sudo[440638]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.953 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.055 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.061 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.063 349552 DEBUG nova.virt.images [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] e9091bfb-b431-47c9-a284-79372046956b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.065 349552 DEBUG nova.privsep.utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 02:08:37 compute-0 sudo[440874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.067 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:37 compute-0 sudo[440874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:37 compute-0 sudo[440874]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.093 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:08:37 compute-0 sudo[440903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:08:37 compute-0 sudo[440903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:37 compute-0 sudo[440903]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.284 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.290 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:37 compute-0 sudo[440933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:37 compute-0 sudo[440933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:37 compute-0 sudo[440933]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.391 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.393 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:37 compute-0 sudo[440959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.436 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:37 compute-0 sudo[440959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.450 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:37 compute-0 ceph-mon[192914]: pgmap v1784: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:08:37 compute-0 ceph-mon[192914]: osdmap e135: 3 total, 3 up, 3 in
Dec 05 02:08:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4028471828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.489 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.490 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.490 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.495 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.495 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.496 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.496 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.503 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.765 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.766 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.767 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.768 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.772 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.793 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.827 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.888 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:37 compute-0 podman[441072]: 2025-12-05 02:08:37.962453274 +0000 UTC m=+0.057961126 container create 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:08:38 compute-0 systemd[1]: Started libpod-conmon-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope.
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:37.942349141 +0000 UTC m=+0.037857013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.057 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.059 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.060 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating image(s)
Dec 05 02:08:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 KiB/s wr, 16 op/s
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.077071457 +0000 UTC m=+0.172579329 container init 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.087249022 +0000 UTC m=+0.182756864 container start 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.091481931 +0000 UTC m=+0.186989793 container attach 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:08:38 compute-0 sad_solomon[441152]: 167 167
Dec 05 02:08:38 compute-0 systemd[1]: libpod-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope: Deactivated successfully.
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.09608333 +0000 UTC m=+0.191591172 container died 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:08:38 compute-0 podman[441131]: 2025-12-05 02:08:38.120453593 +0000 UTC m=+0.097301959 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a563331aa56326b2f534c415961d606a01eb1226e6a17d03c55199c8249665a-merged.mount: Deactivated successfully.
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.144 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.148054446 +0000 UTC m=+0.243562288 container remove 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 02:08:38 compute-0 podman[441120]: 2025-12-05 02:08:38.151118292 +0000 UTC m=+0.134746468 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 05 02:08:38 compute-0 podman[441134]: 2025-12-05 02:08:38.156249636 +0000 UTC m=+0.123728099 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm)
Dec 05 02:08:38 compute-0 systemd[1]: libpod-conmon-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope: Deactivated successfully.
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.183 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:38 compute-0 podman[441132]: 2025-12-05 02:08:38.19384109 +0000 UTC m=+0.168771942 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.216 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.231 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168538381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.251 349552 DEBUG nova.policy [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3439b5cde2ff4830bb0294f007842282', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.271 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] resizing rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.321253831 +0000 UTC m=+0.050824826 container create c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.327 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.332 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
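The nova_compute line above shows the image-cache check wrapping qemu-img in oslo.concurrency's prlimit helper, capping the child process at a 1 GiB address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A minimal sketch of the same call through the library, assuming only that oslo.concurrency is installed; the image path and limits are copied from the log line:

    # Sketch of the capped qemu-img call logged above (not nova's code).
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    stdout, stderr = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)
    print(stdout)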
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.334 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
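The ceilometer lines from 02:08:38.328 onward trace one complete polling cycle: each pollster is registered for execution on a ThreadPoolExecutor, the shared [local_instances] discovery runs once, and because the discovery cache is {'local_instances': []} every pollster is skipped before being marked finished. An illustrative sketch of that control flow, not ceilometer's actual implementation; all names are hypothetical:

    # Illustrative only: the register -> discover -> skip -> finished flow
    # the DEBUG lines above describe.
    from concurrent.futures import ThreadPoolExecutor

    def run_polling_task(pollsters, discover):
        discovery_cache = {}
        with ThreadPoolExecutor() as executor:
            # "Registering pollster ... to be executed via executor ..."
            futures = {name: executor.submit(_internal_pollster_run, name,
                                             poll, discover, discovery_cache)
                       for name, poll in pollsters.items()}
            for name, fut in futures.items():
                fut.result()
                print('Finished processing pollster [%s].' % name)

    def _internal_pollster_run(name, poll, discover, cache):
        # "Executing discovery process ... [local_instances]"
        # (the real agent guards this cache against racing threads)
        if 'local_instances' not in cache:
            cache['local_instances'] = discover()
        resources = cache['local_instances']
        if not resources:
            print('Skip pollster %s, no resources found this cycle' % name)
            return []
        return poll(resources)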
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.340 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.341 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
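The Acquiring / acquired / "released" triple above is oslo.concurrency's lockutils at work: nova serializes base-image fetches by locking on the image's content hash, so concurrent boots of the same image populate the cache only once. A minimal sketch of the pattern, assuming oslo.concurrency; the function body is hypothetical (nova builds the real lock inside imagebackend.Image.cache):

    # Minimal lockutils sketch; the lock name is the base-image hash
    # taken from the log lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('ffce62741223dc66a92b5b29c88e68e15f46caf3')
    def fetch_func_sync():
        # hypothetical critical section: fetch the base image into the
        # cache exactly once, no matter how many requests race here
        pass

    fetch_func_sync()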
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.376 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.383 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:38 compute-0 systemd[1]: Started libpod-conmon-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope.
Dec 05 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.298847003 +0000 UTC m=+0.028418018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.455162564 +0000 UTC m=+0.184733599 container init c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:08:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/168538381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.469139606 +0000 UTC m=+0.198710591 container start c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.474925728 +0000 UTC m=+0.204496743 container attach c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.551 349552 DEBUG nova.objects.instance [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'migration_context' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.645 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.646 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Ensure instance console log exists: /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.647 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.647 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.649 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.744 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.870 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] resizing rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
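Between 02:08:38.383 and 02:08:38.870, nova copies the cached base image into the Ceph vms pool and then grows the resulting RBD image to the flavor's 1 GiB root disk (1073741824 bytes, matching the 'DISK_GB': 1 allocation reported later). A sketch of the same two steps: the import command is copied verbatim from the log, while the resize shown here is only the CLI equivalent of what nova actually does through the librbd Python binding (rbd_utils.resize):

    # Storage flow from the log: rbd import, then resize to 1 GiB.
    import subprocess

    BASE = '/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3'
    DISK = '939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk'

    subprocess.run(['rbd', 'import', '--pool', 'vms', BASE, DISK,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    # nova resizes via librbd; CLI equivalent (assumption, 1G = 1073741824 B):
    subprocess.run(['rbd', 'resize', '--pool', 'vms', DISK, '--size', '1G',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)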
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.078 349552 DEBUG nova.objects.instance [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'migration_context' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.094 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.095 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Ensure instance console log exists: /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.096 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.097 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.097 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.238 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.239 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.239 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.240 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.308 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Successfully created port: 2ac46e0a-6888-440f-b155-d4b0e8677304 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance a2605a46-d779-4fc3-aeff-1e040dbcf17d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
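The final resource view reconciles with the two placement allocations logged at 02:08:39.339: assuming nova's default reserved_host_memory_mb of 512 is in effect on this host, used_ram is the 512 MB reservation plus two 128 MB instances, and used_disk is two 1 GB root disks:

    # Worked check of the "Final resource view" numbers (assumes the
    # default reserved_host_memory_mb = 512; not confirmed by this log).
    reserved_host_memory_mb = 512
    instances = [{'MEMORY_MB': 128, 'DISK_GB': 1},   # a2605a46-...
                 {'MEMORY_MB': 128, 'DISK_GB': 1}]   # 939ae9f2-...
    used_ram = reserved_host_memory_mb + sum(i['MEMORY_MB'] for i in instances)
    used_disk = sum(i['DISK_GB'] for i in instances)
    assert (used_ram, used_disk) == (768, 2)  # used_ram=768MB used_disk=2GB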
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]: {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     "0": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "devices": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "/dev/loop3"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             ],
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_name": "ceph_lv0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_size": "21470642176",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "name": "ceph_lv0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "tags": {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_name": "ceph",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.crush_device_class": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.encrypted": "0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_id": "0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.vdo": "0"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             },
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "vg_name": "ceph_vg0"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         }
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     ],
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     "1": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "devices": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "/dev/loop4"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             ],
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_name": "ceph_lv1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_size": "21470642176",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "name": "ceph_lv1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "tags": {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_name": "ceph",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.crush_device_class": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.encrypted": "0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_id": "1",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.vdo": "0"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             },
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "vg_name": "ceph_vg1"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         }
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     ],
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     "2": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "devices": [
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "/dev/loop5"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             ],
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_name": "ceph_lv2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_size": "21470642176",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "name": "ceph_lv2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "tags": {
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.cluster_name": "ceph",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.crush_device_class": "",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.encrypted": "0",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osd_id": "2",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:                 "ceph.vdo": "0"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             },
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "type": "block",
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:             "vg_name": "ceph_vg2"
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:         }
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]:     ]
Dec 05 02:08:39 compute-0 wonderful_maxwell[441358]: }
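The JSON the wonderful_maxwell container printed has the shape of ceph-volume lvm list --format json output: one key per OSD id ("0", "1", "2"), each holding a list of logical volumes with their backing device, LV path, and ceph.* tags. A minimal sketch reducing it to an OSD-to-device map; raw stands for the JSON text above:

    # Parse the per-OSD inventory printed above into a compact summary.
    import json

    def osd_summary(raw):
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            lv = lvs[0]                               # one block LV per OSD here
            out[int(osd_id)] = {
                'device': lv['devices'][0],           # e.g. /dev/loop3
                'lv_path': lv['lv_path'],             # e.g. /dev/ceph_vg0/ceph_lv0
                'osd_fsid': lv['tags']['ceph.osd_fsid'],
                'size_bytes': int(lv['lv_size']),     # 21470642176 ~ 20 GiB
            }
        return out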
Dec 05 02:08:39 compute-0 systemd[1]: libpod-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope: Deactivated successfully.
Dec 05 02:08:39 compute-0 podman[441303]: 2025-12-05 02:08:39.408067014 +0000 UTC m=+1.137638119 container died c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.431 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b-merged.mount: Deactivated successfully.
Dec 05 02:08:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 05 02:08:39 compute-0 ceph-mon[192914]: pgmap v1786: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 KiB/s wr, 16 op/s
Dec 05 02:08:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 05 02:08:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 05 02:08:39 compute-0 podman[441303]: 2025-12-05 02:08:39.516946076 +0000 UTC m=+1.246517081 container remove c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:08:39 compute-0 systemd[1]: libpod-conmon-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope: Deactivated successfully.
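The create/init/start/attach/died/remove sequence above is cephadm's pattern of running each ceph-volume query in a throwaway container. A simplified sketch of the same idea with subprocess (image digest taken from the log; real cephadm adds bind mounts, --privileged, and many more flags):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: --rm makes podman emit the "remove" event on exit.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(sorted(json.loads(out)))  # OSD ids found on this host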
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.554 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Successfully updated port: 1eebaade-abb1-412c-95f2-2b7240026f85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:08:39 compute-0 sudo[440959]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.579 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.580 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.581 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.646 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.647 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.674 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
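The Acquiring/acquired lines come from oslo.concurrency's lockutils, which Nova uses to serialize each build on the instance UUID so only one _locked_do_build_and_run_instance runs per instance at a time. A minimal sketch of the pattern (the guarded body here is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("59e35a32-9023-4e49-be56-9da10df3027f")
    def _locked_do_build_and_run_instance():
        # One caller per lock name at a time; oslo logs the
        # "waited"/"held" durations seen in the journal.
        print("building instance")

    _locked_do_build_and_run_instance()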
Dec 05 02:08:39 compute-0 sudo[441489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:39 compute-0 sudo[441489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:39 compute-0 sudo[441489]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.757 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:39 compute-0 sudo[441533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:08:39 compute-0 sudo[441533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:39 compute-0 sudo[441533]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:39 compute-0 sudo[441558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.931 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:08:39 compute-0 sudo[441558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:39 compute-0 sudo[441558]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.000 349552 DEBUG nova.compute.manager [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.001 349552 DEBUG nova.compute.manager [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing instance network info cache due to event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.002 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123846993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.027 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
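Nova's RBD backend refreshes pool capacity by shelling out to ceph df, as in the command above. A sketch of the same call and the cluster-wide fields it returns (key names may vary slightly between Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    df = json.loads(out)

    # Cluster-wide totals; per-pool entries live under df["pools"].
    print("avail GiB:", df["stats"]["total_avail_bytes"] / 2**30)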
Dec 05 02:08:40 compute-0 sudo[441583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.039 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:08:40 compute-0 sudo[441583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.064 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
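That inventory dict is what placement uses to compute schedulable capacity: (total - reserved) * allocation_ratio. Worked out for the numbers above:

    def capacity(total, reserved, ratio):
        # Placement's effective capacity formula.
        return (total - reserved) * ratio

    print(capacity(8, 0, 4.0))       # 32.0 schedulable vCPUs
    print(capacity(7680, 512, 1.0))  # 7168.0 MB of guest RAM
    print(capacity(59, 1, 0.9))      # 52.2 GB of disk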
Dec 05 02:08:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 KiB/s wr, 20 op/s
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.092 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.093 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.094 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.106 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.107 349552 INFO nova.compute.claims [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.321 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:40 compute-0 ceph-mon[192914]: osdmap e136: 3 total, 3 up, 3 in
Dec 05 02:08:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4123846993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:40 compute-0 ceph-mon[192914]: pgmap v1788: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 KiB/s wr, 20 op/s
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.627305758 +0000 UTC m=+0.076790504 container create ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.599585651 +0000 UTC m=+0.049070467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:40 compute-0 systemd[1]: Started libpod-conmon-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope.
Dec 05 02:08:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.781643164 +0000 UTC m=+0.231127930 container init ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.793508476 +0000 UTC m=+0.242993232 container start ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.799461533 +0000 UTC m=+0.248946289 container attach ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Dec 05 02:08:40 compute-0 infallible_maxwell[441684]: 167 167
Dec 05 02:08:40 compute-0 systemd[1]: libpod-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope: Deactivated successfully.
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.80361849 +0000 UTC m=+0.253103236 container died ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 02:08:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442380969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f65eadd4a47dfbe4c53b2488e5456cdf0eed285da2ab64516bebf27071ffcc-merged.mount: Deactivated successfully.
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.858 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.868 349552 DEBUG nova.compute.provider_tree [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.875020491 +0000 UTC m=+0.324505217 container remove ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 02:08:40 compute-0 systemd[1]: libpod-conmon-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope: Deactivated successfully.
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.898 349552 DEBUG nova.scheduler.client.report [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.931 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.932 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.027 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.028 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.049 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.077 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.085 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Successfully updated port: 2ac46e0a-6888-440f-b155-d4b0e8677304 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.123 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.124 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.124 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.140634556 +0000 UTC m=+0.094202722 container create b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.103977068 +0000 UTC m=+0.057545304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.206 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.210 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.211 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating image(s)
Dec 05 02:08:41 compute-0 systemd[1]: Started libpod-conmon-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope.
Dec 05 02:08:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.274 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
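The xfs notices above are the kernel flagging overlay filesystems formatted without bigtime: their inode timestamps stop at the 32-bit time_t limit, 0x7fffffff seconds after the epoch. The cutoff works out as:

    import datetime

    # 0x7fffffff seconds after the Unix epoch is the 32-bit time_t limit.
    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00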
Dec 05 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.307788051 +0000 UTC m=+0.261356287 container init b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.334309855 +0000 UTC m=+0.287878051 container start b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.341428464 +0000 UTC m=+0.294996650 container attach b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.377 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.431 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.452 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.482 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.488 349552 DEBUG nova.compute.manager [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.488 349552 DEBUG nova.compute.manager [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.489 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1442380969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.532 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
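The prlimit wrapper above caps qemu-img at 1 GiB of address space and 30 s of CPU before letting it probe the cached base image; Nova then reads the JSON fields. A sketch of the bare call (path from the log):

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3"
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # format and size in bytes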
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.533 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.534 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.535 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.586 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.594 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 59e35a32-9023-4e49-be56-9da10df3027f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.628 349552 DEBUG nova.policy [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4745812b7eb47908ded25b1eb7c7328', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.665 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.666 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.879 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.906 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.908 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance network_info: |[{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
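The network_info blob cached above is a list of VIF dicts. A sketch that pulls out the fields visible in the log line, using a minimal stand-in with the same structure and values:

    # Minimal stand-in for the network_info list logged above.
    network_info = [{
        "id": "1eebaade-abb1-412c-95f2-2b7240026f85",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.5"}]}],
        },
    }]

    for vif in network_info:
        net = vif["network"]
        ips = [ip["address"]
               for subnet in net["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], ips, net["meta"]["mtu"])  # port id, fixed IPs, MTU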
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.910 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.911 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.926 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start _get_guest_xml network_info=[{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.961 349552 WARNING nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.978 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.980 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.990 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.991 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
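The two probes above look for a CPU controller first under cgroups v1 and then v2; on this host only v2 exposes it. A minimal version of the v2 check, which reads the controllers advertised at the cgroup root:

    from pathlib import Path

    # On a cgroups-v2 host the root lists its available controllers here.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
    print("cpu controller via cgroups v2:", has_cpu)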
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.993 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.994 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.995 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.996 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.997 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.999 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.000 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.001 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.002 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.002 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.003 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.003 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
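With flavor and image limits all unset (the 0:0:0 lines), Nova enumerates every sockets*cores*threads factorization of the vCPU count under the 65536 caps; for 1 vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration (Nova's real version also orders candidates by preference):

    import itertools

    def possible_topologies(vcpus, max_each=65536):
        # Yield every (sockets, cores, threads) whose product is the vCPU count.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and max(s, c, t) <= max_each:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]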
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.007 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.034 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 59e35a32-9023-4e49-be56-9da10df3027f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 136 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 97 op/s
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.184 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] resizing rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
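The import-then-resize pair above is the RBD image backend's flavor sizing: the base image goes into the vms pool, then the disk is grown to the flavor's root_gb. 1073741824 bytes is exactly root_gb=1 for m1.nano (1 * 1024**3). A CLI-level sketch of the two steps (Nova itself resizes through the librbd API rather than the rbd binary):

    import subprocess

    image = "59e35a32-9023-4e49-be56-9da10df3027f_disk"
    base = "/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3"
    size_mb = (1 * 1024**3) // 2**20  # root_gb=1 -> 1073741824 bytes -> 1024 MB

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", str(size_mb), "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)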
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.446 349552 DEBUG nova.objects.instance [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'migration_context' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.466 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.467 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Ensure instance console log exists: /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.467 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.468 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.468 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270075970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.529 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:42 compute-0 ceph-mon[192914]: pgmap v1789: 321 pgs: 321 active+clean; 136 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 97 op/s
Dec 05 02:08:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2270075970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]: {
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_id": 0,
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "type": "bluestore"
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     },
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_id": 1,
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "type": "bluestore"
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     },
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_id": 2,
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:         "type": "bluestore"
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]:     }
Dec 05 02:08:42 compute-0 mystifying_tesla[441729]: }
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.563 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.571 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:42 compute-0 systemd[1]: libpod-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Deactivated successfully.
Dec 05 02:08:42 compute-0 podman[441709]: 2025-12-05 02:08:42.575838494 +0000 UTC m=+1.529406640 container died b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec 05 02:08:42 compute-0 systemd[1]: libpod-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Consumed 1.199s CPU time.
Dec 05 02:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe-merged.mount: Deactivated successfully.
Dec 05 02:08:42 compute-0 podman[441709]: 2025-12-05 02:08:42.653615164 +0000 UTC m=+1.607183320 container remove b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:08:42 compute-0 systemd[1]: libpod-conmon-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Deactivated successfully.
Dec 05 02:08:42 compute-0 sudo[441583]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 979fbe6d-41dc-4d89-a08e-1e4cee8a64de does not exist
Dec 05 02:08:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b88c6ea8-1d4b-4f62-bdc3-92f9a8685a93 does not exist
Dec 05 02:08:42 compute-0 sudo[441995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:08:42 compute-0 sudo[441995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:42 compute-0 sudo[441995]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.832 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.851 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.852 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance network_info: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.854 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.856 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.862 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start _get_guest_xml network_info=[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.872 349552 WARNING nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.886 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.888 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:08:42 compute-0 sudo[442022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.896 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:08:42 compute-0 sudo[442022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.898 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.900 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:08:42 compute-0 sudo[442022]: pam_unix(sudo:session): session closed for user root
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.902 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.904 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.905 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.906 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.907 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.908 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.910 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.913 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.913 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.914 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.915 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.919 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135350042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.049 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.051 349552 DEBUG nova.virt.libvirt.vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.052 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.053 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.054 349552 DEBUG nova.objects.instance [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.069 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <uuid>a2605a46-d779-4fc3-aeff-1e040dbcf17d</uuid>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <name>instance-00000006</name>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:name>tempest-ServersTestJSON-server-1341674106</nova:name>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:08:41</nova:creationTime>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:user uuid="5e8484f22ce84af99708d2e728179b92">tempest-ServersTestJSON-244710502-project-member</nova:user>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:project uuid="159039e5ad4a46a7be912cd9756c76c5">tempest-ServersTestJSON-244710502</nova:project>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <nova:port uuid="1eebaade-abb1-412c-95f2-2b7240026f85">
Dec 05 02:08:43 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <system>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="serial">a2605a46-d779-4fc3-aeff-1e040dbcf17d</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="uuid">a2605a46-d779-4fc3-aeff-1e040dbcf17d</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </system>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <os>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </os>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <features>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </features>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk">
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config">
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:43 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:af:f6:1b"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <target dev="tap1eebaade-ab"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/console.log" append="off"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <video>
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </video>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:08:43 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:08:43 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:08:43 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:08:43 compute-0 nova_compute[349548]: </domain>
Dec 05 02:08:43 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.070 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Preparing to wait for external event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.072 349552 DEBUG nova.virt.libvirt.vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.072 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.073 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.073 349552 DEBUG os_vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.074 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.075 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.079 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eebaade-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.079 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1eebaade-ab, col_values=(('external_ids', {'iface-id': '1eebaade-abb1-412c-95f2-2b7240026f85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:f6:1b', 'vm-uuid': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:08:43 compute-0 NetworkManager[49092]: <info>  [1764900523.0840] manager: (tap1eebaade-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.092 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.093 349552 INFO os_vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab')
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.168 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.169 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.170 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No VIF found with MAC fa:16:3e:af:f6:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.171 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Using config drive
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.216 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720028422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.415 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.453 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.463 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.695 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating config drive at /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config
Dec 05 02:08:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:08:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1135350042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1720028422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.702 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6dj8d_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.848 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6dj8d_8" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
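[editorial note] The two entries above show the config drive being assembled: Nova writes the metadata tree into a temp directory, then packs it into an ISO9660 image labelled config-2. A minimal stand-alone sketch of the same invocation, assuming mkisofs is installed; the source directory and output path below are placeholders, while the flags are copied verbatim from the logged command.

    # Hypothetical re-run of the config-drive build logged above; only
    # the flags come from the log, the paths are illustrative.
    import subprocess

    def build_config_drive(src_dir: str, out_iso: str) -> None:
        subprocess.run([
            "/usr/bin/mkisofs",
            "-o", out_iso,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
            "-quiet", "-J", "-r",
            "-V", "config-2",   # volume label that cloud-init probes for
            src_dir,
        ], check=True)

    build_config_drive("/tmp/example_metadata", "/tmp/disk.config")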
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.895 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.905 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668676317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.964 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
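[editorial note] The repeated `ceph mon dump --format=json` round trips are how Nova's RBD backend discovers monitor endpoints before templating the `<host name=... port=.../>` elements that appear in the domain XML below. A rough equivalent, assuming oslo.concurrency is importable and the client keyring works; the JSON layout varies across Ceph releases, so the parsing here is an assumption, not Nova's actual rbd_utils code.

    # Illustrative monitor discovery; "addr" values typically look like
    # "192.168.122.100:6789/0", so the trailing nonce is stripped.
    import json
    from oslo_concurrency import processutils

    def get_mon_addrs(client="openstack", conf="/etc/ceph/ceph.conf"):
        out, _err = processutils.execute(
            "ceph", "mon", "dump", "--format=json",
            "--id", client, "--conf", conf)
        return [m["addr"].rsplit("/", 1)[0]
                for m in json.loads(out)["mons"]]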
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.966 349552 DEBUG nova.virt.libvirt.vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.967 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.969 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.970 349552 DEBUG nova.objects.instance [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'pci_devices' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.005 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <uuid>939ae9f2-b89c-4a19-96de-ab4dfc882a35</uuid>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <name>instance-00000007</name>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-604018291</nova:name>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:08:42</nova:creationTime>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:user uuid="3439b5cde2ff4830bb0294f007842282">tempest-AttachInterfacesUnderV243Test-532006644-project-member</nova:user>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:project uuid="70b71e0f6ffe47ed86a910f90d71557a">tempest-AttachInterfacesUnderV243Test-532006644</nova:project>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <nova:port uuid="2ac46e0a-6888-440f-b155-d4b0e8677304">
Dec 05 02:08:44 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <system>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="serial">939ae9f2-b89c-4a19-96de-ab4dfc882a35</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="uuid">939ae9f2-b89c-4a19-96de-ab4dfc882a35</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </system>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <os>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </os>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <features>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </features>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk">
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config">
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:44 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:ca:ba:4f"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <target dev="tap2ac46e0a-68"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/console.log" append="off"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <video>
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </video>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:08:44 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:08:44 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:08:44 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:08:44 compute-0 nova_compute[349548]: </domain>
Dec 05 02:08:44 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
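[editorial note] The domain XML that ends here is handed to libvirt in the next step. A simplified sketch of that hand-off using the libvirt Python bindings; Nova actually goes through its own Guest wrapper with flags and error handling, so this only shows the core calls.

    # Bare-bones "define the domain, then boot it"; the xml string
    # would be the <domain> document logged above.
    import libvirt

    def launch(xml: str) -> str:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)   # persist instance-00000007
            dom.create()                # power it on
            return dom.UUIDString()
        finally:
            conn.close()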
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.007 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Preparing to wait for external event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.008 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.009 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.010 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
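[editorial note] The acquire/release pair above is oslo.concurrency's named-lock helper serialising event registration per instance; the lock name follows the '<instance-uuid>-events' convention visible in the log. A tiny usage sketch, with a placeholder body standing in for _create_or_get_event:

    # Same named-lock primitive as in the log entries above.
    from oslo_concurrency import lockutils

    uuid = "939ae9f2-b89c-4a19-96de-ab4dfc882a35"
    with lockutils.lock(f"{uuid}-events"):
        # bookkeeping for the network-vif-plugged event would go here
        pass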
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.012 349552 DEBUG nova.virt.libvirt.vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.013 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.015 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.017 349552 DEBUG os_vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.019 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.021 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.026 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.027 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ac46e0a-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.029 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ac46e0a-68, col_values=(('external_ids', {'iface-id': '2ac46e0a-6888-440f-b155-d4b0e8677304', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:ba:4f', 'vm-uuid': '939ae9f2-b89c-4a19-96de-ab4dfc882a35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
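[editorial note] The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) amount to a single atomic ovs-vsctl call. A hedged sketch that shells out to ovs-vsctl instead of driving the OVSDB IDL; the iface-id, attached-mac and vm-uuid values are copied from the logged transaction.

    # CLI equivalent of the logged plug transaction; "--" chains both
    # operations into one OVSDB transaction, much as ovsdbapp does.
    import subprocess

    def plug_vif(bridge, dev, iface_id, mac, vm_uuid):
        subprocess.run([
            "ovs-vsctl",
            "--may-exist", "add-port", bridge, dev,
            "--", "set", "Interface", dev,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ], check=True)

    plug_vif("br-int", "tap2ac46e0a-68",
             "2ac46e0a-6888-440f-b155-d4b0e8677304",
             "fa:16:3e:ca:ba:4f",
             "939ae9f2-b89c-4a19-96de-ab4dfc882a35")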
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.0353] manager: (tap2ac46e0a-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.037 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.046 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.049 349552 INFO os_vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68')
Dec 05 02:08:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 179 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 158 op/s
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.109 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.110 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.110 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No VIF found with MAC fa:16:3e:ca:ba:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.111 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Using config drive
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.155 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.189 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.190 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deleting local config drive /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config because it was imported into RBD.
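[editorial note] The import-then-delete sequence above keeps the config drive only in the vms RBD pool once `rbd import` has returned 0. A compact sketch of the same flow; pool, image name, client id and conf path are taken from the logged command, and the local path is whatever mkisofs produced.

    # Push the locally built ISO into RBD, then drop the local copy;
    # os.unlink only runs if the import exited successfully.
    import os
    import subprocess

    def import_config_drive(local_iso, image,
                            pool="vms", client="openstack",
                            conf="/etc/ceph/ceph.conf"):
        subprocess.run([
            "rbd", "import", "--pool", pool, local_iso, image,
            "--image-format=2", "--id", client, "--conf", conf,
        ], check=True)
        os.unlink(local_iso)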
Dec 05 02:08:44 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.233 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Successfully created port: a240e2ef-1773-4509-ac04-eae1f5d36e08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:08:44 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 02:08:44 compute-0 kernel: tap1eebaade-ab: entered promiscuous mode
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.3601] manager: (tap1eebaade-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00066|binding|INFO|Claiming lport 1eebaade-abb1-412c-95f2-2b7240026f85 for this chassis.
Dec 05 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00067|binding|INFO|1eebaade-abb1-412c-95f2-2b7240026f85: Claiming fa:16:3e:af:f6:1b 10.100.0.5
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.381 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:f6:1b 10.100.0.5'], port_security=['fa:16:3e:af:f6:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '159039e5ad4a46a7be912cd9756c76c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4f1e166-f717-4795-a420-f74c256dc7dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec639c0d-4f01-43c3-a93f-8a1059f20fc9, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1eebaade-abb1-412c-95f2-2b7240026f85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.384 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1eebaade-abb1-412c-95f2-2b7240026f85 in datapath 5a020a22-53e0-4ddc-b74b-9b343d75de26 bound to our chassis
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.387 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a020a22-53e0-4ddc-b74b-9b343d75de26
Dec 05 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00068|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 ovn-installed in OVS
Dec 05 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00069|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 up in Southbound
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.406 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[310034ff-554d-4b3f-8289-c61c5deb6b90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.407 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5a020a22-51 in ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.410 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5a020a22-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.410 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17e7adea-8aae-4abb-ac77-9cdac72f2552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.412 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9c9c77-4181-4a76-8125-dca886ea2368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
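[editorial note] Provisioning the metadata datapath, as the agent is doing in the entries above, reduces to creating a network namespace plus a veth pair whose inner end is moved into it. A sketch using plain ip(8) commands; neutron performs these steps through pyroute2 behind privsep, and the namespace and device names below are the ones from the log.

    # ip(8) equivalent of the agent's namespace/veth provisioning.
    import subprocess

    ns = "ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26"
    outer, inner = "tap5a020a22-50", "tap5a020a22-51"

    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "-n", ns, "link", "set", inner, "up"],
        ["ip", "link", "set", outer, "up"],
    ):
        subprocess.run(cmd, check=True)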
Dec 05 02:08:44 compute-0 systemd-machined[138700]: New machine qemu-6-instance-00000006.
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.431 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[507cedab-fbeb-4434-9173-41d897db52dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 05 02:08:44 compute-0 systemd-udevd[442229]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.468 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d198da03-0739-4f3b-a703-8d0321b7a351]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.4866] device (tap1eebaade-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.4875] device (tap1eebaade-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.516 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[c57ddabc-6f54-4f76-bbf8-3bfa1e4a1832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 systemd-udevd[442234]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.5266] manager: (tap5a020a22-50): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.526 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa99a84-8e10-466b-bfc5-6a1f2c9db739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.567 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[99c80ede-9ded-4f1a-b29d-e5f34b0ba727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.571 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[435393b3-aa07-4519-972e-3583b6da8545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.6035] device (tap5a020a22-50): carrier: link connected
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.611 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0efc2be5-adf5-43b8-9555-9d7aec77b4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.636 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[66b13002-6001-4e38-99de-45e87ac59f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a020a22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:49:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661126, 'reachable_time': 44617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442263, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e51ec097-1236-47d8-a55f-482b91305146]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:49f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661126, 'tstamp': 661126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442264, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.665 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.666 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.680 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1f1532-ab36-418d-b2ae-71749de82e66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a020a22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:49:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661126, 'reachable_time': 44617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442265, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.698 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1668676317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:44 compute-0 ceph-mon[192914]: pgmap v1790: 321 pgs: 321 active+clean; 179 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 158 op/s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.714 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[aebb8eae-8f29-4405-a5a7-28f92576f535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.809 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f422ca83-3489-4ed9-bc43-df94ec43e7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a020a22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a020a22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:44 compute-0 kernel: tap5a020a22-50: entered promiscuous mode
Dec 05 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.8152] manager: (tap5a020a22-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.817 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.821 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.822 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a020a22-50, col_values=(('external_ids', {'iface-id': '2395c111-a45b-4516-ba09-9b57be3b16f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.829 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00070|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.830 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.832 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[86781aaf-fcbc-4b60-b4dc-a85c525ab507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.833 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-5a020a22-53e0-4ddc-b74b-9b343d75de26
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID 5a020a22-53e0-4ddc-b74b-9b343d75de26
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.833 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'env', 'PROCESS_TAG=haproxy-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5a020a22-53e0-4ddc-b74b-9b343d75de26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.837 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating config drive at /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.852 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xjzt62c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.962 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900524.9616396, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.963 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Started (Lifecycle Event)
Dec 05 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.989 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xjzt62c" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.019 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.026 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.056 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.063 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900524.9664264, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.063 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Paused (Lifecycle Event)
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.089 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.095 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.122 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.285 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.286 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deleting local config drive /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config because it was imported into RBD.
Dec 05 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.342163692 +0000 UTC m=+0.102719790 container create eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:08:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:08:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:08:45 compute-0 kernel: tap2ac46e0a-68: entered promiscuous mode
Dec 05 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.3720] manager: (tap2ac46e0a-68): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec 05 02:08:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:08:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00071|binding|INFO|Claiming lport 2ac46e0a-6888-440f-b155-d4b0e8677304 for this chassis.
Dec 05 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00072|binding|INFO|2ac46e0a-6888-440f-b155-d4b0e8677304: Claiming fa:16:3e:ca:ba:4f 10.100.0.11
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.298459167 +0000 UTC m=+0.059015275 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.390 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:ba:4f 10.100.0.11'], port_security=['fa:16:3e:ca:ba:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '939ae9f2-b89c-4a19-96de-ab4dfc882a35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd91b173-28fd-4506-a2d4-b70d7da34ab9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1a9bd25-2abf-40fe-aac7-26f2653ba067, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2ac46e0a-6888-440f-b155-d4b0e8677304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.3993] device (tap2ac46e0a-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00073|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 ovn-installed in OVS
Dec 05 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00074|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 up in Southbound
Dec 05 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.4006] device (tap2ac46e0a-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:45 compute-0 systemd[1]: Started libpod-conmon-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope.
Dec 05 02:08:45 compute-0 systemd-machined[138700]: New machine qemu-7-instance-00000007.
Dec 05 02:08:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:45 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec 05 02:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12546ccb3acfced01c79d379c89ef48eed6791003c99536b68dbe8f03ece420/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.473011139 +0000 UTC m=+0.233567327 container init eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.484969355 +0000 UTC m=+0.245525483 container start eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 02:08:45 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : New worker (442425) forked
Dec 05 02:08:45 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : Loading success.
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.569 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac46e0a-6888-440f-b155-d4b0e8677304 in datapath 77ae1103-3871-4354-8e08-09bb5c0c1ad1 bound to our chassis
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.574 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.591 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5d8922-cf72-4d0e-9ee7-5cf9896eec94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.594 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77ae1103-31 in ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.598 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77ae1103-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.598 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5475fdc8-a330-40c9-932e-886fecd16a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.601 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd3d0bc-50c6-467a-9ee7-3f33cbe5802d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.620 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[2abcb8d2-0807-4835-b1ff-81d7ecd89264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.639 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f95a1d30-3138-4754-9e5f-3ffab40f29c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.685 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3258a4f0-c1eb-46ce-8f09-a12287c61464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.6992] manager: (tap77ae1103-30): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.698 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cc094df5-9e68-4095-aafa-b162bf9af598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.711 349552 DEBUG nova.compute.manager [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.711 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.713 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.713 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.714 349552 DEBUG nova.compute.manager [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Processing event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.716 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:08:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:08:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.724 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900525.7241232, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.724 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Resumed (Lifecycle Event)
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.729 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.747 349552 INFO nova.virt.libvirt.driver [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance spawned successfully.
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.747 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.752 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.759 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.759 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e04232-9da8-4c0e-8bb6-3a144de29ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.770 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a1992c5e-6303-47eb-b988-c9de5f8bcedc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.781 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.787 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.788 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.790 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.791 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.792 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.793 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.8159] device (tap77ae1103-30): carrier: link connected
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.829 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d5128596-697c-46cb-922f-feb9a6248884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.856 349552 INFO nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 11.23 seconds to spawn the instance on the hypervisor.
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.856 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.864 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e8384866-f7b6-4db0-bef6-d2a164ff65cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ae1103-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661247, 'reachable_time': 25682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442481, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.900 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7eb39f-c950-4c01-9915-bd7de547198b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:883e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661247, 'tstamp': 661247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442485, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.916 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a878b7c-eacd-4160-9cc3-bada338ac048]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ae1103-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661247, 'reachable_time': 25682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442489, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.954 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee2f2af-9814-48b8-b488-86fe2301cd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.960 349552 INFO nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 12.39 seconds to build instance.
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.985 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.998 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900525.9983442, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.998 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Started (Lifecycle Event)
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.017 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.023 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900526.0002499, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.023 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Paused (Lifecycle Event)
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.034 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updated VIF entry in instance network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.034 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.041 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1423e842-42e5-4067-8d88-3d263262feb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.043 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ae1103-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.044 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.044 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77ae1103-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.045 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:46 compute-0 kernel: tap77ae1103-30: entered promiscuous mode
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.046 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:46 compute-0 NetworkManager[49092]: <info>  [1764900526.0480] manager: (tap77ae1103-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.051 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.054 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77ae1103-30, col_values=(('external_ids', {'iface-id': '5f3160d9-2dc7-4f0c-9f4e-c46a8a847823'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.055 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:46 compute-0 ovn_controller[89286]: 2025-12-05T02:08:46Z|00075|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.060 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.062 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3663ec57-183e-4d3d-a8b5-30acdf41c952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.063 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID 77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.064 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'env', 'PROCESS_TAG=haproxy-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77ae1103-3871-4354-8e08-09bb5c0c1ad1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.068 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.077 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 196 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.6 MiB/s wr, 152 op/s
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.092 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] During sync_power_state the instance has a pending task (spawning). Skip.
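
During spawn, libvirt briefly pauses and then resumes the guest, and each transition comes back to nova as a lifecycle event. handle_lifecycle_event compares the hypervisor-reported power state (3, PAUSED) against the database value (0, NOSTATE), but defers any correction while a task still owns the instance, hence the "pending task (spawning). Skip." above. A minimal sketch of that decision, using nova's power-state codes but otherwise simplified:

# nova.compute.power_state codes: 0=NOSTATE, 1=RUNNING, 3=PAUSED
NOSTATE, RUNNING, PAUSED = 0, 1, 3

def sync_power_state(db_state, vm_state, task_state):
    # While a task such as 'spawning' is in flight, the task itself will
    # record the final state, so the event-driven sync backs off.
    if task_state is not None:
        return "skip: pending task (%s)" % task_state
    if db_state != vm_state:
        return "update DB: %s -> %s" % (db_state, vm_state)
    return "in sync"

print(sync_power_state(NOSTATE, PAUSED, "spawning"))  # the skip logged above
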
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.325 349552 DEBUG nova.compute.manager [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.325 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
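
The three oslo_concurrency lines above are the library's standard named-lock pattern: a per-instance "<uuid>-events" lock serializes access to the instance's event queue, with the wait and hold durations logged on acquire and release. A rough equivalent with plain threading (a simplification, not oslo.concurrency itself):

import threading
import time
from contextlib import contextmanager

_locks = {}
_registry_guard = threading.Lock()

@contextmanager
def named_lock(name):
    # One lock object per name, created on first use.
    with _registry_guard:
        lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    lock.acquire()
    print('Lock "%s" acquired :: waited %.3fs' % (name, time.monotonic() - t0))
    t1 = time.monotonic()
    try:
        yield
    finally:
        lock.release()
        print('Lock "%s" released :: held %.3fs' % (name, time.monotonic() - t1))

with named_lock("939ae9f2-b89c-4a19-96de-ab4dfc882a35-events"):
    pass  # pop the instance event here
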
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG nova.compute.manager [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Processing event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.327 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
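
This is the VIF-plugging handshake: the spawn thread registers a waiter for network-vif-plugged before starting the guest, Neutron posts the event through nova's external-events API once OVN binds the port, and the waiter is released (here after 0 seconds because the event had already arrived). The later "Received unexpected event ... vm_state active" warnings are the same machinery with nobody waiting: duplicate notifications arriving after the instance is active find no matching waiter and are merely logged. A minimal sketch of the handshake, using threading.Event for brevity (nova itself runs under eventlet):

import threading

_waiters = {}  # (instance_uuid, event_name) -> threading.Event

def prepare_wait(instance_uuid, event_name):
    ev = threading.Event()
    _waiters[(instance_uuid, event_name)] = ev
    return ev

def deliver(instance_uuid, event_name):
    ev = _waiters.pop((instance_uuid, event_name), None)
    if ev is None:
        print("unexpected event %s for %s" % (event_name, instance_uuid))
        return
    ev.set()

uuid = "939ae9f2-b89c-4a19-96de-ab4dfc882a35"
waiter = prepare_wait(uuid, "network-vif-plugged")
deliver(uuid, "network-vif-plugged")   # Neutron's notification arrives
waiter.wait(timeout=300)               # spawn thread unblocks immediately
deliver(uuid, "network-vif-plugged")   # a duplicate -> "unexpected event"
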
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.335 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900526.335343, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.336 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Resumed (Lifecycle Event)
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.338 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.347 349552 INFO nova.virt.libvirt.driver [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance spawned successfully.
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.347 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.372 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.380 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.409 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.450 349552 INFO nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 8.39 seconds to spawn the instance on the hypervisor.
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.451 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.537 349552 INFO nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 10.68 seconds to build instance.
Dec 05 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.553 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
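
The arithmetic in these lines is worth a glance: of the 10.68 s total build, 8.39 s was the hypervisor spawn itself, leaving roughly 2.3 s of pre-spawn overhead (network allocation, image and block-device preparation), and the build lock was held 10.788 s, almost exactly the build duration, confirming the entire build ran under the per-instance lock.
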
Dec 05 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.590029649 +0000 UTC m=+0.096050584 container create 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.544674727 +0000 UTC m=+0.050695702 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:08:46 compute-0 systemd[1]: Started libpod-conmon-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope.
Dec 05 02:08:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 02:08:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 02:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66e076ce0b97b1ffb0be792f84404fb2f83ab9c6ac5cd8cc44b4f6206b0bf01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:46 compute-0 ceph-mon[192914]: pgmap v1791: 321 pgs: 321 active+clean; 196 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.6 MiB/s wr, 152 op/s
Dec 05 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.735818365 +0000 UTC m=+0.241839380 container init 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.752019389 +0000 UTC m=+0.258040354 container start 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:08:46 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : New worker (442559) forked
Dec 05 02:08:46 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : Loading success.
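
The haproxy process itself runs inside a podman container named neutron-haproxy-ovnmeta-<network-id>; the surrounding journal entries show the complete container lifecycle (image pull, create, init, start under a libpod-conmon scope) before haproxy's own master/worker startup notices. When auditing many of these, the lifecycle can be recovered from an exported journal with a small regex; a hypothetical helper (the export filename and function are illustrative, not part of any shipped tool):

import re
from collections import defaultdict

EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) .*?container (?P<event>\w+) "
    r"[0-9a-f]{64} \(image=[^,]+, name=(?P<name>[^,)]+)")

def container_lifecycles(journal_text):
    # Maps container name -> [(timestamp, event), ...] in log order.
    out = defaultdict(list)
    for m in EVENT_RE.finditer(journal_text):
        out[m.group("name")].append((m.group("ts"), m.group("event")))
    return out

text = open("journal-export.txt").read()  # hypothetical export of this log
for name, events in container_lifecycles(text).items():
    print(name, [e for _, e in events])   # e.g. ['create', 'init', 'start']
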
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.933 349552 DEBUG nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.933 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.934 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.935 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.936 349552 DEBUG nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.936 349552 WARNING nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received unexpected event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with vm_state active and task_state None.
Dec 05 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 05 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 05 02:08:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 05 02:08:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.5 MiB/s wr, 184 op/s
Dec 05 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.742 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Successfully updated port: a240e2ef-1773-4509-ac04-eae1f5d36e08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.762 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.763 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.763 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.034 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.042 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:08:49 compute-0 ceph-mon[192914]: osdmap e137: 3 total, 3 up, 3 in
Dec 05 02:08:49 compute-0 ceph-mon[192914]: pgmap v1793: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.5 MiB/s wr, 184 op/s
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.147 349552 DEBUG nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.148 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.149 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.149 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.150 349552 DEBUG nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.150 349552 WARNING nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received unexpected event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with vm_state active and task_state None.
Dec 05 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00076|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec 05 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00077|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00078|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec 05 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00079|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.743 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.888 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:49 compute-0 NetworkManager[49092]: <info>  [1764900529.8907] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 05 02:08:49 compute-0 NetworkManager[49092]: <info>  [1764900529.8979] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:50 compute-0 ovn_controller[89286]: 2025-12-05T02:08:50Z|00080|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec 05 02:08:50 compute-0 ovn_controller[89286]: 2025-12-05T02:08:50Z|00081|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
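
ovn-controller logs the same "Releasing lport" pair repeatedly (00076-00081 above, and again at 00082/00083 below) because it re-evaluates port bindings on every southbound-DB change while the ports are being torn down. Repeated claim/release lines for a single lport are a useful signal when chasing ports that bounce between chassis; a quick tally over an exported journal (an illustrative helper, not a shipped tool):

import re
from collections import Counter

BINDING_RE = re.compile(r"\|binding\|INFO\|(Claiming|Releasing) lport (\S+)")

def binding_counts(journal_text):
    # Counter keyed by (lport, action); a high count for one lport means
    # ovn-controller kept re-processing its binding.
    return Counter((lport, action)
                   for action, lport in BINDING_RE.findall(journal_text))
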
Dec 05 02:08:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 6.4 MiB/s wr, 158 op/s
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.416 349552 DEBUG nova.compute.manager [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.422 349552 DEBUG nova.compute.manager [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing instance network info cache due to event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.423 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.424 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.425 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.500 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.527 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.528 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance network_info: |[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.533 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start _get_guest_xml network_info=[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
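
The network_info blob repeated in the three entries above is the JSON structure nova caches per instance; the operationally interesting fields (device name, MAC, fixed IPs, MTU) sit several levels deep. A short parse of the structure as logged, trimmed to those fields:

import json

network_info = json.loads("""
[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08",
  "address": "fa:16:3e:16:81:87",
  "devname": "tapa240e2ef-17",
  "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.10",
                                    "type": "fixed"}]}],
              "meta": {"mtu": 1442, "tunneled": true}}}]
""")

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print("%s %s ips=%s mtu=%s" % (vif["devname"], vif["address"], ips,
                                   vif["network"]["meta"]["mtu"]))
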
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.545 349552 WARNING nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.554 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.555 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.564 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.565 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.566 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.567 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.568 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.569 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.570 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.574 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.575 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.577 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.578 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.581 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.583 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.584 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
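
These nova.virt.hardware lines walk the CPU-topology search: with no flavor or image constraints (limits and preferences all 0:0:0, caps at 65536), the only (sockets, cores, threads) triple whose product equals the single requested vCPU is 1:1:1. The search is essentially a bounded factorization; a simplified sketch (not nova's actual ordering or preference logic):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Yield every (sockets, cores, threads) whose product is exactly vcpus.
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the lone result above
print(list(possible_topologies(4)))  # six candidate shapes for 4 vCPUs
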
Dec 05 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.589 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744661611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.116 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
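
Before touching RBD, nova's libvirt driver shells out to the ceph CLI to discover monitor addresses, and each lookup costs a full round trip (0.527 s here, 0.570 s below). The logged command can be reproduced verbatim for troubleshooting; this assumes a reachable cluster and the same client.openstack keyring:

import json
import subprocess

cmd = ["ceph", "mon", "dump", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
mons = json.loads(out)["mons"]
print([(m["name"], m.get("addr")) for m in mons])  # monitor name/address pairs
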
Dec 05 02:08:51 compute-0 ceph-mon[192914]: pgmap v1794: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 6.4 MiB/s wr, 158 op/s
Dec 05 02:08:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1744661611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.176 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.188 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.280 349552 DEBUG nova.compute.manager [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.282 349552 DEBUG nova.compute.manager [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing instance network info cache due to event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.283 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.284 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.285 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:08:51 compute-0 ovn_controller[89286]: 2025-12-05T02:08:51Z|00082|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec 05 02:08:51 compute-0 ovn_controller[89286]: 2025-12-05T02:08:51Z|00083|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:08:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427824560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.758 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.759 349552 DEBUG nova.virt.libvirt.vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.760 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.761 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.762 349552 DEBUG nova.objects.instance [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.781 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <uuid>59e35a32-9023-4e49-be56-9da10df3027f</uuid>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <name>instance-00000008</name>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:name>tempest-ServerActionsTestJSON-server-1678320742</nova:name>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:08:50</nova:creationTime>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:user uuid="b4745812b7eb47908ded25b1eb7c7328">tempest-ServerActionsTestJSON-1914764435-project-member</nova:user>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:project uuid="dd34a6a62cf94436a2b836fa4f49c4fa">tempest-ServerActionsTestJSON-1914764435</nova:project>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <nova:port uuid="a240e2ef-1773-4509-ac04-eae1f5d36e08">
Dec 05 02:08:51 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <system>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="serial">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="uuid">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </system>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <os>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </os>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <features>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </features>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk">
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk.config">
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </source>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:08:51 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:16:81:87"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <target dev="tapa240e2ef-17"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log" append="off"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <video>
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </video>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:08:51 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:08:51 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:08:51 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:08:51 compute-0 nova_compute[349548]: </domain>
Dec 05 02:08:51 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.782 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Preparing to wait for external event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.782 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
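The Acquiring/acquired/released triple logged above is oslo.concurrency's standard instrumentation: prepare_for_instance_event serializes access to the per-instance event map under a named in-process lock ("<instance uuid>-events"), and the "inner" wrapper emits those DEBUG lines with the wait/held times. A minimal sketch of the same pattern (stock oslo.concurrency; lock name and body are illustrative):

    from oslo_concurrency import lockutils

    # Decorator form: the name below is what appears in the
    # 'Acquiring lock ... / acquired / "released"' DEBUG messages.
    @lockutils.synchronized('59e35a32-9023-4e49-be56-9da10df3027f-events')
    def create_or_get_event():
        pass  # mutate the shared per-instance event map

    # Equivalent context-manager form:
    with lockutils.lock('59e35a32-9023-4e49-be56-9da10df3027f-events'):
        pass  # critical section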
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG nova.virt.libvirt.vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG os_vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.788 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.789 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa240e2ef-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.789 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa240e2ef-17, col_values=(('external_ids', {'iface-id': 'a240e2ef-1773-4509-ac04-eae1f5d36e08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:81:87', 'vm-uuid': '59e35a32-9023-4e49-be56-9da10df3027f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:51 compute-0 NetworkManager[49092]: <info>  [1764900531.7926] manager: (tapa240e2ef-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.791 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.793 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.801 349552 INFO os_vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')
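The plug that just succeeded is visible a few entries up as a single OVSDB transaction: AddPortCommand attaches tapa240e2ef-17 to br-int, and DbSetCommand stamps the Interface row's external_ids with iface-id/attached-mac/vm-uuid so ovn-controller can later claim the port (as it does at 02:08:53 below). A rough stand-alone equivalent using ovsdbapp directly; os-vif wraps this behind its own plugin API, and the socket path here is an assumption:

    from ovs.db import idl as ovs_idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket
    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(connection.Connection(ovs_idl.Idl(OVSDB, helper), timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa240e2ef-17', may_exist=True))
        txn.add(api.db_set('Interface', 'tapa240e2ef-17', ('external_ids', {
            'iface-id': 'a240e2ef-1773-4509-ac04-eae1f5d36e08',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:16:81:87',
            'vm-uuid': '59e35a32-9023-4e49-be56-9da10df3027f'})))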
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.868 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No VIF found with MAC fa:16:3e:16:81:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Using config drive
Dec 05 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.904 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:52 compute-0 nova_compute[349548]: 2025-12-05 02:08:52.068 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Dec 05 02:08:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/427824560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
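The "mon dump" dispatch logged by ceph-mon is what librados clients (here Nova's RBD backend authenticating as client.openstack) issue when connecting or refreshing the monitor map. A sketch of issuing the same command through the Python rados binding, assuming /etc/ceph/ceph.conf and the client.openstack keyring are in place:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    # Same command the monitor logged: {"prefix": "mon dump", "format": "json"}
    ret, out, err = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    mons = json.loads(out)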
Dec 05 02:08:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.105 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating config drive at /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.112 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyee4by74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.172 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:53 compute-0 ceph-mon[192914]: pgmap v1795: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.173 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.174 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.175 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.175 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.178 349552 INFO nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Terminating instance
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.181 349552 DEBUG nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.245 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyee4by74" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:53 compute-0 kernel: tap1eebaade-ab (unregistering): left promiscuous mode
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.2973] device (tap1eebaade-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00084|binding|INFO|Releasing lport 1eebaade-abb1-412c-95f2-2b7240026f85 from this chassis (sb_readonly=0)
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00085|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 down in Southbound
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00086|binding|INFO|Removing iface tap1eebaade-ab ovn-installed in OVS
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.322 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.330 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:f6:1b 10.100.0.5'], port_security=['fa:16:3e:af:f6:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '159039e5ad4a46a7be912cd9756c76c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c4f1e166-f717-4795-a420-f74c256dc7dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec639c0d-4f01-43c3-a93f-8a1059f20fc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1eebaade-abb1-412c-95f2-2b7240026f85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.333 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1eebaade-abb1-412c-95f2-2b7240026f85 in datapath 5a020a22-53e0-4ddc-b74b-9b343d75de26 unbound from our chassis
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.337 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a020a22-53e0-4ddc-b74b-9b343d75de26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.339 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20ee3389-6bf4-47be-98c5-15727193ee4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.339 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 namespace which is not needed anymore
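The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry above is ovsdbapp's event machinery at work: the metadata agent registers RowEvent subclasses against the OVN Southbound Port_Binding table, and the chassis change on that row is what triggered the unbind and namespace teardown just logged. A bare-bones sketch of such an event class; the constructor arguments mirror the repr in the log, but the match logic here is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React to updates on Southbound Port_Binding rows."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only fire when the chassis column actually changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('lport %s changed, up=%s' % (row.logical_port, row.up))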
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.349 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config 59e35a32-9023-4e49-be56-9da10df3027f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:53 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 05 02:08:53 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 8.357s CPU time.
Dec 05 02:08:53 compute-0 systemd-machined[138700]: Machine qemu-6-instance-00000006 terminated.
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.380 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.424 349552 INFO nova.virt.libvirt.driver [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance destroyed successfully.
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.425 349552 DEBUG nova.objects.instance [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'resources' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:08:53 compute-0 podman[442674]: 2025-12-05 02:08:53.429534565 +0000 UTC m=+0.118795741 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:08:53 compute-0 podman[442672]: 2025-12-05 02:08:53.43793919 +0000 UTC m=+0.119276844 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.452 349552 DEBUG nova.virt.libvirt.vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:08:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.452 349552 DEBUG nova.network.os_vif_util [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.453 349552 DEBUG nova.network.os_vif_util [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.453 349552 DEBUG os_vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.458 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.458 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eebaade-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.469 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.472 349552 INFO os_vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab')
Dec 05 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : haproxy version is 2.8.14-c23fe91
Dec 05 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : path to executable is /usr/sbin/haproxy
Dec 05 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [WARNING]  (442418) : Exiting Master process...
Dec 05 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [ALERT]    (442418) : Current worker (442425) exited with code 143 (Terminated)
Dec 05 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [WARNING]  (442418) : All workers exited. Exiting... (0)
Dec 05 02:08:53 compute-0 systemd[1]: libpod-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope: Deactivated successfully.
Dec 05 02:08:53 compute-0 podman[442768]: 2025-12-05 02:08:53.531971746 +0000 UTC m=+0.060893868 container died eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4-userdata-shm.mount: Deactivated successfully.
Dec 05 02:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12546ccb3acfced01c79d379c89ef48eed6791003c99536b68dbe8f03ece420-merged.mount: Deactivated successfully.
Dec 05 02:08:53 compute-0 podman[442768]: 2025-12-05 02:08:53.634644894 +0000 UTC m=+0.163567026 container cleanup eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.645 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config 59e35a32-9023-4e49-be56-9da10df3027f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.646 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deleting local config drive /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config because it was imported into RBD.
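The config-drive sequence for instance 59e35a32 is now complete: mkisofs built the ISO under /var/lib/nova/instances/<uuid>/disk.config, rbd import pushed it into the vms pool as <uuid>_disk.config (matching the cdrom <source> in the domain XML above), and the local file was deleted. A condensed sketch of the same two subprocess calls via oslo.concurrency; this paraphrases what the log shows rather than Nova's actual helper, and omits the -publisher/-quiet flags:

    import os
    from oslo_concurrency import processutils

    def import_configdrive(uuid, staging_dir):
        iso = '/var/lib/nova/instances/%s/disk.config' % uuid
        # Build the ISO9660 config drive with volume label "config-2".
        processutils.execute('/usr/bin/mkisofs', '-o', iso, '-ldots',
                             '-allow-lowercase', '-allow-multidot', '-l',
                             '-J', '-r', '-V', 'config-2', staging_dir)
        # Import into Ceph so the guest reads it over RBD, then drop the local copy.
        processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                             '%s_disk.config' % uuid, '--image-format=2',
                             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        os.unlink(iso)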
Dec 05 02:08:53 compute-0 systemd[1]: libpod-conmon-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope: Deactivated successfully.
Dec 05 02:08:53 compute-0 kernel: tapa240e2ef-17: entered promiscuous mode
Dec 05 02:08:53 compute-0 systemd-udevd[442693]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7155] manager: (tapa240e2ef-17): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00087|binding|INFO|Claiming lport a240e2ef-1773-4509-ac04-eae1f5d36e08 for this chassis.
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00088|binding|INFO|a240e2ef-1773-4509-ac04-eae1f5d36e08: Claiming fa:16:3e:16:81:87 10.100.0.10
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7287] device (tapa240e2ef-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.728 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7326] device (tapa240e2ef-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:08:53 compute-0 podman[442818]: 2025-12-05 02:08:53.736923971 +0000 UTC m=+0.070593810 container remove eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00089|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 ovn-installed in OVS
Dec 05 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00090|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 up in Southbound
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.748 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.753 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[daadc2aa-9eb3-4cbe-87ad-1daa3c2ceef9]: (4, ('Fri Dec  5 02:08:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 (eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4)\neef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4\nFri Dec  5 02:08:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 (eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4)\neef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.756 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b811a762-f9eb-4097-b792-8c472bdc842c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.757 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a020a22-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.757 349552 DEBUG nova.compute.manager [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG nova.compute.manager [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 kernel: tap5a020a22-50: left promiscuous mode
Dec 05 02:08:53 compute-0 systemd-machined[138700]: New machine qemu-8-instance-00000008.
Dec 05 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.779 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.785 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c950d7ae-d134-478b-adc0-c92e93ed9c7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.801 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa222e3-bae8-461b-9972-1a7193e075fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.802 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c9ba52-ce8f-4031-883a-944969d51b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.819 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f4da9fa7-5711-4383-8f65-7f032f9cafdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661117, 'reachable_time': 27029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442844, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d5a020a22\x2d53e0\x2d4ddc\x2db74b\x2d9b343d75de26.mount: Deactivated successfully.
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.824 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.824 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[5e26e179-900e-4b73-b516-3e88d3e72f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.825 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.828 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.843 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[26762fef-8ce7-4680-a7cd-a18133017bdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.845 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa9bc378d-21 in ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.847 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa9bc378d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.847 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[18a361f2-8e3b-4700-8e57-9dabaf65024c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.849 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[114b1c3e-d8fb-4716-8071-97cab6b5e522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.863 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f20bf-24d0-402d-9908-3c9ba5d70cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.887 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[083c1e46-c3d1-496e-be73-71d5536667df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.923 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[94066268-2ed7-4010-843b-78cb21f87c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.930 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5b3db3-89d8-4a63-8950-550d4f539cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.9317] manager: (tapa9bc378d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.965 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[45d80916-41ce-4a15-9401-c431ff305fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.971 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[baf0449d-aced-47e4-a958-9c2b69b3bd1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.9910] device (tapa9bc378d-20): carrier: link connected
Dec 05 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.996 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[81e084ee-3a9b-44cd-9168-52fa03461fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2e90c673-8923-43cc-a2df-d53f6aadcbb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662065, 'reachable_time': 20102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442876, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.049 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[da3b7bb7-8b27-4960-b2b8-3460023bd738]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:feea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662065, 'tstamp': 662065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442877, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.077 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7f25a6-cc20-493d-8c5f-56a428d0538f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662065, 'reachable_time': 20102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442878, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.106 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.108 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.117 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updated VIF entry in instance network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.118 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.125 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb1b99d-c329-4b56-826b-a16e198c2fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.140 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.214 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[39896ae5-3631-40e0-b658-c2cf1c90f9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.215 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.216 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.216 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9bc378d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:54 compute-0 NetworkManager[49092]: <info>  [1764900534.2193] manager: (tapa9bc378d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec 05 02:08:54 compute-0 kernel: tapa9bc378d-20: entered promiscuous mode
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.223 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa9bc378d-20, col_values=(('external_ids', {'iface-id': '3d0916d7-6f03-4daf-8f3b-126228223c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:54 compute-0 ovn_controller[89286]: 2025-12-05T02:08:54Z|00091|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.250 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.253 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a86198-57bc-475d-9426-801cd8578d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.255 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.255 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'env', 'PROCESS_TAG=haproxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a9bc378d-2d4b-4990-99ce-02656b1fec0d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.285 349552 INFO nova.virt.libvirt.driver [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deleting instance files /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d_del
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.286 349552 INFO nova.virt.libvirt.driver [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deletion of /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d_del complete
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.369 349552 INFO nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 1.19 seconds to destroy the instance on the hypervisor.
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.370 349552 DEBUG oslo.service.loopingcall [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.379 349552 DEBUG nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.380 349552 DEBUG nova.network.neutron [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.389 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900534.3887367, 59e35a32-9023-4e49-be56-9da10df3027f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.390 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Started (Lifecycle Event)
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.409 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.417 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900534.3894296, 59e35a32-9023-4e49-be56-9da10df3027f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.418 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Paused (Lifecycle Event)
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.434 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.442 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.463 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.713252786 +0000 UTC m=+0.106433575 container create 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.670494407 +0000 UTC m=+0.063675276 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:08:54 compute-0 systemd[1]: Started libpod-conmon-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope.
Dec 05 02:08:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834861e6ee78dc388a1bf92deca51436b692390ae47802f4ad88169beea7eb85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.90250563 +0000 UTC m=+0.295686429 container init 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 05 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.915759072 +0000 UTC m=+0.308939861 container start 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 02:08:54 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : New worker (442973) forked
Dec 05 02:08:54 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : Loading success.
Dec 05 02:08:55 compute-0 ceph-mon[192914]: pgmap v1796: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Dec 05 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.538 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated VIF entry in instance network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.539 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.571 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:08:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 171 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 35 KiB/s wr, 192 op/s
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.166 349552 DEBUG nova.network.neutron [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.194 349552 INFO nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 1.81 seconds to deallocate network for instance.
Dec 05 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.203 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.205 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.254 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.255 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.376 349552 DEBUG oslo_concurrency.processutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.447 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.449 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.450 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.451 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.452 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Processing event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.453 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.454 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.455 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.456 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.457 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.458 349552 WARNING nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state building and task_state spawning.
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.460 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.481 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900536.4651842, 59e35a32-9023-4e49-be56-9da10df3027f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.485 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Resumed (Lifecycle Event)
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.491 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.510 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance spawned successfully.
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.510 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.525 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.548 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.557 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.558 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.558 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.559 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.560 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.560 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
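The six "Found default for ..." lines above are nova pinning hypervisor defaults onto image properties the instance left unset, so the guest keeps the same device models across rebuilds and migrations. A minimal sketch of that merge using the values logged above; register_undefined is a hypothetical stand-in for nova's _register_undefined_instance_details:

    # Defaults detected for instance 59e35a32, verbatim from the log:
    detected_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(instance_props: dict, defaults: dict) -> dict:
        """Record a default only for properties the instance did not set."""
        merged = dict(defaults)
        merged.update(instance_props)   # explicit image properties win
        return merged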
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.584 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (spawning). Skip.
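The sync line above compares the database's power_state 0 against the hypervisor's 1; the numeric codes come from nova.compute.power_state (listed here from memory, so treat them as an assumption):

    POWER_STATES = {
        0: "NOSTATE",    # DB value before the spawn completes
        1: "RUNNING",    # what libvirt reports after "Resumed"
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    # With task_state still "spawning", the sync handler defers to the
    # in-flight task instead of correcting the DB -- hence "Skip." above.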
Dec 05 02:08:56 compute-0 podman[443002]: 2025-12-05 02:08:56.683251154 +0000 UTC m=+0.091301750 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 05 02:08:56 compute-0 podman[443003]: 2025-12-05 02:08:56.68275571 +0000 UTC m=+0.091006042 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
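Each podman health_status line is the scheduled run of the healthcheck configured in that container's config_data ('test': '/openstack/healthcheck compute', and so on). The same check can be triggered by hand; a sketch, assuming the podman CLI is on PATH and using a container name from the log:

    import subprocess

    # Exit code 0 means the configured test passed (health_status=healthy).
    subprocess.run(["podman", "healthcheck", "run",
                    "ceilometer_agent_compute"], check=False)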
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.723 349552 DEBUG nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.723 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.724 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.724 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.725 349552 DEBUG nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.725 349552 WARNING nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received unexpected event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with vm_state deleted and task_state None.
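This warning closes a benign race: Neutron delivered network-vif-plugged for instance a2605a46 after its delete finished, so pop_instance_event found no registered waiter. The registry behind that is essentially a dict of threading primitives; a generic sketch of the pattern (names hypothetical, not nova's code):

    import threading

    _waiters: dict[tuple[str, str], threading.Event] = {}
    _lock = threading.Lock()

    def prepare(instance_uuid: str, event_name: str) -> threading.Event:
        """Register interest before triggering the external action."""
        ev = threading.Event()
        with _lock:
            _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_event(instance_uuid: str, event_name: str) -> bool:
        """Deliver an external event; False means nobody was waiting."""
        with _lock:
            ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            return False        # -> "Received unexpected event ..."
        ev.set()                # wakes the thread blocked on the wait
        return True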
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.757 349552 INFO nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 15.55 seconds to spawn the instance on the hypervisor.
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.757 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.835 349552 INFO nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 17.10 seconds to build instance.
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.850 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
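The Acquiring / acquired / "released" triplets at lockutils.py:404/409/423 are oslo.concurrency's synchronized decorator logging its wrapper, while the refresh_cache lines at 312/315/333 are its lock() context manager. The build path serializes on the instance UUID, roughly like this (a sketch, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("59e35a32-9023-4e49-be56-9da10df3027f")
    def locked_do_build_and_run_instance():
        ...  # the wrapper logged this lock held for 17.203s above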
Dec 05 02:08:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:08:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617628783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.884 349552 DEBUG oslo_concurrency.processutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
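The resource tracker refreshes Ceph capacity by shelling out to ceph df, which is what the mon audit lines above record. Re-running the logged command and reading the cluster totals (the "stats" keys are standard ceph df --format=json output):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True,
                         check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])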
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.892 349552 DEBUG nova.compute.provider_tree [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.920 349552 DEBUG nova.scheduler.client.report [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.962 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
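The inventory nova reported to Placement implies the capacity the scheduler can place against: (total - reserved) * allocation_ratio per resource class. Worked out from the data above:

    inventory = {   # copied from the set_inventory_for_provider line
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2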
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.009 349552 INFO nova.scheduler.client.report [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Deleted allocations for instance a2605a46-d779-4fc3-aeff-1e040dbcf17d
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.069 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.077 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:08:57 compute-0 ceph-mon[192914]: pgmap v1797: 321 pgs: 321 active+clean; 171 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 35 KiB/s wr, 192 op/s
Dec 05 02:08:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/617628783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.962 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.962 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.992 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
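The network_info blob cached above is plain JSON, so the addresses the guest is reachable on are two nested loops away. A small extractor, where vif stands for one element of the logged list:

    def addresses(vif: dict) -> list:
        """Collect (type, address) pairs from a cached VIF entry."""
        found = []
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                found.append((ip["type"], ip["address"]))
                for fip in ip.get("floating_ips", []):
                    found.append((fip["type"], fip["address"]))
        return found

    # For port 2ac46e0a-6888-440f-b155-d4b0e8677304 above this yields
    # [("fixed", "10.100.0.11"), ("floating", "192.168.122.202")].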
Dec 05 02:08:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:08:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 19 KiB/s wr, 197 op/s
Dec 05 02:08:58 compute-0 nova_compute[349548]: 2025-12-05 02:08:58.462 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:08:58 compute-0 nova_compute[349548]: 2025-12-05 02:08:58.678 349552 DEBUG nova.compute.manager [req-f30cd29f-ff82-4031-bc41-63cc7818f4ba req-10c24bfa-7eee-42eb-9432-2176db123970 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-deleted-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:08:58 compute-0 podman[443038]: 2025-12-05 02:08:58.739224781 +0000 UTC m=+0.146903539 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm)
Dec 05 02:08:59 compute-0 ceph-mon[192914]: pgmap v1798: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 19 KiB/s wr, 197 op/s
Dec 05 02:08:59 compute-0 podman[158197]: time="2025-12-05T02:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec 05 02:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9097 "" "Go-http-client/1.1"
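Those two GETs are a metrics collector polling podman's libpod REST API over its unix socket. The same containers/json query can be issued with only the standard library; the socket path below is an assumption (the rootful default):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over AF_UNIX; the host argument is ignored."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self) -> None:
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])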
Dec 05 02:09:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 165 op/s
Dec 05 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.008 349552 DEBUG nova.compute.manager [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG nova.compute.manager [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing instance network info cache due to event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.011 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:09:01 compute-0 ceph-mon[192914]: pgmap v1799: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 165 op/s
Dec 05 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
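These exporter errors are expected on a compute node: ovn-northd and the OVN database servers run on the controllers, so their *.ctl control sockets do not exist here and appctl has nothing to dial; the dpif-netdev calls additionally need a userspace datapath, while this host's VIFs show "datapath_type": "system". The socket lookup amounts to a glob over the daemons' rundirs (paths assumed from the packaged defaults):

    import glob

    for rundir in ("/var/run/openvswitch", "/var/run/ovn"):
        print(rundir, glob.glob(f"{rundir}/*.ctl"))
    # Which sockets appear depends on what runs on this host (and on what
    # is mounted into the exporter container).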
Dec 05 02:09:02 compute-0 nova_compute[349548]: 2025-12-05 02:09:02.070 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 16 KiB/s wr, 213 op/s
Dec 05 02:09:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.089 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated VIF entry in instance network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.090 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.111 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:09:03 compute-0 ceph-mon[192914]: pgmap v1800: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 16 KiB/s wr, 213 op/s
Dec 05 02:09:03 compute-0 ovn_controller[89286]: 2025-12-05T02:09:03Z|00092|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:09:03 compute-0 ovn_controller[89286]: 2025-12-05T02:09:03Z|00093|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.648 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 16 KiB/s wr, 129 op/s
Dec 05 02:09:05 compute-0 ceph-mon[192914]: pgmap v1801: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 16 KiB/s wr, 129 op/s
Dec 05 02:09:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Dec 05 02:09:07 compute-0 nova_compute[349548]: 2025-12-05 02:09:07.056 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:07 compute-0 nova_compute[349548]: 2025-12-05 02:09:07.072 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:07 compute-0 ceph-mon[192914]: pgmap v1802: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Dec 05 02:09:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec 05 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.419 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900533.418778, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.421 349552 INFO nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Stopped (Lifecycle Event)
Dec 05 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.448 349552 DEBUG nova.compute.manager [None req-93cafbe0-3a2a-4f56-b853-bae93cafa81c - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.469 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:08 compute-0 podman[443058]: 2025-12-05 02:09:08.684622563 +0000 UTC m=+0.087075552 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:09:08 compute-0 podman[443057]: 2025-12-05 02:09:08.702162294 +0000 UTC m=+0.091330951 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:09:08 compute-0 podman[443064]: 2025-12-05 02:09:08.726007863 +0000 UTC m=+0.097526625 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Dec 05 02:09:08 compute-0 podman[443059]: 2025-12-05 02:09:08.781000173 +0000 UTC m=+0.158708818 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:09:09 compute-0 ceph-mon[192914]: pgmap v1803: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec 05 02:09:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:09:11 compute-0 ceph-mon[192914]: pgmap v1804: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:09:12 compute-0 nova_compute[349548]: 2025-12-05 02:09:12.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:09:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:13 compute-0 ceph-mon[192914]: pgmap v1805: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.473 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.600 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.601 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.624 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.711 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.712 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.726 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.727 349552 INFO nova.compute.claims [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.885 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 16 op/s
Dec 05 02:09:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:09:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109865076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.469 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.484 349552 DEBUG nova.compute.provider_tree [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.509 349552 DEBUG nova.scheduler.client.report [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.531 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.532 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.573 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.573 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.595 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.611 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.729 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.731 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.732 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating image(s)
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.773 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.815 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.845 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.853 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.900 349552 DEBUG nova.policy [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7eb322b6163b466fb7721796e0d10c1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7771751d84d348319b2c3d632191b59c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.938 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
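Before reusing the cached base image, nova probes it with qemu-img under oslo's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the compute service. The logged command, reproduced and parsed (format and virtual-size are standard qemu-img JSON keys):

    import json
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "ffce62741223dc66a92b5b29c88e68e15f46caf3")
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", base, "--force-share", "--output=json"]
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info["format"], info["virtual-size"])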
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.938 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.939 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.939 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.978 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.985 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 86d3faa9-af9e-47de-bc0f-3e211167604f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:15 compute-0 ceph-mon[192914]: pgmap v1806: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 16 op/s
Dec 05 02:09:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4109865076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.402 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 86d3faa9-af9e-47de-bc0f-3e211167604f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.556 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] resizing rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
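With the Ceph backend, "Creating image(s)" comes down to importing the cached base file into the vms pool and growing it to the flavor's root disk (1073741824 bytes = 1 GiB here). A sketch of both steps: the import arguments mirror the logged command, while the resize is shown via the rbd CLI for brevity (nova itself resizes through the rbd python binding):

    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "ffce62741223dc66a92b5b29c88e68e15f46caf3")
    disk = "86d3faa9-af9e-47de-bc0f-3e211167604f_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", f"vms/{disk}", "--size", "1G",
                    *auth], check=True)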
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.774 349552 DEBUG nova.objects.instance [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'migration_context' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.790 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Ensure instance console log exists: /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.792 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 153 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 71 KiB/s wr, 1 op/s
Dec 05 02:09:16 compute-0 nova_compute[349548]: 2025-12-05 02:09:16.117 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Successfully created port: 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:09:16
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.meta']
Dec 05 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
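The ceph-mgr balancer interleaved here is background noise relative to the instance build: an upmap optimization pass over all eleven pools that found nothing to move ("prepared 0/10 changes" means zero of the at most ten allowed changes per pass). Its state can be inspected out-of-band; a trivial sketch:

    import subprocess

    # Prints the mgr balancer's mode, plans and last optimization result.
    print(subprocess.run(["ceph", "balancer", "status"], check=True,
                         capture_output=True, text=True).stdout)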
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.077 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.305 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Successfully updated port: 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.321 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.321 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.322 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:09:17 compute-0 ceph-mon[192914]: pgmap v1807: 321 pgs: 321 active+clean; 153 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 71 KiB/s wr, 1 op/s
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.584 349552 DEBUG nova.compute.manager [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.585 349552 DEBUG nova.compute.manager [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing instance network info cache due to event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.587 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.710 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:09:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec 05 02:09:18 compute-0 nova_compute[349548]: 2025-12-05 02:09:18.477 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:19 compute-0 ceph-mon[192914]: pgmap v1808: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.631 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.658 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.659 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance network_info: |[{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
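The network_info blob Nova just cached (logged twice above, once on the cache update and once on hand-off to _allocate_network_async) is plain JSON. A short sketch of pulling out the values that matter for the guest definition below, assuming the blob has been captured into the string network_info_json:

    import json

    vifs = json.loads(network_info_json)  # the list logged above
    vif = vifs[0]
    mac = vif["address"]                                    # fa:16:3e:57:08:95
    tap = vif["devname"]                                    # tap5ce2a2f7-a9
    ip = vif["network"]["subnets"][0]["ips"][0]["address"]  # 10.100.0.8
    mtu = vif["network"]["meta"]["mtu"]                     # 1442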
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.661 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.662 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.667 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start _get_guest_xml network_info=[{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.680 349552 WARNING nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.696 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.697 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.705 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.706 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.707 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.707 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.709 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.709 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.710 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.711 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.712 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.712 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.713 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.714 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.715 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.715 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
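The nova.virt.hardware lines above show the topology search for this 1-vCPU flavor: no flavor or image preference (0:0:0), effectively unbounded limits (65536 each), a single possible factorization, so sockets=1, cores=1, threads=1 wins. Conceptually the search enumerates factorizations of the vCPU count that fit the limits; a simplified sketch, not Nova's actual _get_possible_cpu_topologies:

    # Every (sockets, cores, threads) whose product equals the vCPU count
    # and respects the per-field maxima, mirroring the limits logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the one topology logged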
Dec 05 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.721 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec 05 02:09:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:09:20 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827995369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.318 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.356 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:20 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3827995369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.383 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:09:20 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270812209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.891 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
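Both "ceph mon dump --format=json" runs above serve the same purpose: discovering monitor addresses for the RBD <host> elements in the disk XML that follows (one lookup per RBD-backed disk, root and config drive). A sketch of that lookup; stripping the "/nonce" suffix from "addr" reflects the mon dump JSON format, while the surrounding code is illustrative rather than Nova's verbatim rbd_utils:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    # Each mon entry carries an "addr" like "192.168.122.100:6789/0";
    # drop the trailing "/nonce", then split host from port.
    hosts = []
    for mon in json.loads(out)["mons"]:
        addr = mon["addr"].split("/")[0]   # "192.168.122.100:6789"
        host, port = addr.rsplit(":", 1)
        hosts.append((host, port))         # becomes <host name=... port=.../>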
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.897 349552 DEBUG nova.virt.libvirt.vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:09:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.900 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.905 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.911 349552 DEBUG nova.objects.instance [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'pci_devices' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.965 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <uuid>86d3faa9-af9e-47de-bc0f-3e211167604f</uuid>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <name>instance-00000009</name>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:name>tempest-ServersTestManualDisk-server-1615802566</nova:name>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:09:19</nova:creationTime>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:user uuid="7eb322b6163b466fb7721796e0d10c1f">tempest-ServersTestManualDisk-1464391732-project-member</nova:user>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:project uuid="7771751d84d348319b2c3d632191b59c">tempest-ServersTestManualDisk-1464391732</nova:project>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <nova:port uuid="5ce2a2f7-a9e2-4922-b684-fefcfe3f6307">
Dec 05 02:09:20 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <system>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="serial">86d3faa9-af9e-47de-bc0f-3e211167604f</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="uuid">86d3faa9-af9e-47de-bc0f-3e211167604f</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </system>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <os>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </os>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <features>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </features>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/86d3faa9-af9e-47de-bc0f-3e211167604f_disk">
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </source>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config">
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </source>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:09:20 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:57:08:95"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <target dev="tap5ce2a2f7-a9"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/console.log" append="off"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <video>
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </video>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:09:20 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:09:20 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:09:20 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:09:20 compute-0 nova_compute[349548]: </domain>
Dec 05 02:09:20 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
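The <domain> document above is the end product of _get_guest_xml; the next step is handing it to libvirt. A minimal sketch of that hand-off with the libvirt Python bindings, assuming the XML is held in the string guest_xml (Nova's driver additionally wires up event waiting, rollback and power-state sync, all omitted here):

    import libvirt

    # Core of what follows _get_guest_xml: define the persistent domain
    # from the generated XML, then start it.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(guest_xml)  # guest_xml: the <domain> document above
        dom.create()                     # boots instance-00000009
    finally:
        conn.close()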
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.984 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Preparing to wait for external event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.987 349552 DEBUG nova.virt.libvirt.vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:09:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.987 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.988 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.989 349552 DEBUG os_vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.991 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.993 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.997 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.997 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce2a2f7-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.998 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ce2a2f7-a9, col_values=(('external_ids', {'iface-id': '5ce2a2f7-a9e2-4922-b684-fefcfe3f6307', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:08:95', 'vm-uuid': '86d3faa9-af9e-47de-bc0f-3e211167604f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:21 compute-0 NetworkManager[49092]: <info>  [1764900561.0014] manager: (tap5ce2a2f7-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.004 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.013 349552 INFO os_vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9')
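The plug sequence above issues two OVSDB transactions: AddBridgeCommand, which "caused no change" because br-int already exists, then AddPortCommand plus a DbSetCommand writing the external_ids that let OVN find and bind the Neutron port. os-vif speaks OVSDB directly rather than forking a CLI; a hedged sketch of the equivalent one-shot ovs-vsctl invocation, with all values copied from the log:

    import subprocess

    # Same net effect as the logged transaction: add the tap port to br-int
    # (idempotently, via --may-exist) and set the OVN binding metadata.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap5ce2a2f7-a9",
         "--", "set", "Interface", "tap5ce2a2f7-a9",
         "external_ids:iface-id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:57:08:95",
         "external_ids:vm-uuid=86d3faa9-af9e-47de-bc0f-3e211167604f"],
        check=True)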
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.086 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.088 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.088 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No VIF found with MAC fa:16:3e:57:08:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.089 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Using config drive
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.139 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.352 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updated VIF entry in instance network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.353 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.367 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:09:21 compute-0 ceph-mon[192914]: pgmap v1809: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec 05 02:09:21 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1270812209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.614 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating config drive at /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.622 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwbo5204 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.780 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwbo5204" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.817 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.827 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.076 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.078 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deleting local config drive /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config because it was imported into RBD.
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.079 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 205 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 2.5 MiB/s wr, 43 op/s
Dec 05 02:09:22 compute-0 kernel: tap5ce2a2f7-a9: entered promiscuous mode
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.1333] manager: (tap5ce2a2f7-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Dec 05 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00094|binding|INFO|Claiming lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for this chassis.
Dec 05 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00095|binding|INFO|5ce2a2f7-a9e2-4922-b684-fefcfe3f6307: Claiming fa:16:3e:57:08:95 10.100.0.8
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00096|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 ovn-installed in OVS
Dec 05 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00097|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 up in Southbound
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.149 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:08:95 10.100.0.8'], port_security=['fa:16:3e:57:08:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86d3faa9-af9e-47de-bc0f-3e211167604f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5a068ec-72e0-4934-878b-07d85634c361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7771751d84d348319b2c3d632191b59c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f6337f-8150-484e-95c9-0297abbd01b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0bc16ee-3841-439b-8236-7c21ef336dbd, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.152 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.158 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 in datapath f5a068ec-72e0-4934-878b-07d85634c361 bound to our chassis
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.161 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5a068ec-72e0-4934-878b-07d85634c361
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.176 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c94c99-99a5-4b15-9a6d-a86beaf4baf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.177 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5a068ec-71 in ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.180 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5a068ec-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.180 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff237d-9806-4cbb-8905-3a5f2b84a7af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.181 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2799c3-a565-4ada-abb5-eec6d150b7df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 systemd-machined[138700]: New machine qemu-9-instance-00000009.
Dec 05 02:09:22 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.196 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[97721fc3-07f3-477d-b161-63095e8e4c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 systemd-udevd[443466]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2248] device (tap5ce2a2f7-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2256] device (tap5ce2a2f7-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.245 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6f24ba-deaf-4a07-8176-1699947b2fb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.279 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[05e56605-c46e-491e-9b81-b54ceb685838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2870] manager: (tapf5a068ec-70): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.287 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[093b7c9a-3db7-471e-9279-fa430bdab6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.334 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[16d7e619-d5d1-4e21-bacb-779924431c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.342 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a8649c2e-15a0-424c-849a-3d7cd1c3735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.3764] device (tapf5a068ec-70): carrier: link connected
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.383 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a6912f-ace9-4141-894c-7282373bd766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.408 349552 DEBUG nova.compute.manager [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.409 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.409 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.410 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.411 349552 DEBUG nova.compute.manager [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Processing event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.418 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[10f4b1ec-aeea-4680-9659-d1bbccfc1b54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5a068ec-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fe:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664903, 'reachable_time': 16555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443497, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.442 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[064a2da4-d865-486c-a7f1-c4a36262f667]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fe72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664903, 'tstamp': 664903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 443498, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.476 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8652f6-b231-4f73-a60a-06d15fc4bdeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5a068ec-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fe:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664903, 'reachable_time': 16555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 443499, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.518 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c221df4e-2aae-4f86-9dd7-1075a11c2adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.636 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5f9f2a-2d86-4b4d-b7b1-f27726fba883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.638 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5a068ec-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.639 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.640 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5a068ec-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.6453] manager: (tapf5a068ec-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 kernel: tapf5a068ec-70: entered promiscuous mode
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.656 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.659 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5a068ec-70, col_values=(('external_ids', {'iface-id': '607284a9-7bf5-4106-9085-2fdecab38aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.661 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00098|binding|INFO|Releasing lport 607284a9-7bf5-4106-9085-2fdecab38aa1 from this chassis (sb_readonly=0)
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.685 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.686 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[63ac9297-e948-436b-b0bb-6263ebedfd20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.691 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-f5a068ec-72e0-4934-878b-07d85634c361
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID f5a068ec-72e0-4934-878b-07d85634c361
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.693 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'env', 'PROCESS_TAG=haproxy-f5a068ec-72e0-4934-878b-07d85634c361', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5a068ec-72e0-4934-878b-07d85634c361.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.844 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8440719, 86d3faa9-af9e-47de-bc0f-3e211167604f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.845 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Started (Lifecycle Event)
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.847 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.852 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.858 349552 INFO nova.virt.libvirt.driver [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance spawned successfully.
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.859 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.872 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.881 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.886 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.888 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.897 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.899 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.901 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.903 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.909 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.910 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8441749, 86d3faa9-af9e-47de-bc0f-3e211167604f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.911 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Paused (Lifecycle Event)
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.940 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.948 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8512092, 86d3faa9-af9e-47de-bc0f-3e211167604f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.948 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Resumed (Lifecycle Event)
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.971 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.976 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.981 349552 INFO nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 8.25 seconds to spawn the instance on the hypervisor.
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.981 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.996 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:09:23 compute-0 nova_compute[349548]: 2025-12-05 02:09:23.045 349552 INFO nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 9.36 seconds to build instance.
Dec 05 02:09:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:23 compute-0 nova_compute[349548]: 2025-12-05 02:09:23.064 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.236545161 +0000 UTC m=+0.074151499 container create b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.19406906 +0000 UTC m=+0.031675448 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:09:23 compute-0 systemd[1]: Started libpod-conmon-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope.
Dec 05 02:09:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e031ad808e71024fd09cbfc3c286046cee526722de24473065b1214afbd5c889/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.361162414 +0000 UTC m=+0.198768762 container init b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.369544539 +0000 UTC m=+0.207150877 container start b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:09:23 compute-0 ceph-mon[192914]: pgmap v1810: 321 pgs: 321 active+clean; 205 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 2.5 MiB/s wr, 43 op/s
Dec 05 02:09:23 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : New worker (443594) forked
Dec 05 02:09:23 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : Loading success.
Dec 05 02:09:23 compute-0 ovn_controller[89286]: 2025-12-05T02:09:23Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:ba:4f 10.100.0.11
Dec 05 02:09:23 compute-0 ovn_controller[89286]: 2025-12-05T02:09:23Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:ba:4f 10.100.0.11
Dec 05 02:09:23 compute-0 podman[443604]: 2025-12-05 02:09:23.69531611 +0000 UTC m=+0.113393709 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:09:23 compute-0 podman[443603]: 2025-12-05 02:09:23.695749002 +0000 UTC m=+0.105403855 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 02:09:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 217 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.526 349552 DEBUG nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.528 349552 WARNING nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received unexpected event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with vm_state active and task_state None.
Dec 05 02:09:25 compute-0 ceph-mon[192914]: pgmap v1811: 321 pgs: 321 active+clean; 217 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.001 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 225 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 362 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.700 349552 DEBUG nova.compute.manager [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.701 349552 DEBUG nova.compute.manager [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing instance network info cache due to event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.702 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.703 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.703 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
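The refresh itself runs under a named "refresh_cache-<uuid>" lock so concurrent events for the same instance serialize their Neutron queries. oslo.concurrency exposes that primitive directly; a small sketch (the fetch/cache helpers are hypothetical stand-ins):

    # Named-lock pattern from oslo.concurrency; lockutils.lock() is the
    # real context manager behind the Acquiring/Acquired lines above.
    from oslo_concurrency import lockutils

    def refresh_network_info(uuid, fetch_from_neutron, cache):
        with lockutils.lock('refresh_cache-%s' % uuid):
            nw_info = fetch_from_neutron(uuid)   # hypothetical helper
            cache[uuid] = nw_info
            return nw_info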
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001442745979032363 of space, bias 1.0, pg target 0.4328237937097089 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
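Each pg_autoscaler pass above multiplies a pool's share of raw capacity by its bias and a cluster-wide PG budget, then rounds to a power of two. The numbers are consistent with roughly a 300x multiplier (e.g. 3 OSDs at the default mon_target_pg_per_osd of 100 with replica size 1), with small pools pinned at their pg_num_min; those constants are assumptions read off the log, not pulled from this cluster's config:

    # Rough reconstruction of the pg target numbers above. ASSUMPTIONS:
    # 3 OSDs, mon_target_pg_per_osd=100, pool size 1, per-pool pg_num_min;
    # the real autoscaler also applies a 3x threshold before resizing.
    def pg_target(usage_ratio, bias, n_osds=3, pg_per_osd=100,
                  pool_size=1, pg_num_min=32):
        raw = usage_ratio * bias * n_osds * pg_per_osd / pool_size
        pg = 1
        while pg < raw:              # round up to a power of two
            pg *= 2
        return raw, max(pg, pg_num_min)

    print(pg_target(0.001442745979032363, 1.0))                  # 'vms'  -> (~0.4328, 32)
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # meta   -> (~0.00061, 16)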
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.080 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 ceph-mon[192914]: pgmap v1812: 321 pgs: 321 active+clean; 225 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 362 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.514 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.515 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.516 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.517 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.518 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.521 349552 INFO nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Terminating instance
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.523 349552 DEBUG nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:09:27 compute-0 kernel: tap5ce2a2f7-a9 (unregistering): left promiscuous mode
Dec 05 02:09:27 compute-0 NetworkManager[49092]: <info>  [1764900567.6260] device (tap5ce2a2f7-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00099|binding|INFO|Releasing lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 from this chassis (sb_readonly=0)
Dec 05 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00100|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 down in Southbound
Dec 05 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00101|binding|INFO|Removing iface tap5ce2a2f7-a9 ovn-installed in OVS
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.644 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.652 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:08:95 10.100.0.8'], port_security=['fa:16:3e:57:08:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86d3faa9-af9e-47de-bc0f-3e211167604f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5a068ec-72e0-4934-878b-07d85634c361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7771751d84d348319b2c3d632191b59c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90f6337f-8150-484e-95c9-0297abbd01b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0bc16ee-3841-439b-8236-7c21ef336dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.655 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 in datapath f5a068ec-72e0-4934-878b-07d85634c361 unbound from our chassis
Dec 05 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.657 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5a068ec-72e0-4934-878b-07d85634c361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
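The Matched UPDATE line shows the metadata agent's ovsdbapp row-event machinery: a subclass declares the table and event type, and match_fn/run decide what to do with each update. A stripped-down version of that shape (simplified from Neutron's real event classes; the agent object and handler are hypothetical):

    # Sketch of an ovsdbapp Port_Binding watcher, modeled on the event
    # printed above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only fire when the binding actually moved: 'chassis' changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            self.agent.handle_binding_change(row.logical_port, row.chassis)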
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.661 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6a0463-faf8-4787-b21d-9e5854c045bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.663 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 namespace which is not needed anymore
Dec 05 02:09:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 05 02:09:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 5.838s CPU time.
Dec 05 02:09:27 compute-0 systemd-machined[138700]: Machine qemu-9-instance-00000009 terminated.
Dec 05 02:09:27 compute-0 podman[443643]: 2025-12-05 02:09:27.747731826 +0000 UTC m=+0.143903294 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:09:27 compute-0 podman[443642]: 2025-12-05 02:09:27.750447582 +0000 UTC m=+0.161493937 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.767 349552 INFO nova.virt.libvirt.driver [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance destroyed successfully.
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.768 349552 DEBUG nova.objects.instance [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'resources' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.791 349552 DEBUG nova.virt.libvirt.vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:09:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:09:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.791 349552 DEBUG nova.network.os_vif_util [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.792 349552 DEBUG nova.network.os_vif_util [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.792 349552 DEBUG os_vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.794 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce2a2f7-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.803 349552 INFO os_vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9')
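The unplug is ultimately a single OVSDB transaction, the DelPortCommand above. The ovs-vsctl equivalent of that transaction, wrapped in Python (a sketch; os-vif itself talks OVSDB via ovsdbapp rather than shelling out):

    # CLI equivalent of DelPortCommand(port=tap5ce2a2f7-a9, bridge=br-int,
    # if_exists=True); --if-exists makes the delete idempotent.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap5ce2a2f7-a9'],
        check=True)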
Dec 05 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : haproxy version is 2.8.14-c23fe91
Dec 05 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : path to executable is /usr/sbin/haproxy
Dec 05 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [WARNING]  (443592) : Exiting Master process...
Dec 05 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [ALERT]    (443592) : Current worker (443594) exited with code 143 (Terminated)
Dec 05 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [WARNING]  (443592) : All workers exited. Exiting... (0)
Dec 05 02:09:27 compute-0 systemd[1]: libpod-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope: Deactivated successfully.
Dec 05 02:09:27 compute-0 podman[443710]: 2025-12-05 02:09:27.840394024 +0000 UTC m=+0.063025468 container died b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Dec 05 02:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1-userdata-shm.mount: Deactivated successfully.
Dec 05 02:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e031ad808e71024fd09cbfc3c286046cee526722de24473065b1214afbd5c889-merged.mount: Deactivated successfully.
Dec 05 02:09:27 compute-0 podman[443710]: 2025-12-05 02:09:27.898621176 +0000 UTC m=+0.121252620 container cleanup b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 02:09:27 compute-0 systemd[1]: libpod-conmon-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope: Deactivated successfully.
Dec 05 02:09:27 compute-0 podman[443755]: 2025-12-05 02:09:27.997581279 +0000 UTC m=+0.070110046 container remove b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.026 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[41101bde-cb3b-403d-8123-9102592dbd7b]: (4, ('Fri Dec  5 02:09:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 (b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1)\nb603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1\nFri Dec  5 02:09:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 (b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1)\nb603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17f6aa88-71a5-48a3-9efc-25157f7b5cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.029 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5a068ec-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.031 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:28 compute-0 kernel: tapf5a068ec-70: left promiscuous mode
Dec 05 02:09:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.061 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.065 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9d386ec2-86ba-440a-8771-97b24bbace7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.075 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5d47cae4-5367-415a-ab6b-ff9dde4928b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.076 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ffe86-6b81-411a-b6b2-b7360c69c2bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.096 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45135b-13ca-4e37-b6f3-9da867f784a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664893, 'reachable_time': 30665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443770, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.098 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.099 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[b7aaa2df-29ab-4538-b0cb-f59d49462972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:09:28 compute-0 systemd[1]: run-netns-ovnmeta\x2df5a068ec\x2d72e0\x2d4934\x2d878b\x2d07d85634c361.mount: Deactivated successfully.
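With the haproxy sidecar gone and the tap port removed, remove_netns deletes the now-empty ovnmeta namespace and systemd reaps its bind mount. The shell-level equivalent of that last step (Neutron actually goes through privsep-wrapped pyroute2 helpers):

    # Equivalent of the remove_netns call logged above.
    import subprocess

    subprocess.run(
        ['ip', 'netns', 'delete',
         'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361'],
        check=True)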
Dec 05 02:09:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 148 op/s
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.654 349552 INFO nova.virt.libvirt.driver [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deleting instance files /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f_del
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.656 349552 INFO nova.virt.libvirt.driver [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deletion of /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f_del complete
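The _del suffix in those two lines reflects a rename-then-remove pattern: the instance directory is first renamed out of the way, so an interrupted delete leaves an obviously stale *_del directory rather than a half-removed live one. A sketch of that pattern (inferred from the paths in the log, not Nova's exact code):

    # Rename-then-remove, as implied by the "_del" path above.
    import os
    import shutil

    base = '/var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f'
    os.rename(base, base + '_del')   # atomic within one filesystem
    shutil.rmtree(base + '_del')     # then delete the renamed tree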
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.729 349552 INFO nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 1.20 seconds to destroy the instance on the hypervisor.
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.730 349552 DEBUG oslo.service.loopingcall [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.731 349552 DEBUG nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.731 349552 DEBUG nova.network.neutron [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.843 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updated VIF entry in instance network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.844 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.864 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.116 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.119 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.120 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.121 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.123 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.124 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.125 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.126 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.128 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.129 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.130 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.131 349552 WARNING nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received unexpected event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with vm_state active and task_state deleting.
Dec 05 02:09:29 compute-0 ceph-mon[192914]: pgmap v1813: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 148 op/s
Dec 05 02:09:29 compute-0 podman[443774]: 2025-12-05 02:09:29.694101692 +0000 UTC m=+0.108233665 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, version=9.4, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git)
Dec 05 02:09:29 compute-0 podman[158197]: time="2025-12-05T02:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec 05 02:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9098 "" "Go-http-client/1.1"
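Those two GET lines are a client polling the libpod REST API over the podman socket. The same containers/json query can be reproduced by hand (the socket path below is the rootful default and an assumption here):

    # Re-issuing the libpod query from the access log above.
    import subprocess

    subprocess.run([
        'curl', '-s', '--unix-socket', '/run/podman/podman.sock',
        'http://d/v4.9.3/libpod/containers/json?all=true'])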
Dec 05 02:09:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 123 op/s
Dec 05 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.601 349552 DEBUG nova.network.neutron [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.622 349552 INFO nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 1.89 seconds to deallocate network for instance.
Dec 05 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.680 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.681 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.802 349552 DEBUG oslo_concurrency.processutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:31 compute-0 sshd-session[443772]: Connection reset by authenticating user root 91.202.233.33 port 39144 [preauth]
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.257 349552 DEBUG nova.compute.manager [req-68f2878b-1058-4f92-8068-5b00662a7b1d req-0cf5325b-3d22-4842-8676-6fea7e9265e5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-deleted-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:09:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:09:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446295945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.340 349552 DEBUG oslo_concurrency.processutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
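That half-second subprocess is how the libvirt driver sizes its RBD-backed disk inventory: run ceph df as the openstack client and parse the JSON. A sketch using the same oslo.concurrency helper the log shows:

    # The exact command from the log, plus minimal JSON parsing; the
    # total_bytes key is part of ceph df's JSON output.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_bytes'])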
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.349 349552 DEBUG nova.compute.provider_tree [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.367 349552 DEBUG nova.scheduler.client.report [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
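Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, which is why an 8-vCPU host advertises room for 32 vCPUs here:

    # Effective capacity implied by the inventory dict above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2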
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.393 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.419 349552 INFO nova.scheduler.client.report [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Deleted allocations for instance 86d3faa9-af9e-47de-bc0f-3e211167604f
Dec 05 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:09:31 compute-0 ceph-mon[192914]: pgmap v1814: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 123 op/s
Dec 05 02:09:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2446295945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.492 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 209 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Dec 05 02:09:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:32.340 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:32.344 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:33 compute-0 ovn_controller[89286]: 2025-12-05T02:09:33Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:81:87 10.100.0.10
Dec 05 02:09:33 compute-0 ovn_controller[89286]: 2025-12-05T02:09:33Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:81:87 10.100.0.10
Dec 05 02:09:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:33 compute-0 ceph-mon[192914]: pgmap v1815: 321 pgs: 321 active+clean; 209 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Dec 05 02:09:33 compute-0 sshd-session[443815]: Connection reset by authenticating user root 91.202.233.33 port 23552 [preauth]
Dec 05 02:09:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 196 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Dec 05 02:09:35 compute-0 ceph-mon[192914]: pgmap v1816: 321 pgs: 321 active+clean; 196 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Dec 05 02:09:36 compute-0 nova_compute[349548]: 2025-12-05 02:09:36.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 214 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Dec 05 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.086 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:37 compute-0 ceph-mon[192914]: pgmap v1817: 321 pgs: 321 active+clean; 214 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Dec 05 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.801 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:09:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.252 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.253 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.253 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.254 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:09:39 compute-0 sshd-session[443817]: Invalid user ftpuser from 91.202.233.33 port 23586
Dec 05 02:09:39 compute-0 ceph-mon[192914]: pgmap v1818: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec 05 02:09:39 compute-0 podman[443820]: 2025-12-05 02:09:39.551664682 +0000 UTC m=+0.116854896 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:09:39 compute-0 podman[443822]: 2025-12-05 02:09:39.572949829 +0000 UTC m=+0.120790067 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec 05 02:09:39 compute-0 podman[443819]: 2025-12-05 02:09:39.583358821 +0000 UTC m=+0.144278065 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 02:09:39 compute-0 podman[443821]: 2025-12-05 02:09:39.609179164 +0000 UTC m=+0.167149216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 05 02:09:39 compute-0 sshd-session[443817]: Connection reset by invalid user ftpuser 91.202.233.33 port 23586 [preauth]
Dec 05 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.977 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.994 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.995 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.995 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.996 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.023 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.025 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 05 02:09:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:09:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743466495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.523 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.620 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.625 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.625 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:09:41 compute-0 ovn_controller[89286]: 2025-12-05T02:09:41Z|00102|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:09:41 compute-0 ovn_controller[89286]: 2025-12-05T02:09:41Z|00103|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.138 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.139 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3627MB free_disk=59.89728927612305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.139 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.227 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.228 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 59e35a32-9023-4e49-be56-9da10df3027f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.228 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.229 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.310 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:09:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:41.348 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:09:41 compute-0 ceph-mon[192914]: pgmap v1819: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 05 02:09:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/743466495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:09:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169650476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.829 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.844 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.867 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.901 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.902 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.092 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Dec 05 02:09:42 compute-0 sshd-session[443901]: Connection reset by authenticating user root 91.202.233.33 port 23774 [preauth]
Dec 05 02:09:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3169650476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.764 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900567.762436, 86d3faa9-af9e-47de-bc0f-3e211167604f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.765 349552 INFO nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Stopped (Lifecycle Event)
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.801 349552 DEBUG nova.compute.manager [None req-7ff89984-929c-48d5-b3f7-6d728247f215 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.805 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.973 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.974 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:43 compute-0 sudo[443948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:43 compute-0 sudo[443948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:43 compute-0 sudo[443948]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:43 compute-0 sudo[443974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:09:43 compute-0 sudo[443974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:43 compute-0 sudo[443974]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:43 compute-0 sudo[443999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:43 compute-0 sudo[443999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:43 compute-0 sudo[443999]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:43 compute-0 sudo[444024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:09:43 compute-0 sudo[444024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:43 compute-0 ceph-mon[192914]: pgmap v1820: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Dec 05 02:09:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Dec 05 02:09:44 compute-0 sudo[444024]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 507f4b1c-554c-426e-afae-a0606ea0ae7c does not exist
Dec 05 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7f770fcd-d9c9-4a52-ad27-97b4a63de641 does not exist
Dec 05 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cfa94e93-ce9c-4a56-93c4-3bc8fa06d7b2 does not exist
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:09:44 compute-0 sudo[444079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:44 compute-0 ceph-mon[192914]: pgmap v1821: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:09:44 compute-0 sudo[444079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:44 compute-0 sudo[444079]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:44 compute-0 sudo[444104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:09:44 compute-0 sudo[444104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:44 compute-0 sudo[444104]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:44 compute-0 sudo[444129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:44 compute-0 sudo[444129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:44 compute-0 sudo[444129]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:44 compute-0 sudo[444154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:09:44 compute-0 sudo[444154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:45 compute-0 sshd-session[443947]: Invalid user user from 91.202.233.33 port 23788
Dec 05 02:09:45 compute-0 nova_compute[349548]: 2025-12-05 02:09:45.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:45 compute-0 nova_compute[349548]: 2025-12-05 02:09:45.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:09:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:09:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:09:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:09:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.430405639 +0000 UTC m=+0.065965190 container create 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:09:45 compute-0 sshd-session[443947]: Connection reset by invalid user user 91.202.233.33 port 23788 [preauth]
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.403456294 +0000 UTC m=+0.039015805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:45 compute-0 systemd[1]: Started libpod-conmon-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope.
Dec 05 02:09:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:09:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:09:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.581948187 +0000 UTC m=+0.217507718 container init 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.59883476 +0000 UTC m=+0.234394311 container start 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.606067703 +0000 UTC m=+0.241627304 container attach 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:09:45 compute-0 clever_beaver[444229]: 167 167
Dec 05 02:09:45 compute-0 systemd[1]: libpod-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope: Deactivated successfully.
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.612059881 +0000 UTC m=+0.247619432 container died 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 02:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c209eca1529f4ae98dee9e390e0a531c193d59a1a2485d0af26fac056218c198-merged.mount: Deactivated successfully.
Dec 05 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.691035874 +0000 UTC m=+0.326595395 container remove 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:09:45 compute-0 systemd[1]: libpod-conmon-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope: Deactivated successfully.
Dec 05 02:09:45 compute-0 podman[444255]: 2025-12-05 02:09:45.956130025 +0000 UTC m=+0.072694469 container create 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:45.926648598 +0000 UTC m=+0.043213092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:46 compute-0 systemd[1]: Started libpod-conmon-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope.
Dec 05 02:09:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.104183065 +0000 UTC m=+0.220747549 container init 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 796 KiB/s wr, 34 op/s
Dec 05 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.132359204 +0000 UTC m=+0.248923658 container start 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.137816337 +0000 UTC m=+0.254380821 container attach 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:09:46 compute-0 ceph-mon[192914]: pgmap v1822: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 796 KiB/s wr, 34 op/s
Dec 05 02:09:47 compute-0 nova_compute[349548]: 2025-12-05 02:09:47.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:47 compute-0 gifted_babbage[444269]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:09:47 compute-0 gifted_babbage[444269]: --> relative data size: 1.0
Dec 05 02:09:47 compute-0 gifted_babbage[444269]: --> All data devices are unavailable
Dec 05 02:09:47 compute-0 systemd[1]: libpod-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Deactivated successfully.
Dec 05 02:09:47 compute-0 podman[444255]: 2025-12-05 02:09:47.525454392 +0000 UTC m=+1.642018906 container died 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:09:47 compute-0 systemd[1]: libpod-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Consumed 1.321s CPU time.
Dec 05 02:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994-merged.mount: Deactivated successfully.
Dec 05 02:09:47 compute-0 podman[444255]: 2025-12-05 02:09:47.62954018 +0000 UTC m=+1.746104654 container remove 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:09:47 compute-0 systemd[1]: libpod-conmon-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Deactivated successfully.
Dec 05 02:09:47 compute-0 sudo[444154]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:47 compute-0 nova_compute[349548]: 2025-12-05 02:09:47.808 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:47 compute-0 sudo[444308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:47 compute-0 sudo[444308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:47 compute-0 sudo[444308]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:47 compute-0 sudo[444333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:09:47 compute-0 sudo[444333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:47 compute-0 sudo[444333]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.070093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588070142, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 902, "num_deletes": 257, "total_data_size": 1145184, "memory_usage": 1163296, "flush_reason": "Manual Compaction"}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588080372, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1133727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36795, "largest_seqno": 37696, "table_properties": {"data_size": 1129197, "index_size": 2118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10112, "raw_average_key_size": 19, "raw_value_size": 1119931, "raw_average_value_size": 2153, "num_data_blocks": 94, "num_entries": 520, "num_filter_entries": 520, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900516, "oldest_key_time": 1764900516, "file_creation_time": 1764900588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 10313 microseconds, and 3680 cpu microseconds.
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.080413) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1133727 bytes OK
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.080427) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082344) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082355) EVENT_LOG_v1 {"time_micros": 1764900588082351, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1140737, prev total WAL file size 1140737, number of live WAL files 2.
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.083225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1107KB)], [83(8386KB)]
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588083277, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 9721784, "oldest_snapshot_seqno": -1}
Dec 05 02:09:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 22 KiB/s wr, 8 op/s
Dec 05 02:09:48 compute-0 sudo[444358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:48 compute-0 sudo[444358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:48 compute-0 sudo[444358]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5659 keys, 9618790 bytes, temperature: kUnknown
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588156863, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9618790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9579864, "index_size": 23648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 143771, "raw_average_key_size": 25, "raw_value_size": 9476413, "raw_average_value_size": 1674, "num_data_blocks": 971, "num_entries": 5659, "num_filter_entries": 5659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.158100) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9618790 bytes
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.161129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.8 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(17.1) write-amplify(8.5) OK, records in: 6189, records dropped: 530 output_compression: NoCompression
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.161172) EVENT_LOG_v1 {"time_micros": 1764900588161154, "job": 48, "event": "compaction_finished", "compaction_time_micros": 73761, "compaction_time_cpu_micros": 40880, "output_level": 6, "num_output_files": 1, "total_output_size": 9618790, "num_input_records": 6189, "num_output_records": 5659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588163096, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588167876, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:09:48 compute-0 sudo[444383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:09:48 compute-0 sudo[444383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:48 compute-0 podman[444446]: 2025-12-05 02:09:48.893827556 +0000 UTC m=+0.086725072 container create 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:09:48 compute-0 podman[444446]: 2025-12-05 02:09:48.862836667 +0000 UTC m=+0.055734183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:48 compute-0 systemd[1]: Started libpod-conmon-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope.
Dec 05 02:09:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.05521673 +0000 UTC m=+0.248114306 container init 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.072615547 +0000 UTC m=+0.265513063 container start 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:09:49 compute-0 eager_haibt[444461]: 167 167
Dec 05 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.080632022 +0000 UTC m=+0.273529588 container attach 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:09:49 compute-0 systemd[1]: libpod-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope: Deactivated successfully.
Dec 05 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.083834442 +0000 UTC m=+0.276731918 container died 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:09:49 compute-0 ceph-mon[192914]: pgmap v1823: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 22 KiB/s wr, 8 op/s
Dec 05 02:09:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98996a6939f00fe69cc1db8392d6aacbc1e99cd585e003e46c1556617997079-merged.mount: Deactivated successfully.
Dec 05 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.163491975 +0000 UTC m=+0.356389491 container remove 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 02:09:49 compute-0 systemd[1]: libpod-conmon-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope: Deactivated successfully.
Dec 05 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.492766434 +0000 UTC m=+0.109215222 container create 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.455694635 +0000 UTC m=+0.072143473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:49 compute-0 systemd[1]: Started libpod-conmon-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope.
Dec 05 02:09:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.68633986 +0000 UTC m=+0.302788708 container init 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.718049099 +0000 UTC m=+0.334497877 container start 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.725507948 +0000 UTC m=+0.341956706 container attach 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:09:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec 05 02:09:50 compute-0 festive_herschel[444503]: {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     "0": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "devices": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "/dev/loop3"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             ],
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_name": "ceph_lv0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_size": "21470642176",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "name": "ceph_lv0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "tags": {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_name": "ceph",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.crush_device_class": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.encrypted": "0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_id": "0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.vdo": "0"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             },
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "vg_name": "ceph_vg0"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         }
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     ],
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     "1": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "devices": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "/dev/loop4"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             ],
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_name": "ceph_lv1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_size": "21470642176",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "name": "ceph_lv1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "tags": {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_name": "ceph",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.crush_device_class": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.encrypted": "0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_id": "1",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.vdo": "0"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             },
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "vg_name": "ceph_vg1"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         }
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     ],
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     "2": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "devices": [
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "/dev/loop5"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             ],
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_name": "ceph_lv2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_size": "21470642176",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "name": "ceph_lv2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "tags": {
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.cluster_name": "ceph",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.crush_device_class": "",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.encrypted": "0",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osd_id": "2",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:                 "ceph.vdo": "0"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             },
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "type": "block",
Dec 05 02:09:50 compute-0 festive_herschel[444503]:             "vg_name": "ceph_vg2"
Dec 05 02:09:50 compute-0 festive_herschel[444503]:         }
Dec 05 02:09:50 compute-0 festive_herschel[444503]:     ]
Dec 05 02:09:50 compute-0 festive_herschel[444503]: }
Dec 05 02:09:50 compute-0 systemd[1]: libpod-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope: Deactivated successfully.
Dec 05 02:09:50 compute-0 podman[444486]: 2025-12-05 02:09:50.647354607 +0000 UTC m=+1.263803365 container died 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc-merged.mount: Deactivated successfully.
Dec 05 02:09:50 compute-0 podman[444486]: 2025-12-05 02:09:50.754815719 +0000 UTC m=+1.371264507 container remove 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:09:50 compute-0 systemd[1]: libpod-conmon-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope: Deactivated successfully.
Dec 05 02:09:50 compute-0 sudo[444383]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:50 compute-0 sudo[444525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:50 compute-0 sudo[444525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:50 compute-0 sudo[444525]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:51 compute-0 sudo[444550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:09:51 compute-0 sudo[444550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:51 compute-0 sudo[444550]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:51 compute-0 ceph-mon[192914]: pgmap v1824: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec 05 02:09:51 compute-0 sudo[444575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:51 compute-0 sudo[444575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:51 compute-0 sudo[444575]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:51 compute-0 sudo[444600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:09:51 compute-0 sudo[444600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:51 compute-0 podman[444663]: 2025-12-05 02:09:51.918345221 +0000 UTC m=+0.063544402 container create c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:09:51 compute-0 systemd[1]: Started libpod-conmon-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope.
Dec 05 02:09:51 compute-0 podman[444663]: 2025-12-05 02:09:51.895763208 +0000 UTC m=+0.040962379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.033385315 +0000 UTC m=+0.178584526 container init c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.049088615 +0000 UTC m=+0.194287826 container start c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.056216035 +0000 UTC m=+0.201415246 container attach c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:09:52 compute-0 admiring_murdock[444678]: 167 167
Dec 05 02:09:52 compute-0 systemd[1]: libpod-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope: Deactivated successfully.
Dec 05 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.061713679 +0000 UTC m=+0.206912860 container died c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 02:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1550ea99a5c93a50b29fa34b54f9760e2c92222d3c699c1fcace7d59daa0003-merged.mount: Deactivated successfully.
Dec 05 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.101 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec 05 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.119814058 +0000 UTC m=+0.265013239 container remove c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:09:52 compute-0 systemd[1]: libpod-conmon-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope: Deactivated successfully.
Dec 05 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.358124088 +0000 UTC m=+0.072413261 container create 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:09:52 compute-0 systemd[1]: Started libpod-conmon-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope.
Dec 05 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.334547867 +0000 UTC m=+0.048837060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:09:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.505371565 +0000 UTC m=+0.219660818 container init 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.541831777 +0000 UTC m=+0.256120970 container start 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.548660868 +0000 UTC m=+0.262950121 container attach 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:53 compute-0 ceph-mon[192914]: pgmap v1825: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec 05 02:09:53 compute-0 trusting_clarke[444717]: {
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_id": 0,
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "type": "bluestore"
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     },
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_id": 1,
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "type": "bluestore"
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     },
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_id": 2,
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:         "type": "bluestore"
Dec 05 02:09:53 compute-0 trusting_clarke[444717]:     }
Dec 05 02:09:53 compute-0 trusting_clarke[444717]: }
Dec 05 02:09:53 compute-0 systemd[1]: libpod-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Deactivated successfully.
Dec 05 02:09:53 compute-0 systemd[1]: libpod-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Consumed 1.097s CPU time.
Dec 05 02:09:53 compute-0 podman[444702]: 2025-12-05 02:09:53.644326729 +0000 UTC m=+1.358615922 container died 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc-merged.mount: Deactivated successfully.
Dec 05 02:09:53 compute-0 podman[444702]: 2025-12-05 02:09:53.724503937 +0000 UTC m=+1.438793090 container remove 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:09:53 compute-0 systemd[1]: libpod-conmon-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Deactivated successfully.
Dec 05 02:09:53 compute-0 sudo[444600]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:09:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:09:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c41b4246-d5e3-4803-acf9-602258eb2971 does not exist
Dec 05 02:09:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2f74545f-d479-4cfe-80d3-89e00adab0be does not exist
Dec 05 02:09:53 compute-0 podman[444760]: 2025-12-05 02:09:53.865475678 +0000 UTC m=+0.088864742 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 02:09:53 compute-0 sudo[444775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:09:53 compute-0 sudo[444775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:53 compute-0 podman[444761]: 2025-12-05 02:09:53.902353332 +0000 UTC m=+0.121530588 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:09:53 compute-0 sudo[444775]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:53 compute-0 sudo[444823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:09:54 compute-0 sudo[444823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:09:54 compute-0 sudo[444823]: pam_unix(sudo:session): session closed for user root
Dec 05 02:09:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Dec 05 02:09:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:09:54 compute-0 ceph-mon[192914]: pgmap v1826: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Dec 05 02:09:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Dec 05 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.205 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.206 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.102 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:57 compute-0 ceph-mon[192914]: pgmap v1827: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Dec 05 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.585 349552 DEBUG nova.objects.instance [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'flavor' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.638 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.638 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.815 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:09:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:09:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec 05 02:09:58 compute-0 podman[444848]: 2025-12-05 02:09:58.731745437 +0000 UTC m=+0.128633256 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:09:58 compute-0 podman[444849]: 2025-12-05 02:09:58.733212369 +0000 UTC m=+0.131567459 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:09:59 compute-0 ceph-mon[192914]: pgmap v1828: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec 05 02:09:59 compute-0 podman[158197]: time="2025-12-05T02:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec 05 02:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9108 "" "Go-http-client/1.1"
Dec 05 02:10:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec 05 02:10:00 compute-0 podman[444885]: 2025-12-05 02:10:00.723175278 +0000 UTC m=+0.122476281 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Dec 05 02:10:00 compute-0 nova_compute[349548]: 2025-12-05 02:10:00.892 349552 DEBUG nova.network.neutron [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.045 349552 DEBUG nova.compute.manager [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.046 349552 DEBUG nova.compute.manager [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.046 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:01 compute-0 ceph-mon[192914]: pgmap v1829: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec 05 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.679 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:02 compute-0 nova_compute[349548]: 2025-12-05 02:10:02.105 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.2 KiB/s wr, 1 op/s
Dec 05 02:10:02 compute-0 nova_compute[349548]: 2025-12-05 02:10:02.820 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:03 compute-0 ceph-mon[192914]: pgmap v1830: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.2 KiB/s wr, 1 op/s
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.929 349552 DEBUG nova.network.neutron [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.961 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.962 349552 DEBUG nova.compute.manager [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.963 349552 DEBUG nova.compute.manager [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] network_info to inject: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.965 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.966 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:10:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.1 KiB/s wr, 1 op/s
Dec 05 02:10:05 compute-0 ceph-mon[192914]: pgmap v1831: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.1 KiB/s wr, 1 op/s
Dec 05 02:10:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.345 349552 DEBUG nova.objects.instance [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'flavor' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.349 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.349 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.380 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.392 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.532 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.534 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.548 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.550 349552 INFO nova.compute.claims [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.710 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.100 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.103 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.108 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.136 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.138 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:10:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826563883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.233 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.249 349552 DEBUG nova.compute.provider_tree [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:10:07 compute-0 ceph-mon[192914]: pgmap v1832: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec 05 02:10:07 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3826563883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.273 349552 DEBUG nova.scheduler.client.report [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.304 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.306 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.376 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.377 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.415 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.462 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.563 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.566 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.567 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating image(s)
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.618 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.677 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.738 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.749 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.824 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.846 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.848 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.849 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.850 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.900 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.920 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.951 349552 DEBUG nova.policy [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f18ce80284524cbb9497cac2c6e6bf32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:10:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 2 op/s
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.265 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.266 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.269 349552 INFO nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Rebooting instance
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.296 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.296 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.297 349552 DEBUG nova.network.neutron [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.435 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.582 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] resizing rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.818 349552 DEBUG nova.objects.instance [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.836 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.836 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Ensure instance console log exists: /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.837 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.837 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.838 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:09 compute-0 ceph-mon[192914]: pgmap v1833: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 2 op/s
Dec 05 02:10:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Dec 05 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.261 349552 DEBUG nova.network.neutron [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.500 349552 DEBUG nova.compute.manager [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.501 349552 DEBUG nova.compute.manager [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.501 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:10 compute-0 podman[445091]: 2025-12-05 02:10:10.709042346 +0000 UTC m=+0.108321833 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:10:10 compute-0 podman[445092]: 2025-12-05 02:10:10.713950074 +0000 UTC m=+0.109690542 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:10:10 compute-0 podman[445094]: 2025-12-05 02:10:10.745336346 +0000 UTC m=+0.127529963 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 05 02:10:10 compute-0 podman[445093]: 2025-12-05 02:10:10.750406388 +0000 UTC m=+0.139818368 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:10:11 compute-0 nova_compute[349548]: 2025-12-05 02:10:11.250 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Successfully created port: 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:10:11 compute-0 ceph-mon[192914]: pgmap v1834: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Dec 05 02:10:11 compute-0 nova_compute[349548]: 2025-12-05 02:10:11.400 349552 DEBUG nova.network.neutron [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 250 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.133 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.136 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:12 compute-0 kernel: tapa240e2ef-17 (unregistering): left promiscuous mode
Dec 05 02:10:12 compute-0 NetworkManager[49092]: <info>  [1764900612.4742] device (tapa240e2ef-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.493 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00104|binding|INFO|Releasing lport a240e2ef-1773-4509-ac04-eae1f5d36e08 from this chassis (sb_readonly=0)
Dec 05 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00105|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 down in Southbound
Dec 05 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00106|binding|INFO|Removing iface tapa240e2ef-17 ovn-installed in OVS
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.503 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.507 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.509 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9bc378d-2d4b-4990-99ce-02656b1fec0d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.511 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8247dd-216e-4cb0-a2ff-ce8ec0804fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.512 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace which is not needed anymore
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 05 02:10:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 45.020s CPU time.
Dec 05 02:10:12 compute-0 systemd-machined[138700]: Machine qemu-8-instance-00000008 terminated.
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : haproxy version is 2.8.14-c23fe91
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : path to executable is /usr/sbin/haproxy
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : Exiting Master process...
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : Exiting Master process...
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [ALERT]    (442971) : Current worker (442973) exited with code 143 (Terminated)
Dec 05 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : All workers exited. Exiting... (0)
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.752 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 systemd[1]: libpod-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope: Deactivated successfully.
Dec 05 02:10:12 compute-0 conmon[442967]: conmon 4c5edeef5f34dfd67481 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope/container/memory.events
Dec 05 02:10:12 compute-0 podman[445193]: 2025-12-05 02:10:12.764671631 +0000 UTC m=+0.095050831 container died 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.772 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance destroyed successfully.
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.772 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'resources' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.796 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.797 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.798 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.799 349552 DEBUG os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.802 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.802 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa240e2ef-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.811 349552 INFO os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')
Dec 05 02:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05-userdata-shm.mount: Deactivated successfully.
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.822 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start _get_guest_xml network_info=[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-834861e6ee78dc388a1bf92deca51436b692390ae47802f4ad88169beea7eb85-merged.mount: Deactivated successfully.
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.833 349552 WARNING nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.841 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.842 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:10:12 compute-0 podman[445193]: 2025-12-05 02:10:12.843368341 +0000 UTC m=+0.173747541 container cleanup 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.853 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.853 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.857 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:12 compute-0 systemd[1]: libpod-conmon-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope: Deactivated successfully.
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.873 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:12 compute-0 podman[445229]: 2025-12-05 02:10:12.957714033 +0000 UTC m=+0.081010016 container remove 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.971 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5943a3-698b-47e2-846a-36a68656f019]: (4, ('Fri Dec  5 02:10:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05)\n4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05\nFri Dec  5 02:10:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05)\n4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.975 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[012f2406-55b5-4fa5-b948-e1aa64e63fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.977 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:12 compute-0 kernel: tapa9bc378d-20: left promiscuous mode
Dec 05 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.004 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[abd352f9-90a1-4f19-b3fc-0ca7deac7f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.025 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb386c9-945d-44b8-acfe-694f778aeb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.027 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[780c55b8-dfa2-4b65-9a7c-acfa5f83ec0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.056 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3baa72-d6f0-4f54-a286-2305a30882a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662057, 'reachable_time': 32430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445252, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.060 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.060 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0dc74c-55f9-4686-bf88-e1c8567e7e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:13 compute-0 systemd[1]: run-netns-ovnmeta\x2da9bc378d\x2d2d4b\x2d4990\x2d99ce\x2d02656b1fec0d.mount: Deactivated successfully.
Dec 05 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 DEBUG nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 WARNING nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state reboot_started_hard.
Dec 05 02:10:13 compute-0 ceph-mon[192914]: pgmap v1835: 321 pgs: 321 active+clean; 250 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Dec 05 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:10:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684689122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.418 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.492 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:10:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797718717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.969 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.971 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.971 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.972 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.974 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.997 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <uuid>59e35a32-9023-4e49-be56-9da10df3027f</uuid>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <name>instance-00000008</name>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:name>tempest-ServerActionsTestJSON-server-1678320742</nova:name>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:10:12</nova:creationTime>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:user uuid="b4745812b7eb47908ded25b1eb7c7328">tempest-ServerActionsTestJSON-1914764435-project-member</nova:user>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:project uuid="dd34a6a62cf94436a2b836fa4f49c4fa">tempest-ServerActionsTestJSON-1914764435</nova:project>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <nova:port uuid="a240e2ef-1773-4509-ac04-eae1f5d36e08">
Dec 05 02:10:13 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <system>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="serial">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="uuid">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </system>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <os>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </os>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <features>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </features>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:10:13 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk">
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </source>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk.config">
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </source>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:10:13 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:16:81:87"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <target dev="tapa240e2ef-17"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log" append="off"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <video>
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </video>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <input type="keyboard" bus="usb"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:10:13 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:13 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:10:14 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:10:14 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:10:14 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:10:14 compute-0 nova_compute[349548]: </domain>
Dec 05 02:10:14 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.998 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.999 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.001 349552 DEBUG os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.002 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.003 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.006 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.006 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa240e2ef-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.007 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa240e2ef-17, col_values=(('external_ids', {'iface-id': 'a240e2ef-1773-4509-ac04-eae1f5d36e08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:81:87', 'vm-uuid': '59e35a32-9023-4e49-be56-9da10df3027f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.0113] manager: (tapa240e2ef-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.021 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.024 349552 INFO os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')
Dec 05 02:10:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 05 02:10:14 compute-0 kernel: tapa240e2ef-17: entered promiscuous mode
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.149 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 systemd-udevd[445178]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00107|binding|INFO|Claiming lport a240e2ef-1773-4509-ac04-eae1f5d36e08 for this chassis.
Dec 05 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00108|binding|INFO|a240e2ef-1773-4509-ac04-eae1f5d36e08: Claiming fa:16:3e:16:81:87 10.100.0.10
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1520] manager: (tapa240e2ef-17): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.160 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.162 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d bound to our chassis
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.165 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00109|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 ovn-installed in OVS
Dec 05 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00110|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 up in Southbound
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1767] device (tapa240e2ef-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.180 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7d303a4b-e81f-4427-8db6-81915c4d7d09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.182 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa9bc378d-21 in ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.184 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa9bc378d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.184 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b37c84f6-ca83-4974-83b1-05e0cb869a57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1862] device (tapa240e2ef-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.186 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f9311d52-696b-45cd-87d4-3753af3639c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.201 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[1baeb28a-9e0f-411f-a1db-dabbe935c884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 systemd-machined[138700]: New machine qemu-10-instance-00000008.
Dec 05 02:10:14 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000008.
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.231 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[79e300bd-46cd-4992-a825-ec434e2f7b2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.264 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[73c0394c-74b1-4603-9716-9fbe12d24ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.2723] manager: (tapa9bc378d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.271 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[28ab3055-c708-4827-ab9e-fcbbb14386c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.309 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1b82918c-3a11-489f-b89e-17c3c2322201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.313 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[17e29d4f-3c4f-4607-8b07-ed9277853f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/684689122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2797718717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.3423] device (tapa9bc378d-20): carrier: link connected
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.349 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe454db-0a1c-431b-b594-2b95ebefa4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.370 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c81264b6-4bb9-48ae-b5a4-a9febab48746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670100, 'reachable_time': 44877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445350, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.391 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f0355278-e171-44f3-8c57-069e13c9900f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:feea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670100, 'tstamp': 670100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445351, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.410 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e49a377f-1997-4b93-bce6-515d7b210f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670100, 'reachable_time': 44877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445352, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.460 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[009903df-7345-49ad-a606-cdce9f9c1190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.538 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bef3ccc-51aa-4dc6-9f99-0e15736143d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.539 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.540 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.541 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9bc378d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 kernel: tapa9bc378d-20: entered promiscuous mode
Dec 05 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.5483] manager: (tapa9bc378d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.547 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.549 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa9bc378d-20, col_values=(('external_ids', {'iface-id': '3d0916d7-6f03-4daf-8f3b-126228223c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00111|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.583 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.586 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.587 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ea29af44-e7cb-4690-a8de-35bb772cc23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.589 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.590 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'env', 'PROCESS_TAG=haproxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a9bc378d-2d4b-4990-99ce-02656b1fec0d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.909 349552 DEBUG nova.network.neutron [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.927 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.928 349552 DEBUG nova.compute.manager [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.928 349552 DEBUG nova.compute.manager [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] network_info to inject: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.930 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.930 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.070 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Successfully updated port: 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.090 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.091 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquired lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.091 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Removed pending event for 59e35a32-9023-4e49-be56-9da10df3027f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900615.0912287, 59e35a32-9023-4e49-be56-9da10df3027f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Resumed (Lifecycle Event)
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.095 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.098743874 +0000 UTC m=+0.081434218 container create 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.100 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance rebooted successfully.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.100 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.129 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.133 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:10:15 compute-0 systemd[1]: Started libpod-conmon-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope.
Dec 05 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.0665834 +0000 UTC m=+0.049273774 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:10:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.177 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.178 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900615.0963786, 59e35a32-9023-4e49-be56-9da10df3027f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.178 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Started (Lifecycle Event)
Dec 05 02:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a4098034b79b17a1b0a33ca61c1f904969485d36ccd5269a78d56bbd845de7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.200 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.205 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.209150815 +0000 UTC m=+0.191841179 container init 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.215 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.216020078 +0000 UTC m=+0.198710422 container start 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:10:15 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : New worker (445447) forked
Dec 05 02:10:15 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : Loading success.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 WARNING nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 WARNING nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-changed-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Refreshing instance network info cache due to event network-changed-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:15 compute-0 ceph-mon[192914]: pgmap v1836: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 05 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.482 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.188 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.189 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.190 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.191 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.192 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.194 349552 INFO nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Terminating instance
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.197 349552 DEBUG nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:16 compute-0 kernel: tap2ac46e0a-68 (unregistering): left promiscuous mode
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:16 compute-0 NetworkManager[49092]: <info>  [1764900616.3258] device (tap2ac46e0a-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00112|binding|INFO|Releasing lport 2ac46e0a-6888-440f-b155-d4b0e8677304 from this chassis (sb_readonly=0)
Dec 05 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00113|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 down in Southbound
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.357 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00114|binding|INFO|Removing iface tap2ac46e0a-68 ovn-installed in OVS
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.369 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:ba:4f 10.100.0.11'], port_security=['fa:16:3e:ca:ba:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '939ae9f2-b89c-4a19-96de-ab4dfc882a35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fd91b173-28fd-4506-a2d4-b70d7da34ab9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1a9bd25-2abf-40fe-aac7-26f2653ba067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2ac46e0a-6888-440f-b155-d4b0e8677304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:10:16
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 05 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.375 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac46e0a-6888-440f-b155-d4b0e8677304 in datapath 77ae1103-3871-4354-8e08-09bb5c0c1ad1 unbound from our chassis
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.379 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77ae1103-3871-4354-8e08-09bb5c0c1ad1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.380 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e3df8d-70b7-4254-b8e7-81f9d0e2e647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.381 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 namespace which is not needed anymore
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.393 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 05 02:10:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 46.899s CPU time.
Dec 05 02:10:16 compute-0 systemd-machined[138700]: Machine qemu-7-instance-00000007 terminated.
Dec 05 02:10:16 compute-0 NetworkManager[49092]: <info>  [1764900616.4344] manager: (tap2ac46e0a-68): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.438 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.461 349552 INFO nova.virt.libvirt.driver [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance destroyed successfully.
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.461 349552 DEBUG nova.objects.instance [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'resources' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.483 349552 DEBUG nova.virt.libvirt.vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.483 349552 DEBUG nova.network.os_vif_util [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.484 349552 DEBUG nova.network.os_vif_util [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.485 349552 DEBUG os_vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.489 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ac46e0a-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.497 349552 INFO os_vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68')
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : haproxy version is 2.8.14-c23fe91
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : path to executable is /usr/sbin/haproxy
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : Exiting Master process...
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : Exiting Master process...
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [ALERT]    (442557) : Current worker (442559) exited with code 143 (Terminated)
Dec 05 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : All workers exited. Exiting... (0)
Dec 05 02:10:16 compute-0 systemd[1]: libpod-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope: Deactivated successfully.
Dec 05 02:10:16 compute-0 podman[445500]: 2025-12-05 02:10:16.602622151 +0000 UTC m=+0.064276826 container died 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16-userdata-shm.mount: Deactivated successfully.
Dec 05 02:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66e076ce0b97b1ffb0be792f84404fb2f83ab9c6ac5cd8cc44b4f6206b0bf01-merged.mount: Deactivated successfully.
Dec 05 02:10:16 compute-0 podman[445500]: 2025-12-05 02:10:16.65493111 +0000 UTC m=+0.116585775 container cleanup 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:10:16 compute-0 systemd[1]: libpod-conmon-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope: Deactivated successfully.
Dec 05 02:10:16 compute-0 podman[445531]: 2025-12-05 02:10:16.7646203 +0000 UTC m=+0.070062478 container remove 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.779 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f5185954-fed7-450d-aa7e-1bc570526a0b]: (4, ('Fri Dec  5 02:10:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 (12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16)\n12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16\nFri Dec  5 02:10:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 (12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16)\n12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.782 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4faea1-29f1-43ef-b1b1-194d4918ca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.786 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ae1103-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.788 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 kernel: tap77ae1103-30: left promiscuous mode
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.796 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[693f6dc2-4cbe-44f9-88fc-83f03cf1a281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.814 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[732f6eaf-1118-4b09-ac69-4b27d1abb871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.815 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d98ac2f6-c49a-496d-af1a-85291d9d18fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.839 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[858247d4-c3ce-4767-a3e9-74714e6a38fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661233, 'reachable_time': 42827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445545, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d77ae1103\x2d3871\x2d4354\x2d8e08\x2d09bb5c0c1ad1.mount: Deactivated successfully.
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.843 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.844 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1d0f26-afee-4c5d-b127-2bc91bf10660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.116 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.214 349552 INFO nova.virt.libvirt.driver [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deleting instance files /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35_del
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.214 349552 INFO nova.virt.libvirt.driver [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deletion of /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35_del complete
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.303 349552 INFO nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 1.10 seconds to destroy the instance on the hypervisor.
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.303 349552 DEBUG oslo.service.loopingcall [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.304 349552 DEBUG nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.304 349552 DEBUG nova.network.neutron [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:10:17 compute-0 ceph-mon[192914]: pgmap v1837: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:10:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.296 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.297 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.297 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.298 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.298 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.299 349552 WARNING nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.300 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.300 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.301 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.302 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.302 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.303 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.363 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.406 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Releasing lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.407 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance network_info: |[{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.408 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.409 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Refreshing network info cache for port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
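
The network_info payloads in these cache-update lines are plain JSON, so the addressing they carry can be read out directly. A self-contained sketch using a copy of the blob logged at 02:10:18.363, trimmed here only for brevity:

    import json

    # Trimmed copy of the network_info list from the journal line above;
    # in practice, paste the full payload.
    raw = ('[{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", '
           '"address": "fa:16:3e:6a:63:ca", '
           '"network": {"subnets": [{"cidr": "10.100.0.0/28", '
           '"ips": [{"address": "10.100.0.12", "type": "fixed"}]}]}}]')

    for vif in json.loads(raw):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips)
    # -> 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 fa:16:3e:6a:63:ca ['10.100.0.12']
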
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.414 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start _get_guest_xml network_info=[{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.428 349552 WARNING nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.449 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.450 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.457 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.458 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
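
The two probes above (v1 missing, v2 present) boil down to inspecting the host's cgroup mounts and the unified hierarchy's controller list. A rough approximation in plain Python; Nova's exact implementation lives at the host.py paths cited in the log:

    import os

    def has_cgroupsv1_cpu_controller():
        # A v1 host mounts a cgroup filesystem whose mount options
        # include "cpu"; this host has none ("CPU controller missing").
        with open('/proc/mounts') as f:
            for line in f:
                fields = line.split()
                if (len(fields) >= 4 and fields[2] == 'cgroup'
                        and 'cpu' in fields[3].split(',')):
                    return True
        return False

    def has_cgroupsv2_cpu_controller():
        # On the unified hierarchy the enabled controllers sit in one
        # file; "cpu" is listed here ("CPU controller found on host").
        path = '/sys/fs/cgroup/cgroup.controllers'
        if not os.path.exists(path):
            return False
        with open(path) as f:
            return 'cpu' in f.read().split()

    print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())
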
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.459 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.459 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.460 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.462 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.462 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.463 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.463 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.464 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.464 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
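
With no flavor or image constraints (all limits and preferences 0:0:0), the search space is every (sockets, cores, threads) factorization of the vCPU count under the 65536 ceilings, which for one vCPU collapses to the single topology 1:1:1. An illustrative brute-force sketch of that enumeration; Nova's real logic is in nova/virt/hardware.py:

    # Enumerate every topology whose product equals the vcpu count and
    # which stays under the per-dimension maxima logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"
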
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.468 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.896 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.898 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.916 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:10:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988017926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.990 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
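
The ceph mon dump call that just returned is an ordinary subprocess whose JSON output lists the monitors Nova embeds as <host> entries in the guest XML further down. Reproducing it by hand, assuming the same client id and conf file as the log; field names are as emitted by current Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        # e.g. compute-0 192.168.122.100:6789/0
        print(mon['name'], mon.get('addr'))
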
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.047 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.056 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:19 compute-0 ceph-mon[192914]: pgmap v1838: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 05 02:10:19 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3988017926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:10:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860217210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.514 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.517 349552 DEBUG nova.virt.libvirt.vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-ServerAddressesTestJSON-1048961571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:10:07Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.518 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.520 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.523 349552 DEBUG nova.objects.instance [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.550 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <uuid>3391e1ba-0e6b-4113-b402-027e997b3cb9</uuid>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <name>instance-0000000a</name>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:name>tempest-ServerAddressesTestJSON-server-2017371141</nova:name>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:10:18</nova:creationTime>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:user uuid="f18ce80284524cbb9497cac2c6e6bf32">tempest-ServerAddressesTestJSON-1048961571-project-member</nova:user>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:project uuid="f120ce30568246929ef2dc1a9f0bd0c7">tempest-ServerAddressesTestJSON-1048961571</nova:project>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <nova:port uuid="26b950d4-e9c2-45ea-8e3a-bd06bf2227d4">
Dec 05 02:10:19 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <system>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="serial">3391e1ba-0e6b-4113-b402-027e997b3cb9</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="uuid">3391e1ba-0e6b-4113-b402-027e997b3cb9</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </system>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <os>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </os>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <features>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </features>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk">
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </source>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config">
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </source>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:10:19 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:6a:63:ca"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <target dev="tap26b950d4-e9"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/console.log" append="off"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <video>
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </video>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:10:19 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:10:19 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:10:19 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:10:19 compute-0 nova_compute[349548]: </domain>
Dec 05 02:10:19 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
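
Once _get_guest_xml ends, the driver hands this XML to libvirt to define and boot the domain. A minimal sketch of that step with the libvirt Python bindings, assuming the XML above has been saved to guest.xml; the define-then-create split mirrors virsh define followed by virsh start, while Nova's actual spawn path wraps much more around it:

    import libvirt

    with open('guest.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # boot instance-0000000a
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
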
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.553 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Preparing to wait for external event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.555 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.556 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.557 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.558 349552 DEBUG nova.virt.libvirt.vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-ServerAddressesTestJSON-1048961571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:10:07Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.559 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.560 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.561 349552 DEBUG os_vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.564 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.565 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.566 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.571 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.572 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26b950d4-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.573 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26b950d4-e9, col_values=(('external_ids', {'iface-id': '26b950d4-e9c2-45ea-8e3a-bd06bf2227d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:63:ca', 'vm-uuid': '3391e1ba-0e6b-4113-b402-027e997b3cb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:19 compute-0 NetworkManager[49092]: <info>  [1764900619.5793] manager: (tap26b950d4-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.583 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.585 349552 INFO os_vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9')
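
The successful plug above is the net effect of the two ovsdbapp transactions logged at 02:10:19.565-573: an idempotent add-bridge (which caused no change) followed by add-port plus external_ids on the Interface row. The same change expressed as one atomic ovs-vsctl invocation, useful for reproducing the plug by hand; values copied from the log:

    import subprocess

    subprocess.check_call([
        'ovs-vsctl',
        '--may-exist', 'add-br', 'br-int',
        '--', '--may-exist', 'add-port', 'br-int', 'tap26b950d4-e9',
        '--', 'set', 'Interface', 'tap26b950d4-e9',
        'external_ids:iface-id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:6a:63:ca',
        'external_ids:vm-uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9',
    ])
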
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.661 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.666 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.668 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No VIF found with MAC fa:16:3e:6a:63:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.669 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Using config drive
Dec 05 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.736 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
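
The recurring "rbd image ... does not exist" probes map onto the Python rbd bindings: open the image and treat ImageNotFound as absence. A sketch assuming the same openstack client id, /etc/ceph/ceph.conf and vms pool seen throughout this log:

    import rados
    import rbd

    name = '3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config'
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx, name):
                    print('image exists')
            except rbd.ImageNotFound:
                # The case logged by nova.storage.rbd_utils above.
                print('rbd image does not exist')
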
Dec 05 02:10:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.369 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updated VIF entry in instance network info cache for port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.370 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:20 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/860217210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.387 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.420 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating config drive at /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.426 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntct65h4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.457 349552 DEBUG nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.458 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.458 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.459 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.459 349552 DEBUG nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.460 349552 WARNING nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received unexpected event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with vm_state active and task_state deleting.
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.524 349552 DEBUG nova.network.neutron [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.547 349552 INFO nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 3.24 seconds to deallocate network for instance.
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.570 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntct65h4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.617 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.627 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.663 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.665 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.773 349552 DEBUG oslo_concurrency.processutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.908 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.910 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deleting local config drive /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config because it was imported into RBD.
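
The lines ending here are one flow: build the config-2 ISO locally with mkisofs, confirm no stale <uuid>_disk.config image exists in RBD, import the ISO into the Ceph vms pool, then delete the local copy. A sketch reproducing the same two commands (arguments taken from the log; the staging directory is a hypothetical stand-in for the temporary /tmp/tmpntct65h4):

    # Sketch of the config-drive flow logged above; assumes mkisofs/rbd are
    # installed and the 'openstack' cephx client in /etc/ceph/ceph.conf.
    import os
    import subprocess

    inst = "3391e1ba-0e6b-4113-b402-027e997b3cb9"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the config-2 ISO from a staging directory (hypothetical path).
    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-quiet", "-J", "-r", "-V", "config-2", "/tmp/cfgdrive-staging"],
        check=True)

    # 2. Import into the 'vms' pool, then drop the local file (as Nova logs:
    #    "Deleting local config drive ... because it was imported into RBD").
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(iso)
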
Dec 05 02:10:20 compute-0 kernel: tap26b950d4-e9: entered promiscuous mode
Dec 05 02:10:20 compute-0 NetworkManager[49092]: <info>  [1764900620.9867] manager: (tap26b950d4-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.986 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:20 compute-0 ovn_controller[89286]: 2025-12-05T02:10:20Z|00115|binding|INFO|Claiming lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for this chassis.
Dec 05 02:10:20 compute-0 ovn_controller[89286]: 2025-12-05T02:10:20Z|00116|binding|INFO|26b950d4-e9c2-45ea-8e3a-bd06bf2227d4: Claiming fa:16:3e:6a:63:ca 10.100.0.12
Dec 05 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.009 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:63:ca 10.100.0.12'], port_security=['fa:16:3e:6a:63:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3391e1ba-0e6b-4113-b402-027e997b3cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff773210-0089-4a3b-936f-15f2b6743c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8661fcbe-cefc-4ef8-b7d8-1566fb9b4df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd9eaded-949d-4594-9bc0-f87080068e48, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.011 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 in datapath ff773210-0089-4a3b-936f-15f2b6743c77 bound to our chassis
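
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's IDL notification machinery: the agent registers row events against the southbound Port_Binding table and reacts when chassis flips to this host. A hedged sketch of that pattern (simplified; neutron's real event class carries more state, and the match logic here is illustrative):

    # Sketch of an ovsdbapp row event like the one matched in the log.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors the logged repr: events=('update',), table='Port_Binding'
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Fire only when 'chassis' changed from empty to set (just bound).
            return bool(row.chassis) and hasattr(old, "chassis") and not old.chassis

        def run(self, event, row, old):
            print(f"Port {row.logical_port} bound to our chassis")
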
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.012 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff773210-0089-4a3b-936f-15f2b6743c77
Dec 05 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00117|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 ovn-installed in OVS
Dec 05 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00118|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 up in Southbound
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.027 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.031 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea57c14-a9f3-40c8-a17b-ce6fed4c0e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.032 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff773210-01 in ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
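
The agent provisions one namespace per network (ovnmeta-<net_uuid>) with a veth pair: tapff773210-00 stays in the root namespace and is plugged into br-int further down, while tapff773210-01 moves inside to carry 169.254.169.254. A sketch of the same plumbing with pyroute2, the library behind the privsep ip_lib calls in these lines (requires root; addressing and error handling omitted; not Neutron's actual code):

    # Sketch of the veth-per-namespace plumbing the agent logs above.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77"
    netns.create(ns)

    ipr = IPRoute()
    # Pair: tapff773210-00 (root ns, later added to br-int) <-> tapff773210-01.
    ipr.link("add", ifname="tapff773210-00", kind="veth", peer="tapff773210-01")
    idx = ipr.link_lookup(ifname="tapff773210-01")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)   # move the inner end into the ns
    ipr.close()
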
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.035 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff773210-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.035 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[28cb1315-2a10-4d00-b8c7-fa6116273017]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.036 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2b91fe73-7005-482e-baa1-1c65e98757f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 systemd-udevd[445703]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.051 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[36263673-c419-48ce-b799-0d7c945a15f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.0573] device (tap26b950d4-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.0617] device (tap26b950d4-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:10:21 compute-0 systemd-machined[138700]: New machine qemu-11-instance-0000000a.
Dec 05 02:10:21 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.080 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7450c2-405f-484f-8084-7152f8282941]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.120 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[2adcb426-a8f5-4226-a357-7e0a65ef58e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.129 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[83ff9854-50b1-406d-9bd4-0b5989a66908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 systemd-udevd[445707]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.1350] manager: (tapff773210-00): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.168 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[5c190967-3b0f-44e3-bd26-83b1c1b1ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.173 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[60420da3-4206-4cda-9f47-3b1be7d597c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.2070] device (tapff773210-00): carrier: link connected
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.211 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4de7ea-4022-4ab6-bbfc-7208c0414d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.234 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fd036e86-e232-4a46-b119-c8dd84fa8678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff773210-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670786, 'reachable_time': 43944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445736, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.258 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f4bb63-b4b7-462f-b09a-c650c2916e4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:a70b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670786, 'tstamp': 670786}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445737, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:10:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024992637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.278 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd8fdf8-67f8-4723-a858-b5d36826e5f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff773210-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670786, 'reachable_time': 43944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445738, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.305 349552 DEBUG oslo_concurrency.processutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
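
The resource tracker refreshes disk inventory by shelling out to ceph df --format=json (0.53 s round trip here, audited by ceph-mon above). A sketch of running the same command and reading cluster totals; the JSON layout assumed below (a top-level "stats" object with byte counters) matches recent Ceph releases but should be verified against the deployed version:

    # Sketch: read cluster capacity the way the logged "ceph df" call implies.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    total_gb = stats["total_bytes"] / 1024**3
    avail_gb = stats["total_avail_bytes"] / 1024**3
    print(f"{avail_gb:.0f} GiB free of {total_gb:.0f} GiB")  # cf. "60 GiB / 60 GiB"
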
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.314 349552 DEBUG nova.compute.provider_tree [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.330 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0cf593-fff9-41a4-8ce6-01cf9d44a72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.332 349552 DEBUG nova.scheduler.client.report [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
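
The inventory record above fixes what the scheduler may place on this node: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class, which for the logged values comes to 32 VCPUs, 7168 MB of RAM, and about 52 GB of disk:

    # Capacity implied by the logged inventory: (total - reserved) * ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2
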
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.371 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:21 compute-0 ceph-mon[192914]: pgmap v1839: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 05 02:10:21 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2024992637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.419 349552 INFO nova.scheduler.client.report [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Deleted allocations for instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.428 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0622f133-2875-4adf-b48e-daf7de652dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.429 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff773210-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.429 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.430 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff773210-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:21 compute-0 kernel: tapff773210-00: entered promiscuous mode
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.432 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.4362] manager: (tapff773210-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.440 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff773210-00, col_values=(('external_ids', {'iface-id': 'ff2931b3-fb94-4976-be60-545b1f5dca2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00119|binding|INFO|Releasing lport ff2931b3-fb94-4976-be60-545b1f5dca2f from this chassis (sb_readonly=0)
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.472 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.474 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fe77d5-f3f3-4af9-96f9-72059a3f1f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.475 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-ff773210-0089-4a3b-936f-15f2b6743c77
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID ff773210-0089-4a3b-936f-15f2b6743c77
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.476 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'env', 'PROCESS_TAG=haproxy-ff773210-0089-4a3b-936f-15f2b6743c77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff773210-0089-4a3b-936f-15f2b6743c77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
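
The rendered haproxy config above is the per-network metadata proxy: it binds 169.254.169.254:80 inside the ovnmeta namespace, relays requests to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can map the client back to a port. From a guest on that network the standard query looks like the sketch below (the meta_data.json path is the conventional OpenStack metadata endpoint, not shown in this log):

    # What a guest on network ff773210-... would run against this proxy (sketch).
    import json
    import urllib.request

    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url, timeout=5) as resp:
        metadata = json.load(resp)
    print(metadata.get("uuid"))   # instance UUID resolved via X-OVN-Network-ID
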
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.517 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.701 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900621.7012355, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.702 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Started (Lifecycle Event)
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.817 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.825 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900621.7013443, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.825 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Paused (Lifecycle Event)
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.864 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.871 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.891 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] During sync_power_state the instance has a pending task (spawning). Skip.
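
The "Paused" sync above compares DB power_state 0 against the hypervisor's 3. Nova's power-state codes make these messages readable, and because a task (spawning) is still pending, the manager skips acting on the mismatch rather than forcing a state change:

    # Nova's power-state codes (nova.compute.power_state) behind these lines.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    db_state, vm_state = 0, 3      # values from the "Paused" sync line above
    print(POWER_STATE[db_state], "->", POWER_STATE[vm_state])  # NOSTATE -> PAUSED
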
Dec 05 02:10:21 compute-0 podman[445811]: 2025-12-05 02:10:21.941354042 +0000 UTC m=+0.102657234 container create 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:10:21 compute-0 podman[445811]: 2025-12-05 02:10:21.878072715 +0000 UTC m=+0.039375897 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:10:22 compute-0 systemd[1]: Started libpod-conmon-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope.
Dec 05 02:10:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f87d456a5d6144246a464ab2071935c153697f26dcdbc4ce01d7704fe82715/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:22 compute-0 podman[445811]: 2025-12-05 02:10:22.095209364 +0000 UTC m=+0.256512596 container init 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 02:10:22 compute-0 podman[445811]: 2025-12-05 02:10:22.105115502 +0000 UTC m=+0.266418694 container start 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:22 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : New worker (445832) forked
Dec 05 02:10:22 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : Loading success.
Dec 05 02:10:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 183 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.543 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-deleted-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.544 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.545 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.546 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.546 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.547 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Processing event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.548 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.549 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.550 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.550 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.551 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.552 349552 WARNING nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received unexpected event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with vm_state building and task_state spawning.
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.553 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.559 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900622.559234, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.561 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Resumed (Lifecycle Event)
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.564 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.571 349552 INFO nova.virt.libvirt.driver [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance spawned successfully.
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.572 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.578 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.587 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.619 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.622 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.623 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.624 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.625 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.626 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.627 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.678 349552 INFO nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 15.11 seconds to spawn the instance on the hypervisor.
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.679 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.740 349552 INFO nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 16.27 seconds to build instance.
Dec 05 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.761 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:23 compute-0 ceph-mon[192914]: pgmap v1840: 321 pgs: 321 active+clean; 183 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 05 02:10:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 109 op/s
Dec 05 02:10:24 compute-0 nova_compute[349548]: 2025-12-05 02:10:24.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:24 compute-0 podman[445841]: 2025-12-05 02:10:24.703818588 +0000 UTC m=+0.103665783 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:10:24 compute-0 podman[445842]: 2025-12-05 02:10:24.736087994 +0000 UTC m=+0.130567458 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:24.998 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.000 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.008 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.009 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.009 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.011 349552 INFO nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Terminating instance
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.013 349552 DEBUG nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:10:25 compute-0 kernel: tap26b950d4-e9 (unregistering): left promiscuous mode
Dec 05 02:10:25 compute-0 NetworkManager[49092]: <info>  [1764900625.1193] device (tap26b950d4-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00120|binding|INFO|Releasing lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 from this chassis (sb_readonly=0)
Dec 05 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00121|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 down in Southbound
Dec 05 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00122|binding|INFO|Removing iface tap26b950d4-e9 ovn-installed in OVS
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.147 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:63:ca 10.100.0.12'], port_security=['fa:16:3e:6a:63:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3391e1ba-0e6b-4113-b402-027e997b3cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff773210-0089-4a3b-936f-15f2b6743c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8661fcbe-cefc-4ef8-b7d8-1566fb9b4df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd9eaded-949d-4594-9bc0-f87080068e48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.150 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 in datapath ff773210-0089-4a3b-936f-15f2b6743c77 unbound from our chassis
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.152 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff773210-0089-4a3b-936f-15f2b6743c77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.155 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d023610a-abaa-45dc-9ef0-c26149adf90f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.156 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 namespace which is not needed anymore
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.171 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 05 02:10:25 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 3.271s CPU time.
Dec 05 02:10:25 compute-0 systemd-machined[138700]: Machine qemu-11-instance-0000000a terminated.
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.272 349552 INFO nova.virt.libvirt.driver [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance destroyed successfully.
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.272 349552 DEBUG nova.objects.instance [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'resources' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.295 349552 DEBUG nova.virt.libvirt.vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:10:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-ServerAddressesTestJSON-1048961571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:22Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.296 349552 DEBUG nova.network.os_vif_util [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.298 349552 DEBUG nova.network.os_vif_util [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.299 349552 DEBUG os_vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.301 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.302 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26b950d4-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.304 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.307 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.310 349552 INFO os_vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9')
Dec 05 02:10:25 compute-0 ceph-mon[192914]: pgmap v1841: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 109 op/s
Dec 05 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : haproxy version is 2.8.14-c23fe91
Dec 05 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : path to executable is /usr/sbin/haproxy
Dec 05 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [WARNING]  (445830) : Exiting Master process...
Dec 05 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [ALERT]    (445830) : Current worker (445832) exited with code 143 (Terminated)
Dec 05 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [WARNING]  (445830) : All workers exited. Exiting... (0)
Dec 05 02:10:25 compute-0 systemd[1]: libpod-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope: Deactivated successfully.
Dec 05 02:10:25 compute-0 podman[445921]: 2025-12-05 02:10:25.434973613 +0000 UTC m=+0.091373948 container died 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d-userdata-shm.mount: Deactivated successfully.
Dec 05 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f87d456a5d6144246a464ab2071935c153697f26dcdbc4ce01d7704fe82715-merged.mount: Deactivated successfully.
Dec 05 02:10:25 compute-0 podman[445921]: 2025-12-05 02:10:25.507607993 +0000 UTC m=+0.164008328 container cleanup 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:10:25 compute-0 systemd[1]: libpod-conmon-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope: Deactivated successfully.
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.551 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.551 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.553 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:10:25 compute-0 podman[445962]: 2025-12-05 02:10:25.632878261 +0000 UTC m=+0.083228248 container remove 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.644 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[69f07dd1-2a6c-4c10-9ca6-843dd9f061d9]: (4, ('Fri Dec  5 02:10:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 (1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d)\n1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d\nFri Dec  5 02:10:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 (1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d)\n1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.646 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7b1105-9ee7-447e-8111-2446b88bd4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.647 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff773210-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:25 compute-0 kernel: tapff773210-00: left promiscuous mode
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.650 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.676 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.679 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6617b5ba-ef4b-4ecf-ba36-326eab0483d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.695 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2396ee-73dc-4967-8653-b58a2affe564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.696 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cb071d55-8859-4f25-8e3e-d105530421e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.720 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c482841b-10ce-48cd-a4b5-aefb407694c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670777, 'reachable_time': 18995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445975, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.723 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.724 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[10355490-7fc5-4a2b-a1c7-01a50d564dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:10:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dff773210\x2d0089\x2d4a3b\x2d936f\x2d15f2b6743c77.mount: Deactivated successfully.
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.086 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:10:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 114 op/s
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.297 349552 INFO nova.virt.libvirt.driver [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deleting instance files /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9_del
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.298 349552 INFO nova.virt.libvirt.driver [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deletion of /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9_del complete
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.356 349552 INFO nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 1.34 seconds to destroy the instance on the hypervisor.
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.357 349552 DEBUG oslo.service.loopingcall [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.358 349552 DEBUG nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.359 349552 DEBUG nova.network.neutron [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011085768094354367 of space, bias 1.0, pg target 0.332573042830631 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:27 compute-0 ceph-mon[192914]: pgmap v1842: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 114 op/s
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.786 349552 DEBUG nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.786 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.787 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.787 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.788 349552 DEBUG nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.788 349552 WARNING nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received unexpected event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with vm_state active and task_state deleting.
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.837 349552 DEBUG nova.network.neutron [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.870 349552 INFO nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 1.51 seconds to deallocate network for instance.
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.916 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.917 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.000 349552 DEBUG oslo_concurrency.processutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 15 KiB/s wr, 153 op/s
Dec 05 02:10:28 compute-0 ovn_controller[89286]: 2025-12-05T02:10:28Z|00123|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:10:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138225154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.572 349552 DEBUG oslo_concurrency.processutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.586 349552 DEBUG nova.compute.provider_tree [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.615 349552 DEBUG nova.scheduler.client.report [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.659 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.728 349552 INFO nova.scheduler.client.report [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Deleted allocations for instance 3391e1ba-0e6b-4113-b402-027e997b3cb9
Dec 05 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.813 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:29 compute-0 ceph-mon[192914]: pgmap v1843: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 15 KiB/s wr, 153 op/s
Dec 05 02:10:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3138225154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:29 compute-0 podman[446000]: 2025-12-05 02:10:29.7303133 +0000 UTC m=+0.130179767 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:10:29 compute-0 podman[445999]: 2025-12-05 02:10:29.746642899 +0000 UTC m=+0.147529774 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:10:29 compute-0 podman[158197]: time="2025-12-05T02:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Dec 05 02:10:30 compute-0 nova_compute[349548]: 2025-12-05 02:10:30.047 349552 DEBUG nova.compute.manager [req-f9d46564-3e51-48bc-bc47-7c222d674257 req-29c9e701-2f99-4ed5-90e6-bcd8ee1ffe85 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-deleted-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:10:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 125 op/s
Dec 05 02:10:30 compute-0 nova_compute[349548]: 2025-12-05 02:10:30.306 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:10:31 compute-0 ceph-mon[192914]: pgmap v1844: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 125 op/s
Dec 05 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.456 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900616.4531763, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.456 349552 INFO nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Stopped (Lifecycle Event)
Dec 05 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.475 349552 DEBUG nova.compute.manager [None req-4023f92e-94f5-4bee-b40b-db03e84fd6c0 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:10:31 compute-0 ovn_controller[89286]: 2025-12-05T02:10:31Z|00124|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.651 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:31 compute-0 podman[446035]: 2025-12-05 02:10:31.750290292 +0000 UTC m=+0.132634446 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, maintainer=Red Hat, Inc.)
Dec 05 02:10:32 compute-0 nova_compute[349548]: 2025-12-05 02:10:32.123 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 130 op/s
Dec 05 02:10:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:32.651 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:10:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:32.654 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:10:32 compute-0 nova_compute[349548]: 2025-12-05 02:10:32.654 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:33 compute-0 ceph-mon[192914]: pgmap v1845: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 130 op/s
Dec 05 02:10:33 compute-0 nova_compute[349548]: 2025-12-05 02:10:33.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 KiB/s wr, 68 op/s
Dec 05 02:10:35 compute-0 nova_compute[349548]: 2025-12-05 02:10:35.309 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:35 compute-0 ceph-mon[192914]: pgmap v1846: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 KiB/s wr, 68 op/s
Dec 05 02:10:35 compute-0 nova_compute[349548]: 2025-12-05 02:10:35.922 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
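[Editor's note] The recurring "[POLLIN] on fd 25 __log_wakeup" messages come from the python-ovs poller that ovsdbapp builds on. A minimal sketch of that wait loop; the fd is a placeholder for the OVSDB connection socket ovsdbapp actually registers:

```python
# Sketch of the wait loop behind the "[POLLIN] on fd N" debug lines,
# using the python-ovs poller API.
import select
from ovs import poller

def wait_for_readable(fd, timeout_ms=5000):
    p = poller.Poller()
    p.fd_wait(fd, select.POLLIN)   # logged as "[POLLIN] on fd <fd>" on wakeup
    p.timer_wait(timeout_ms)       # don't block forever
    p.block()                      # returns once fd is readable or timer fires
```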
Dec 05 02:10:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 850 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Dec 05 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
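[Editor's note] The periodic-task lines above are oslo.service-driven methods on the compute manager. A minimal sketch of how such tasks are declared; names mirror the log, bodies are reduced to stubs:

```python
# Sketch of oslo.service periodic tasks like those logged above.
# _reclaim_queued_deletes logs "skipping..." when the interval is unset.
from oslo_service import periodic_task

class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task
    def _poll_rescued_instances(self, context):
        pass  # real task times out instances left in RESCUE

    @periodic_task.periodic_task
    def _reclaim_queued_deletes(self, context):
        interval = 0  # stands in for CONF.reclaim_instance_interval
        if interval <= 0:
            # matches "CONF.reclaim_instance_interval <= 0, skipping..."
            return
```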
Dec 05 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.127 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:37 compute-0 ceph-mon[192914]: pgmap v1847: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 850 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Dec 05 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.672 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:10:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 713 KiB/s rd, 1.2 KiB/s wr, 48 op/s
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.323 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.324 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
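[Editor's note] The warning above is about cycle time: with a single worker thread (as logged: "with [1] threads"), pollster runs serialize, so the cycle takes roughly the sum of their durations rather than the maximum. A short illustration, with sleep standing in for a pollster's work:

```python
# Why pollsters > worker threads lengthens the polling cycle.
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

pollsters = [f"pollster-{i}" for i in range(5)]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:  # 1 thread, 5 tasks
    list(executor.map(poll, pollsters))
print(f"cycle took {time.monotonic() - start:.2f}s")  # ~0.5s, not ~0.1s
```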
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
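[Editor's note] The run of "Registering pollster [<stevedore.extension.Extension ...>]" lines reflects pollsters loaded as stevedore entry-point extensions. A minimal sketch of enumerating the same set, assuming ceilometer's compute pollster namespace (`ceilometer.poll.compute`):

```python
# Sketch: list the compute pollsters published as stevedore entry points.
from stevedore import extension

mgr = extension.ExtensionManager(
    namespace='ceilometer.poll.compute',  # assumed entry-point group
    invoke_on_load=False,                 # enumerate only; instantiation needs a conf
)
for ext in mgr:
    print(ext.name, ext.entry_point)      # e.g. disk.device.read.bytes -> class path
```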
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.336 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 59e35a32-9023-4e49-be56-9da10df3027f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.338 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.931 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.933 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.934 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.307 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1982 Content-Type: application/json Date: Fri, 05 Dec 2025 02:10:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f3ceec08-ee75-4990-8404-30178aea2e92 x-openstack-request-id: req-f3ceec08-ee75-4990-8404-30178aea2e92 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.308 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "59e35a32-9023-4e49-be56-9da10df3027f", "name": "tempest-ServerActionsTestJSON-server-1678320742", "status": "ACTIVE", "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "user_id": "b4745812b7eb47908ded25b1eb7c7328", "metadata": {}, "hostId": "ec24e2cce3283e55f968b7a36269e7bf355c27e7ccc9833dd73aa657", "image": {"id": "e9091bfb-b431-47c9-a284-79372046956b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e9091bfb-b431-47c9-a284-79372046956b"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:08:38Z", "updated": "2025-12-05T02:10:15Z", "addresses": {"tempest-ServerActionsTestJSON-2010351729-network": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:16:81:87"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:16:81:87"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/59e35a32-9023-4e49-be56-9da10df3027f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1953156472", "OS-SRV-USG:launched_at": "2025-12-05T02:08:56.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1840647419"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.308 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f used request id req-f3ceec08-ee75-4990-8404-30178aea2e92 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
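[Editor's note] The REQ/RESP pair above is python-novaclient fetching one server. A sketch of the same GET through a keystone session; the auth URL and credentials are placeholders (not from the log), the UUID is the one being polled:

```python
# Sketch: reproduce the GET that novaclient logs as a curl command above.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(
    auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed URL
    username='ceilometer', password='secret',                    # placeholders
    project_name='service', user_domain_name='Default',
    project_domain_name='Default',
)
nova = client.Client('2.1', session=session.Session(auth=auth))
server = nova.servers.get('59e35a32-9023-4e49-be56-9da10df3027f')
print(server.name, server.status)  # tempest-ServerActionsTestJSON-... ACTIVE
```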
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '59e35a32-9023-4e49-be56-9da10df3027f', 'name': 'tempest-ServerActionsTestJSON-server-1678320742', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e9091bfb-b431-47c9-a284-79372046956b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'user_id': 'b4745812b7eb47908ded25b1eb7c7328', 'hostId': 'ec24e2cce3283e55f968b7a36269e7bf355c27e7ccc9833dd73aa657', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:10:39.312183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:10:39.315830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.340 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.340 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:10:39.342493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>]
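[Editor's note] PollsterPermanentError is how a pollster opts a resource out for good: raising it from get_samples() makes the manager stop offering that resource to the pollster, which is what the ERROR line records. A skeleton sketch (the real pollster computes rates from inspector data that is absent here):

```python
# Sketch of a pollster permanently skipping resources, as in the ERROR above.
from ceilometer.polling import plugin_base

class IncomingBytesRatePollster(plugin_base.PollsterBase):
    @property
    def default_discovery(self):
        return 'local_instances'

    def get_samples(self, manager, cache, resources):
        # The libvirt inspector provides no rate data for these instances,
        # so flag them as permanently unpollable for this meter.
        raise plugin_base.PollsterPermanentError(resources)
```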
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:10:39.344703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:10:39.347397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.409 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.410 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
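[Editor's note] The two disk.device.read.bytes samples per instance (one per disk device) ultimately come from libvirt block statistics. A sketch reading the same counters for the instance named in the log; the device names are typical guesses, not confirmed by the log:

```python
# Sketch: read per-device block counters via libvirt, as the inspector does.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000008')   # OS-EXT-SRV-ATTR:instance_name
for dev in ('vda', 'sda'):                     # assumed device names
    try:
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'read bytes:', rd_bytes)    # e.g. 23775232 in the log
    except libvirt.libvirtError:
        pass  # device not present on this domain
conn.close()
```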
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.latency volume: 1626049265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:10:39.412451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.414 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.latency volume: 2427288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:10:39.416365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.417 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.418 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.420 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.420 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.421 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:10:39.420095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.425 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:10:39.424374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:10:39.428139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.473 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
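[Editor's note] The power.state volume of 1 is libvirt's "running" state code, matching "OS-EXT-STS:power_state": 1 in the Nova API response earlier. A sketch of reading it directly:

```python
# Sketch: read the domain power state behind the power.state sample.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000008')
state = dom.info()[0]                       # [state, maxMem, memory, vcpus, cpuTime]
print(state == libvirt.VIR_DOMAIN_RUNNING)  # True for the ACTIVE instance above
conn.close()
```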
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:10:39.475446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.479 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:10:39.478550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:10:39.481709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.489 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 59e35a32-9023-4e49-be56-9da10df3027f / tapa240e2ef-17 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.489 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
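The entries above trace the fixed cycle the agent repeats for every meter: run the discovery method, check whether the pollster's source needs workload coordination, stamp a heartbeat, inspect the guest, and turn the stats into samples. A minimal sketch of that control flow, using hypothetical Pollster/discover stand-ins rather than ceilometer's real AgentManager:

    import datetime

    class Pollster:
        # Hypothetical stand-in for a ceilometer pollster; illustration only.
        def __init__(self, name, coordination_group=None):
            self.name = name
            self.coordination_group = coordination_group

        def get_sample(self, resource):
            return 0  # a real pollster would inspect the hypervisor here

    def run_polling_cycle(pollsters, discover, heartbeats):
        # Mirrors the discovery -> coordination check -> heartbeat ->
        # sample -> "Finished polling" sequence seen in the journal.
        for pollster in pollsters:
            resources = discover("local_instances")
            if pollster.coordination_group is None:
                pass  # not configured in a source that requires coordination
            heartbeats[pollster.name] = datetime.datetime.now(datetime.timezone.utc)
            for resource in resources:
                print(f"{resource}/{pollster.name} volume: {pollster.get_sample(resource)}")
            print(f"Finished polling pollster {pollster.name}")

    heartbeats = {}
    run_polling_cycle(
        [Pollster("network.incoming.packets")],
        lambda method: ["59e35a32-9023-4e49-be56-9da10df3027f"],
        heartbeats,
    )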
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:10:39.491618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.492 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:10:39.494632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.495 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.496 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
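Two actors show up in each heartbeat pair: worker 14 logs "Pollster heartbeat update" as the meter runs, and 12 logs "Updated heartbeat for ..." with the recorded timestamp. A rough re-creation of that split with a lock-guarded map; the class and method names here are invented, not ceilometer's _update_status:

    import threading
    import datetime

    class HeartbeatRegistry:
        # Workers stamp each pollster as it runs; a separate status
        # path reads the stamps back out and reports them.
        def __init__(self):
            self._lock = threading.Lock()
            self._stamps = {}

        def beat(self, pollster_name):                 # worker side
            with self._lock:
                self._stamps[pollster_name] = datetime.datetime.now(datetime.timezone.utc)

        def flush(self):                               # status side
            with self._lock:
                for name, stamp in self._stamps.items():
                    print(f"Updated heartbeat for {name} ({stamp.isoformat()})")

    registry = HeartbeatRegistry()
    registry.beat("disk.device.allocation")
    registry.flush()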
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:10:39.497832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.499 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:10:39.501218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.505 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:10:39.504724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:10:39.507632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>]
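That ERROR is terminal by design: the libvirt inspector cannot supply rate data on this host, so the manager blacklists the instance for network.outgoing.bytes.rate and later intervals skip it instead of failing again. A sketch of the pattern; the exception class is defined locally for the example rather than imported from ceilometer:

    class PollsterPermanentError(Exception):
        # Raised when a resource can never yield data for a meter.
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_with_blacklist(pollster_name, resources, blacklist, get_samples):
        # Blacklisted resources are skipped on every later interval:
        # "Prevent pollster ... from polling ... anymore!"
        candidates = [r for r in resources if r not in blacklist]
        if not candidates:
            return []
        try:
            return get_samples(candidates)
        except PollsterPermanentError as err:
            blacklist.update(err.fail_res_list)
            print(f"Prevent pollster {pollster_name} from polling {err.fail_res_list} anymore!")
            return []

    blacklist = set()

    def get_samples(resources):
        raise PollsterPermanentError(resources)  # inspector has no rate data

    server = "tempest-ServerActionsTestJSON-server-1678320742"
    poll_with_blacklist("network.outgoing.bytes.rate", [server], blacklist, get_samples)  # blacklists
    poll_with_blacklist("network.outgoing.bytes.rate", [server], blacklist, get_samples)  # skipped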
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 59e35a32-9023-4e49-be56-9da10df3027f: ceilometer.compute.pollsters.NoVolumeException
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
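memory.usage came back "Unavailable" because the guest exposes no memory statistic to the inspector, so the pollster warns and drops the sample rather than publishing a fake zero; the polling cycle still finishes cleanly. A sketch of that guard, with the exception and helper defined locally:

    class NoVolumeException(Exception):
        pass

    def stats_to_sample(instance_id, meter, volume):
        # "volume: Unavailable" -> refuse to build a sample at all.
        if volume is None:
            raise NoVolumeException()
        return {"resource_id": instance_id, "meter": meter, "volume": volume}

    try:
        stats_to_sample("59e35a32-9023-4e49-be56-9da10df3027f", "memory.usage", None)
    except NoVolumeException:
        print("memory.usage statistic is not available for instance "
              "59e35a32-9023-4e49-be56-9da10df3027f")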
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:10:39.510813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceph-mon[192914]: pgmap v1848: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 713 KiB/s rd, 1.2 KiB/s wr, 48 op/s
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.518 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:10:39.517784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:10:39.524427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:10:39.526171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
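The .delta meters report the change since the previous poll; the earlier "No delta meter predecessor" entry for tapa240e2ef-17 is the first-poll case, where there is no prior reading to subtract. A sketch of the per-resource cache behind that; returning None on the first observation is an assumption of this example, not necessarily what the inspector does:

    _previous = {}

    def delta(resource_id, meter, value):
        # Change since the last poll; None on the first observation,
        # which is the "No delta meter predecessor" case in the log.
        key = (resource_id, meter)
        prev = _previous.get(key)
        _previous[key] = value
        if prev is None:
            return None
        return value - prev

    print(delta("tapa240e2ef-17", "network.incoming.bytes", 90))  # None: no predecessor yet
    print(delta("tapa240e2ef-17", "network.incoming.bytes", 90))  # 0: unchanged since last poll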
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/cpu volume: 23500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:10:39.527591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:10:39.528792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:10:39.530999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:10:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 340 B/s wr, 4 op/s
Dec 05 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.257 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900625.2545006, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.258 349552 INFO nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Stopped (Lifecycle Event)
Dec 05 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.277 349552 DEBUG nova.compute.manager [None req-4d4ac99f-cb50-4b9f-aff1-457682faf163 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
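Interleaved with the telemetry burst, nova-compute turns a libvirt domain state change into a lifecycle event carrying a timestamp, the instance UUID, and the transition, and the manager reacts by re-reading the power state. A toy emit/handle pair shaped like those three journal lines; these are illustrative classes, not nova's:

    import time

    class LifecycleEvent:
        # Carries what the log line shows: a timestamp, an instance
        # uuid, and the transition name (e.g. "Stopped").
        def __init__(self, uuid, transition):
            self.timestamp = time.time()
            self.uuid = uuid
            self.transition = transition

        def __repr__(self):
            return f"<LifecycleEvent: {self.timestamp}, {self.uuid} => {self.transition}>"

    def emit_event(event, handlers):
        print(f"Emitting event {event!r}")
        for handle in handlers:
            handle(event)

    def handle_lifecycle(event):
        print(f"[instance: {event.uuid}] VM {event.transition} (Lifecycle Event)")
        # the manager would now re-read the power state ("Checking state")

    emit_event(LifecycleEvent("3391e1ba-0e6b-4113-b402-027e997b3cb9", "Stopped"),
               [handle_lifecycle])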
Dec 05 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.312 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:40 compute-0 ceph-mon[192914]: pgmap v1849: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 340 B/s wr, 4 op/s
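ceph-mon repeats a one-line pgmap summary every tick. When no structured endpoint is handy, the line is regular enough to scrape; a regex sketch keyed to the exact layout seen here, so treat the pattern as fragile and assumed:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v1849: 321 pgs: 321 active+clean; 136 MiB data, "
            "317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 340 B/s wr, 4 op/s")
    m = PGMAP.search(line)  # search() also works on the full journal line
    if m:
        print(m.group("version"), m.group("pgs"), m.group("states"), m.group("avail"))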
Dec 05 02:10:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:40.656 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.925 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:41 compute-0 podman[446056]: 2025-12-05 02:10:41.704240626 +0000 UTC m=+0.107968203 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:10:41 compute-0 podman[446057]: 2025-12-05 02:10:41.716271344 +0000 UTC m=+0.128125719 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:10:41 compute-0 podman[446059]: 2025-12-05 02:10:41.724436044 +0000 UTC m=+0.108235491 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:10:41 compute-0 podman[446058]: 2025-12-05 02:10:41.758610513 +0000 UTC m=+0.156200468 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
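Each podman health_status event above is a single journal record whose parenthesized key=value list carries the verdict. A scraper for the three fields of interest, assuming the image/name/health_status ordering these particular records use:

    import re

    HEALTH = re.compile(
        r"container health_status (?P<cid>[0-9a-f]+) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,)]+)"
    )

    line = ("container health_status 4b650b296b7a (image=quay.io/podified-antelope-"
            "centos9/openstack-multipathd:current-podified, name=multipathd, "
            "health_status=healthy, health_failing_streak=0)")
    m = HEALTH.search(line)
    if m:
        print(m.group("name"), m.group("status"))  # multipathd healthy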
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.775 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.802 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.803 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
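The info_cache payload nova just wrote is plain JSON, so pulling the fixed and floating addresses for a VIF is a few dict hops. A sketch over a trimmed copy of the structure exactly as it appears in the entry above:

    import json

    vif_json = '''[
      {
        "id": "a240e2ef-1773-4509-ac04-eae1f5d36e08",
        "devname": "tapa240e2ef-17",
        "network": {
          "subnets": [
            {
              "ips": [
                {
                  "address": "10.100.0.10",
                  "floating_ips": [{"address": "192.168.122.206"}]
                }
              ]
            }
          ]
        }
      }
    ]'''

    for vif in json.loads(vif_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"], floating)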
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.804 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.805 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.806 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.834 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.835 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.836 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
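The acquired/released pair around "compute_resources" is oslo.concurrency's standard lock instrumentation: measure the wait before acquisition and the hold until release. A bare-bones re-creation with threading.Lock, an approximation rather than lockutils itself:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name, target):
        # Reports wait and hold durations in the style of the
        # oslo_concurrency.lockutils lines above.
        lock = _locks.setdefault(name, threading.Lock())
        start = time.monotonic()
        with lock:
            waited = time.monotonic() - start
            print(f'Lock "{name}" acquired by "{target}" :: waited {waited:.3f}s')
            held_from = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - held_from
                print(f'Lock "{name}" "released" by "{target}" :: held {held:.3f}s')

    with timed_lock("compute_resources", "clean_compute_node_cache"):
        pass  # prune cached compute-node records here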
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.838 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.839 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.130 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Dec 05 02:10:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:10:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151506995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.343 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
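With Ceph-backed storage the resource tracker sizes free disk by shelling out to the same `ceph df --format=json` command and reading the cluster totals. A sketch of that call; `total_avail_bytes` matches current ceph JSON output but treat the exact key as an assumption:

    import json
    import subprocess

    def ceph_free_gb(conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same command the log shows nova running via oslo.concurrency.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        stats = json.loads(out)["stats"]
        # "total_avail_bytes" is the field in current ceph releases;
        # verify against older clusters before relying on it.
        return stats["total_avail_bytes"] / (1024 ** 3)

    # ceph_free_gb() would report roughly 60 GiB on the cluster in this log.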
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.447 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.449 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.926 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.927 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3901MB free_disk=59.94267654418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.928 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.929 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:43 compute-0 ceph-mon[192914]: pgmap v1850: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Dec 05 02:10:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2151506995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.369 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 59e35a32-9023-4e49-be56-9da10df3027f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.370 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.370 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.569 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:10:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:10:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364089829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.081 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.093 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.111 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.135 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.135 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/364089829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.397 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.398 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:45 compute-0 ceph-mon[192914]: pgmap v1851: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:45 compute-0 nova_compute[349548]: 2025-12-05 02:10:45.314 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:10:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:10:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:10:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:10:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:10:46 compute-0 ovn_controller[89286]: 2025-12-05T02:10:46Z|00125|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.878 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:47 compute-0 ceph-mon[192914]: pgmap v1852: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:49 compute-0 nova_compute[349548]: 2025-12-05 02:10:49.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:10:49 compute-0 ceph-mon[192914]: pgmap v1853: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:50 compute-0 nova_compute[349548]: 2025-12-05 02:10:50.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:50 compute-0 nova_compute[349548]: 2025-12-05 02:10:50.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:51 compute-0 ceph-mon[192914]: pgmap v1854: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:10:51 compute-0 ovn_controller[89286]: 2025-12-05T02:10:51Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:81:87 10.100.0.10
Dec 05 02:10:51 compute-0 nova_compute[349548]: 2025-12-05 02:10:51.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:52 compute-0 nova_compute[349548]: 2025-12-05 02:10:52.138 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 355 KiB/s rd, 11 KiB/s wr, 32 op/s
Dec 05 02:10:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:53 compute-0 ceph-mon[192914]: pgmap v1855: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 355 KiB/s rd, 11 KiB/s wr, 32 op/s
Dec 05 02:10:54 compute-0 sudo[446184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:54 compute-0 sudo[446184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec 05 02:10:54 compute-0 sudo[446184]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:54 compute-0 sudo[446209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:10:54 compute-0 sudo[446209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:54 compute-0 sudo[446209]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:54 compute-0 sudo[446234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:54 compute-0 sudo[446234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:54 compute-0 sudo[446234]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:54 compute-0 sudo[446259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 02:10:54 compute-0 sudo[446259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:55 compute-0 podman[446325]: 2025-12-05 02:10:55.311169822 +0000 UTC m=+0.138953954 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 02:10:55 compute-0 podman[446326]: 2025-12-05 02:10:55.314530416 +0000 UTC m=+0.138034118 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:10:55 compute-0 nova_compute[349548]: 2025-12-05 02:10:55.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:55 compute-0 ceph-mon[192914]: pgmap v1856: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec 05 02:10:55 compute-0 podman[446394]: 2025-12-05 02:10:55.536752247 +0000 UTC m=+0.114502157 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:10:55 compute-0 podman[446394]: 2025-12-05 02:10:55.655704508 +0000 UTC m=+0.233454418 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:10:56 compute-0 ovn_controller[89286]: 2025-12-05T02:10:56Z|00126|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:10:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 11 KiB/s wr, 42 op/s
Dec 05 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.206 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:10:56 compute-0 nova_compute[349548]: 2025-12-05 02:10:56.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:56 compute-0 nova_compute[349548]: 2025-12-05 02:10:56.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:56 compute-0 sudo[446259]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:10:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:10:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:57 compute-0 sudo[446541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:57 compute-0 sudo[446541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:57 compute-0 sudo[446541]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:57 compute-0 sudo[446566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:10:57 compute-0 nova_compute[349548]: 2025-12-05 02:10:57.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:10:57 compute-0 sudo[446566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:57 compute-0 sudo[446566]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:57 compute-0 sudo[446591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:57 compute-0 sudo[446591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:57 compute-0 sudo[446591]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:57 compute-0 ceph-mon[192914]: pgmap v1857: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 11 KiB/s wr, 42 op/s
Dec 05 02:10:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:57 compute-0 sudo[446616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:10:57 compute-0 sudo[446616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:58 compute-0 sudo[446616]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7f1bb807-9954-47a4-a714-30dc91a93359 does not exist
Dec 05 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7664a778-cc9c-49b4-bd84-cd7885c21a05 does not exist
Dec 05 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 906892c3-4c74-4b5f-bfba-2f94fd8bd5db does not exist
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:10:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec 05 02:10:58 compute-0 sudo[446672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:58 compute-0 sudo[446672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:58 compute-0 sudo[446672]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:58 compute-0 sudo[446697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:10:58 compute-0 sudo[446697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:58 compute-0 sudo[446697]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:10:58 compute-0 sudo[446722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:10:58 compute-0 sudo[446722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:58 compute-0 sudo[446722]: pam_unix(sudo:session): session closed for user root
Dec 05 02:10:58 compute-0 sudo[446747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:10:58 compute-0 sudo[446747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.220477197 +0000 UTC m=+0.090545724 container create aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.18428521 +0000 UTC m=+0.054353787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:10:59 compute-0 systemd[1]: Started libpod-conmon-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope.
Dec 05 02:10:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:10:59 compute-0 ceph-mon[192914]: pgmap v1858: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.373744092 +0000 UTC m=+0.243812659 container init aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.391513501 +0000 UTC m=+0.261582018 container start aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.397757756 +0000 UTC m=+0.267826323 container attach aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:10:59 compute-0 quizzical_dhawan[446826]: 167 167
Dec 05 02:10:59 compute-0 systemd[1]: libpod-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope: Deactivated successfully.
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.40394546 +0000 UTC m=+0.274013977 container died aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 02:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba47947482d24e67feee9b152caf25bdfa716a3ab21debe23ded31f028a96000-merged.mount: Deactivated successfully.
Dec 05 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.487165727 +0000 UTC m=+0.357234244 container remove aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:10:59 compute-0 systemd[1]: libpod-conmon-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope: Deactivated successfully.
Dec 05 02:10:59 compute-0 podman[158197]: time="2025-12-05T02:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.7878252 +0000 UTC m=+0.102706745 container create 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec 05 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.752603061 +0000 UTC m=+0.067484656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:10:59 compute-0 systemd[1]: Started libpod-conmon-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope.
Dec 05 02:10:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.932291088 +0000 UTC m=+0.247172643 container init 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.967744264 +0000 UTC m=+0.282625799 container start 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.97404063 +0000 UTC m=+0.288922185 container attach 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:10:59 compute-0 podman[446861]: 2025-12-05 02:10:59.980938774 +0000 UTC m=+0.124611481 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:10:59 compute-0 podman[446860]: 2025-12-05 02:10:59.998663132 +0000 UTC m=+0.145963601 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:11:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec 05 02:11:00 compute-0 nova_compute[349548]: 2025-12-05 02:11:00.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:01 compute-0 beautiful_banzai[446874]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:11:01 compute-0 beautiful_banzai[446874]: --> relative data size: 1.0
Dec 05 02:11:01 compute-0 beautiful_banzai[446874]: --> All data devices are unavailable
Dec 05 02:11:01 compute-0 systemd[1]: libpod-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Deactivated successfully.
Dec 05 02:11:01 compute-0 systemd[1]: libpod-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Consumed 1.269s CPU time.
Dec 05 02:11:01 compute-0 podman[446848]: 2025-12-05 02:11:01.313682156 +0000 UTC m=+1.628563701 container died 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95-merged.mount: Deactivated successfully.
Dec 05 02:11:01 compute-0 ceph-mon[192914]: pgmap v1859: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec 05 02:11:01 compute-0 podman[446848]: 2025-12-05 02:11:01.408220211 +0000 UTC m=+1.723101736 container remove 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:11:01 compute-0 systemd[1]: libpod-conmon-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Deactivated successfully.
Dec 05 02:11:01 compute-0 sudo[446747]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:01 compute-0 sudo[446936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:11:01 compute-0 sudo[446936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:01 compute-0 sudo[446936]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:01 compute-0 sudo[446961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:11:01 compute-0 sudo[446961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:01 compute-0 sudo[446961]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:01 compute-0 sudo[446986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:11:01 compute-0 sudo[446986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:01 compute-0 sudo[446986]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:02 compute-0 sudo[447014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:11:02 compute-0 sudo[447014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:02 compute-0 podman[447010]: 2025-12-05 02:11:02.052488985 +0000 UTC m=+0.138059118 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, architecture=x86_64, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec 05 02:11:02 compute-0 nova_compute[349548]: 2025-12-05 02:11:02.146 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.626420465 +0000 UTC m=+0.092445898 container create c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.593551042 +0000 UTC m=+0.059576535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:11:02 compute-0 systemd[1]: Started libpod-conmon-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope.
Dec 05 02:11:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.771612903 +0000 UTC m=+0.237638396 container init c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.788825766 +0000 UTC m=+0.254851189 container start c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.794609549 +0000 UTC m=+0.260634992 container attach c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:11:02 compute-0 keen_brahmagupta[447107]: 167 167
Dec 05 02:11:02 compute-0 systemd[1]: libpod-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope: Deactivated successfully.
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.800296988 +0000 UTC m=+0.266322441 container died c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a175741fca3584742b0c518b33b753782242c1e2b8754409801493a17c84ba-merged.mount: Deactivated successfully.
Dec 05 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.884255236 +0000 UTC m=+0.350280679 container remove c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:11:02 compute-0 systemd[1]: libpod-conmon-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope: Deactivated successfully.
Dec 05 02:11:03 compute-0 nova_compute[349548]: 2025-12-05 02:11:03.155 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.176435223 +0000 UTC m=+0.104127006 container create baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.131462529 +0000 UTC m=+0.059154372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:11:03 compute-0 systemd[1]: Started libpod-conmon-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope.
Dec 05 02:11:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
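[Annotation] The four xfs warnings above are the kernel noting that these mounts lack large-timestamp support, so inode timestamps saturate at 0x7fffffff seconds after the Unix epoch. A quick conversion shows where that limit lands (plain Python, illustrative):

#!/usr/bin/env python3
# Sketch: convert the 0x7fffffff limit quoted by the kernel message
# into a calendar date -- the classic year-2038 cutoff.
from datetime import datetime, timezone

LIMIT = 0x7FFFFFFF  # value quoted in the xfs warnings above
print(datetime.fromtimestamp(LIMIT, tz=timezone.utc).isoformat())
# -> 2038-01-19T03:14:07+00:00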
Dec 05 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.359286167 +0000 UTC m=+0.286977990 container init baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.379284779 +0000 UTC m=+0.306976572 container start baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.385720169 +0000 UTC m=+0.313412012 container attach baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 02:11:03 compute-0 ceph-mon[192914]: pgmap v1860: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
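[Annotation] The pgmap DBG lines recur every couple of seconds with the same shape (version, pg states, capacity, client I/O rates). A small regex sketch for pulling the fields out of journal text; the pattern mirrors the lines seen here and is not a stable Ceph output guarantee:

#!/usr/bin/env python3
# Sketch: parse the recurring pgmap fields out of a journal line.
import re

LINE = ("pgmap v1860: 321 pgs: 321 active+clean; 138 MiB data, "
        "317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, "
        "22 KiB/s wr, 44 op/s")

PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?"
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used.*?"
    r"(?P<rd>[\d.]+ \w+/s) rd, (?P<wr>[\d.]+ \w+/s) wr, (?P<ops>\d+) op/s"
)

m = PGMAP_RE.search(LINE)
if m:
    print(m.groupdict())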
Dec 05 02:11:03 compute-0 ovn_controller[89286]: 2025-12-05T02:11:03Z|00127|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec 05 02:11:04 compute-0 nova_compute[349548]: 2025-12-05 02:11:04.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:04 compute-0 gifted_bose[447147]: {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     "0": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "devices": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "/dev/loop3"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             ],
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_name": "ceph_lv0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_size": "21470642176",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "name": "ceph_lv0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "tags": {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_name": "ceph",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.crush_device_class": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.encrypted": "0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_id": "0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.vdo": "0"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             },
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "vg_name": "ceph_vg0"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         }
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     ],
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     "1": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "devices": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "/dev/loop4"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             ],
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_name": "ceph_lv1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_size": "21470642176",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "name": "ceph_lv1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "tags": {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_name": "ceph",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.crush_device_class": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.encrypted": "0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_id": "1",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.vdo": "0"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             },
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "vg_name": "ceph_vg1"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         }
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     ],
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     "2": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "devices": [
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "/dev/loop5"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             ],
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_name": "ceph_lv2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_size": "21470642176",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "name": "ceph_lv2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "tags": {
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.cluster_name": "ceph",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.crush_device_class": "",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.encrypted": "0",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osd_id": "2",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:                 "ceph.vdo": "0"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             },
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "type": "block",
Dec 05 02:11:04 compute-0 gifted_bose[447147]:             "vg_name": "ceph_vg2"
Dec 05 02:11:04 compute-0 gifted_bose[447147]:         }
Dec 05 02:11:04 compute-0 gifted_bose[447147]:     ]
Dec 05 02:11:04 compute-0 gifted_bose[447147]: }
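[Annotation] The JSON block above is the result of the `ceph-volume ... lvm list --format json` call started at 02:11:02: a map of OSD id to LV records carrying the backing device and the ceph.* LV tags. A minimal reduction sketch (data inlined and truncated from the log purely for illustration):

#!/usr/bin/env python3
# Sketch: reduce `ceph-volume lvm list --format json` output to one
# line per OSD. The JSON below is a truncated copy of the output above.
import json

lvm_list = json.loads("""
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
         "tags": {"ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}}]
}
""")

for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={lv['tags'].get('ceph.osd_fsid', '?')}")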
Dec 05 02:11:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 11 KiB/s wr, 11 op/s
Dec 05 02:11:04 compute-0 systemd[1]: libpod-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope: Deactivated successfully.
Dec 05 02:11:04 compute-0 podman[447130]: 2025-12-05 02:11:04.209957859 +0000 UTC m=+1.137649622 container died baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718-merged.mount: Deactivated successfully.
Dec 05 02:11:04 compute-0 podman[447130]: 2025-12-05 02:11:04.296653244 +0000 UTC m=+1.224345007 container remove baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:11:04 compute-0 systemd[1]: libpod-conmon-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope: Deactivated successfully.
Dec 05 02:11:04 compute-0 sudo[447014]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:04 compute-0 sudo[447167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:11:04 compute-0 sudo[447167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:04 compute-0 sudo[447167]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:04 compute-0 sudo[447192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:11:04 compute-0 sudo[447192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:04 compute-0 sudo[447192]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:04 compute-0 sudo[447217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:11:04 compute-0 sudo[447217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:04 compute-0 sudo[447217]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:04 compute-0 sudo[447242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:11:04 compute-0 sudo[447242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:05 compute-0 nova_compute[349548]: 2025-12-05 02:11:05.328 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:05 compute-0 ceph-mon[192914]: pgmap v1861: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 11 KiB/s wr, 11 op/s
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.511280678 +0000 UTC m=+0.105338290 container create 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.453272839 +0000 UTC m=+0.047330521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:11:05 compute-0 systemd[1]: Started libpod-conmon-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope.
Dec 05 02:11:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.657611088 +0000 UTC m=+0.251668700 container init 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.676248901 +0000 UTC m=+0.270306533 container start 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.683664359 +0000 UTC m=+0.277721971 container attach 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:11:05 compute-0 compassionate_pasteur[447320]: 167 167
Dec 05 02:11:05 compute-0 systemd[1]: libpod-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope: Deactivated successfully.
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.690291815 +0000 UTC m=+0.284349457 container died 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65fc6282d4fd209e4892a2b78507b078f0874069acf1daa5cf880bffcdf93bb-merged.mount: Deactivated successfully.
Dec 05 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.784167202 +0000 UTC m=+0.378224844 container remove 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:11:05 compute-0 systemd[1]: libpod-conmon-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope: Deactivated successfully.
Dec 05 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.081024029 +0000 UTC m=+0.086358826 container create 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.060165554 +0000 UTC m=+0.065500371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:11:06 compute-0 systemd[1]: Started libpod-conmon-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope.
Dec 05 02:11:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 9 op/s
Dec 05 02:11:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.247095324 +0000 UTC m=+0.252430211 container init 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.259389939 +0000 UTC m=+0.264724776 container start 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.268812014 +0000 UTC m=+0.274146891 container attach 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:11:07 compute-0 nova_compute[349548]: 2025-12-05 02:11:07.154 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]: {
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_id": 0,
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "type": "bluestore"
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     },
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_id": 1,
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "type": "bluestore"
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     },
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_id": 2,
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:         "type": "bluestore"
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]:     }
Dec 05 02:11:07 compute-0 unruffled_neumann[447360]: }
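[Annotation] The `ceph-volume ... raw list --format json` call from 02:11:04 reports the same three bluestore OSDs, keyed by osd_uuid instead of OSD id and with the device given as the mapper path. A small cross-check sketch against the cluster fsid (values inlined from the log; the check itself is illustrative, not a cephadm step):

#!/usr/bin/env python3
# Sketch: verify that each OSD from `ceph-volume raw list` belongs to
# the expected cluster fsid. Data trimmed from the log output above.
import json

raw_list = json.loads("""
{
  "8c4de221-4fda-4bb1-b794-fc4329742186": {
    "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0, "type": "bluestore"
  }
}
""")

CLUSTER_FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
for osd_uuid, osd in raw_list.items():
    ok = osd["ceph_fsid"] == CLUSTER_FSID
    print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}: "
          f"{'fsid ok' if ok else 'fsid MISMATCH'}")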
Dec 05 02:11:07 compute-0 systemd[1]: libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Deactivated successfully.
Dec 05 02:11:07 compute-0 ceph-mon[192914]: pgmap v1862: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 9 op/s
Dec 05 02:11:07 compute-0 systemd[1]: libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Consumed 1.168s CPU time.
Dec 05 02:11:07 compute-0 conmon[447360]: conmon 9fd97c4e20ad393be037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope/container/memory.events
Dec 05 02:11:07 compute-0 podman[447344]: 2025-12-05 02:11:07.431102407 +0000 UTC m=+1.436437214 container died 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23-merged.mount: Deactivated successfully.
Dec 05 02:11:07 compute-0 podman[447344]: 2025-12-05 02:11:07.563843515 +0000 UTC m=+1.569178342 container remove 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:11:07 compute-0 systemd[1]: libpod-conmon-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Deactivated successfully.
Dec 05 02:11:07 compute-0 sudo[447242]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:11:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:11:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:11:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:11:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15dcc09c-f284-4bee-a320-a3596a46e16b does not exist
Dec 05 02:11:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 481959c2-9325-4171-9f64-9a4d58d3ef54 does not exist
Dec 05 02:11:07 compute-0 sudo[447403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:11:07 compute-0 sudo[447403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:07 compute-0 sudo[447403]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:07 compute-0 sudo[447428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:11:07 compute-0 sudo[447428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:11:07 compute-0 sudo[447428]: pam_unix(sudo:session): session closed for user root
Dec 05 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 05 02:11:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 7 op/s
Dec 05 02:11:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:11:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:11:08 compute-0 ceph-mon[192914]: pgmap v1863: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 7 op/s
Dec 05 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 05 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 05 02:11:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 05 02:11:09 compute-0 ceph-mon[192914]: osdmap e138: 3 total, 3 up, 3 in
Dec 05 02:11:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 3.6 KiB/s wr, 6 op/s
Dec 05 02:11:10 compute-0 nova_compute[349548]: 2025-12-05 02:11:10.333 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:10 compute-0 ceph-mon[192914]: pgmap v1865: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 3.6 KiB/s wr, 6 op/s
Dec 05 02:11:11 compute-0 sshd-session[447453]: Invalid user orangepi from 123.253.22.45 port 56962
Dec 05 02:11:11 compute-0 sshd-session[447453]: Connection closed by invalid user orangepi 123.253.22.45 port 56962 [preauth]
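[Annotation] One failed preauth probe from an unknown user is logged above. A sketch for tallying such attempts by user and source address from journal text (a hypothetical helper, not part of any tooling on this host):

#!/usr/bin/env python3
# Sketch: count sshd "Invalid user" probes by (user, source IP).
import re
from collections import Counter

INVALID_RE = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

lines = [
    "Dec 05 02:11:11 compute-0 sshd-session[447453]: "
    "Invalid user orangepi from 123.253.22.45 port 56962",
]

hits = Counter()
for line in lines:
    m = INVALID_RE.search(line)
    if m:
        hits[(m.group(1), m.group(2))] += 1

for (user, ip), n in hits.most_common():
    print(f"{ip} tried user {user!r} {n} time(s)")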
Dec 05 02:11:12 compute-0 nova_compute[349548]: 2025-12-05 02:11:12.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Dec 05 02:11:12 compute-0 podman[447458]: 2025-12-05 02:11:12.722248811 +0000 UTC m=+0.109387893 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:11:12 compute-0 podman[447456]: 2025-12-05 02:11:12.72968133 +0000 UTC m=+0.129960401 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:11:12 compute-0 podman[447455]: 2025-12-05 02:11:12.735358109 +0000 UTC m=+0.136608608 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 02:11:12 compute-0 podman[447457]: 2025-12-05 02:11:12.768437258 +0000 UTC m=+0.164204273 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
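[annotation] The three records above are podman health probes firing for the node_exporter, multipathd and ovn_controller containers; each container's config_data carries a 'test' command that exits 0 when healthy. A minimal sketch of driving the same probes by hand, assuming only the container names from the log (podman's `healthcheck run` subcommand is real; the wrapper loop is illustrative):

    import subprocess

    # Container names taken from the health_status records above.
    CONTAINERS = ["node_exporter", "multipathd", "ovn_controller"]

    def probe(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured
        # healthcheck test and exits 0 when it reports healthy.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    for name in CONTAINERS:
        print(name, "healthy" if probe(name) else "unhealthy")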
Dec 05 02:11:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:13 compute-0 ceph-mon[192914]: pgmap v1866: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Dec 05 02:11:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 05 02:11:15 compute-0 ceph-mon[192914]: pgmap v1867: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 05 02:11:15 compute-0 nova_compute[349548]: 2025-12-05 02:11:15.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:11:16
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr']
Dec 05 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.591 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.592 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.611 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.736 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.737 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
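[annotation] The Acquiring/acquired/released triplets above are oslo.concurrency's lock tracing around the instance build and the resource-tracker claim. A minimal sketch of the same pattern, assuming an invented function body (only the lock name and instance UUID come from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim(instance_uuid):
        # Serialized: only one claim may mutate this host's resource
        # view at a time, mirroring ResourceTracker.instance_claim.
        print("claiming resources for", instance_uuid)

    instance_claim("292fd084-0808-4a80-adc1-6ab1f28e188a")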
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.747 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.747 349552 INFO nova.compute.claims [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.885 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.160 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:11:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918919140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:17 compute-0 ceph-mon[192914]: pgmap v1868: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.665 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.780s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
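[annotation] The "Running cmd" / "returned: 0" pair above is nova shelling out for pool stats via oslo.concurrency's processutils. A sketch of the same call; the JSON keys under 'stats' are my assumption about `ceph df --format=json` output, not something shown in this log:

    import json
    from oslo_concurrency import processutils

    # Same invocation as in the log; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        "ceph", "df", "--format=json", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out).get("stats", {})  # assumed key
    print(stats.get("total_avail_bytes"))     # assumed key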
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.679 349552 DEBUG nova.compute.provider_tree [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.705 349552 DEBUG nova.scheduler.client.report [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
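[annotation] The inventory dict above fixes this node's schedulable capacity. Assuming placement's usual formula, capacity = (total - reserved) * allocation_ratio, it works out to 32 VCPU, 7168 MB of RAM and 52.2 GB of disk:

    # Figures copied from the inventory record above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2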
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.756 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.758 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.821 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.823 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.847 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.870 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.991 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.994 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.995 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating image(s)
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.049 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.120 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.188 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.200 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "ce40e952b4771285622230948599d16442d55b06" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.202 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.211 349552 DEBUG nova.policy [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
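[annotation] The policy line above shows a member/reader token being denied network:attach_external_network. A sketch of the same kind of check with oslo.policy; the "role:admin" rule is illustrative, not nova's actual default:

    from oslo_config import cfg
    from oslo_policy import policy

    cfg.CONF([], project="sketch")  # initialize an empty config
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))  # assumed rule
    creds = {"roles": ["reader", "member"],
             "project_id": "b01709a3378347e1a3f25eeb2b8b1bca"}
    # enforce() returns False rather than raising unless do_raise=True.
    print(enforcer.enforce("network:attach_external_network", {}, creds))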
Dec 05 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.537 349552 DEBUG nova.virt.libvirt.imagebackend [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 05 02:11:18 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3918919140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:18 compute-0 ceph-mon[192914]: pgmap v1869: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.261 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Successfully created port: 706f9405-4061-481e-a252-9b14f4534a4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.615 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.709 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.711 349552 DEBUG nova.virt.images [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.713 349552 DEBUG nova.privsep.utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.714 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.979 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.988 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.105 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.108 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
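[annotation] The commands above are nova's image-cache fetch path: probe the downloaded .part file with qemu-img info, convert qcow2 to raw, probe the result, then release the cache lock. A condensed sketch of the two qemu-img steps, without the prlimit resource-capping wrapper the log shows:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06"
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base + ".part", "--force-share", "--output=json"]))
    if info["format"] == "qcow2":
        # -t none: bypass the host page cache, as in the logged command.
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             base + ".part", base + ".converted"])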
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.168 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.181 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 292fd084-0808-4a80-adc1-6ab1f28e188a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.8 MiB/s wr, 11 op/s
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.226 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.227 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.228 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.229 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.230 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.234 349552 INFO nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Terminating instance
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.237 349552 DEBUG nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.250 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Successfully updated port: 706f9405-4061-481e-a252-9b14f4534a4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.270 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.271 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.271 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.342 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 kernel: tapa240e2ef-17 (unregistering): left promiscuous mode
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.374 349552 DEBUG nova.compute.manager [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-changed-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.375 349552 DEBUG nova.compute.manager [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Refreshing instance network info cache due to event network-changed-706f9405-4061-481e-a252-9b14f4534a4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.375 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:20 compute-0 NetworkManager[49092]: <info>  [1764900680.3854] device (tapa240e2ef-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.396 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00128|binding|INFO|Releasing lport a240e2ef-1773-4509-ac04-eae1f5d36e08 from this chassis (sb_readonly=0)
Dec 05 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00129|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 down in Southbound
Dec 05 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00130|binding|INFO|Removing iface tapa240e2ef-17 ovn-installed in OVS
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.420 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.418 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.424 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.426 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9bc378d-2d4b-4990-99ce-02656b1fec0d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.429 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[87387183-1404-4998-b659-ae390afe87a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.430 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace which is not needed anymore
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 05 02:11:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Consumed 48.502s CPU time.
Dec 05 02:11:20 compute-0 systemd-machined[138700]: Machine qemu-10-instance-00000008 terminated.
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.482 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.492 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.506 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance destroyed successfully.
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.506 349552 DEBUG nova.objects.instance [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'resources' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.524 349552 DEBUG nova.virt.libvirt.vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.524 349552 DEBUG nova.network.os_vif_util [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.525 349552 DEBUG nova.network.os_vif_util [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.525 349552 DEBUG os_vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.527 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.527 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa240e2ef-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.532 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.536 349552 INFO os_vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')
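[annotation] The DelPortCommand transaction above has a one-line CLI equivalent; a sketch (ovs-vsctl's --if-exists mirrors the if_exists=True in the logged command, and the bridge and port names are taken from the log):

    import subprocess

    # Remove the instance's tap port from the integration bridge.
    subprocess.check_call(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapa240e2ef-17"])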
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : haproxy version is 2.8.14-c23fe91
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : path to executable is /usr/sbin/haproxy
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : Exiting Master process...
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : Exiting Master process...
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [ALERT]    (445445) : Current worker (445447) exited with code 143 (Terminated)
Dec 05 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : All workers exited. Exiting... (0)
Dec 05 02:11:20 compute-0 systemd[1]: libpod-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope: Deactivated successfully.
Dec 05 02:11:20 compute-0 podman[447704]: 2025-12-05 02:11:20.646802886 +0000 UTC m=+0.078837035 container died 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.652 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 292fd084-0808-4a80-adc1-6ab1f28e188a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da-userdata-shm.mount: Deactivated successfully.
Dec 05 02:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a4098034b79b17a1b0a33ca61c1f904969485d36ccd5269a78d56bbd845de7-merged.mount: Deactivated successfully.
Dec 05 02:11:20 compute-0 podman[447704]: 2025-12-05 02:11:20.723217372 +0000 UTC m=+0.155251491 container cleanup 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 02:11:20 compute-0 systemd[1]: libpod-conmon-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope: Deactivated successfully.
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.793 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] resizing rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
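[annotation] Above, rbd import pushes the converted raw base image into the vms pool as the instance disk, and the resize record grows it to the flavor's 1 GiB root disk. Nova performs the resize through the librbd Python binding; a CLI sketch of an equivalent sequence (the resize flags are my assumption, the import flags match the log):

    import subprocess

    base = "/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06"
    disk = "292fd084-0808-4a80-adc1-6ab1f28e188a_disk"
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, disk, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    # 1073741824 bytes == 1 GiB, the size logged by rbd_utils.resize.
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--size", "1G", disk,
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])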
Dec 05 02:11:20 compute-0 podman[447766]: 2025-12-05 02:11:20.824707063 +0000 UTC m=+0.068050542 container remove 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.838 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[538396d8-3fb6-4c16-9c81-e4e049d7c71f]: (4, ('Fri Dec  5 02:11:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da)\n2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da\nFri Dec  5 02:11:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da)\n2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.842 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d8989a44-bc83-4279-88cf-9e61df040bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.844 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.847 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 kernel: tapa9bc378d-20: left promiscuous mode
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.854 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.862 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.866 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[de89a6ba-86fb-4495-9959-b5e507debfb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.883 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0a117b6f-e069-49d1-b5c0-076be9a9f3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.884 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[acb97763-dde2-4b4e-973c-144c38026d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.905 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[63a6bf10-2168-422c-b88b-42569d227c56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670092, 'reachable_time': 27790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447810, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.907 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.907 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[733590c3-df23-4a7e-805b-573b29813295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:20 compute-0 systemd[1]: run-netns-ovnmeta\x2da9bc378d\x2d2d4b\x2d4990\x2d99ce\x2d02656b1fec0d.mount: Deactivated successfully.
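The three lines above close out the metadata namespace teardown: the privsep daemon answers the remove_netns call, neutron's ip_lib logs the deletion, and systemd deactivates the matching /run/netns mount unit. A minimal sketch of what that privileged call boils down to, assuming the pyroute2 library that neutron's ip_lib wraps (the privsep plumbing and error handling are omitted):

    # Minimal sketch: delete a network namespace the way neutron's
    # privileged remove_netns does, assuming pyroute2 is installed.
    from pyroute2 import netns

    NS_NAME = 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d'

    def remove_netns(name):
        # Requires root (or the privsep daemon's capabilities).
        # netns.remove() unlinks the bind mount under /var/run/netns;
        # the kernel frees the namespace once nothing references it,
        # which is what the systemd mount-unit line above reflects.
        if name in netns.listnetns():
            netns.remove(name)

    remove_netns(NS_NAME)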
Dec 05 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.986 349552 DEBUG nova.objects.instance [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'migration_context' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.014 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Ensure instance console log exists: /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.016 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
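The acquire/release trio above around _allocate_mdevs is the standard oslo.concurrency pattern; the "waited"/"held" timings come from the lockutils wrapper itself. A minimal sketch of the same pattern, using the lock name from the log and a placeholder body:

    # Minimal sketch of the lock pattern logged above, using
    # oslo.concurrency; the guarded function body is a placeholder.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Only one thread at a time runs this body; the acquire/release
        # events and waited/held timings are what lockutils logs at DEBUG.
        return []

    print(allocate_mdevs())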
Dec 05 02:11:21 compute-0 ceph-mon[192914]: pgmap v1870: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.8 MiB/s wr, 11 op/s
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.338 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.339 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.339 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.388 349552 INFO nova.virt.libvirt.driver [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deleting instance files /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f_del
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.389 349552 INFO nova.virt.libvirt.driver [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deletion of /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f_del complete
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.440 349552 INFO nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 1.20 seconds to destroy the instance on the hypervisor.
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.441 349552 DEBUG oslo.service.loopingcall [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.442 349552 DEBUG nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.442 349552 DEBUG nova.network.neutron [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
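The loopingcall line above shows the compute manager blocking on _deallocate_network_with_retries. A minimal sketch of that retry shape with oslo.service's RetryDecorator; the retry counts, sleeps, and exception list here are illustrative, not nova's actual values:

    # Minimal sketch of the retry wrapper implied by the loopingcall
    # "Waiting for function ..." message above; parameters are
    # illustrative only.
    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=2,
                                max_sleep_time=12,
                                exceptions=(ConnectionError,))
    def deallocate_network_with_retries():
        # Raising one of the listed exceptions triggers another attempt
        # after an increasing sleep; returning normally ends the loop.
        return 'deallocated'

    print(deallocate_network_with_retries())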
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.018 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.050 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.050 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance network_info: |[{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.051 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.051 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Refreshing network info cache for port 706f9405-4061-481e-a252-9b14f4534a4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.056 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start _get_guest_xml network_info=[{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.069 349552 WARNING nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.086 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.087 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.093 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.094 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
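The four host-probe lines above first fail to find a CPU controller under cgroups v1 and then find one under cgroups v2, which is expected on an el9 host with the unified hierarchy. A minimal way to reproduce the v2 check by hand, assuming the conventional unified-hierarchy mount point:

    # Minimal sketch: detect a cgroups-v2 'cpu' controller the way the
    # host probe above does, by reading the unified hierarchy's
    # controller list.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                # One whitespace-separated line of enabled controllers,
                # e.g. "cpuset cpu io memory pids ...".
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no cgroups-v2 unified hierarchy mounted

    print(has_cgroupsv2_cpu_controller())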
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.094 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.095 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.095 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.096 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.096 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.097 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.097 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.099 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.099 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
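The hardware lines above walk from "no flavor or image constraints" (all preferences 0:0:0, limits 65536 each) to the single possible topology for one vCPU. A small worked sketch of that enumeration, assuming the simple rule that sockets x cores x threads must equal the vCPU count within the limits (nova's real selection also sorts by preference, which is moot here):

    # Worked sketch of the topology walk logged above: enumerate every
    # (sockets, cores, threads) whose product is the vCPU count and
    # which stays within the per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    # For the 1-vCPU m1.nano flavor above this yields exactly one
    # topology, matching "Got 1 possible topologies".
    print(possible_topologies(1))   # [(1, 1, 1)]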
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.104 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.163 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 162 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 38 op/s
Dec 05 02:11:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:11:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120132925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.630 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
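The two processutils lines bracket a "ceph mon dump" subprocess that returned 0 in 0.526s, and the ceph-mon audit lines in between show the monitor dispatching that same command. A minimal sketch of the identical call through oslo.concurrency, assuming the ceph CLI and the config path from the log exist on the host:

    # Minimal sketch of the subprocess call logged above via
    # oslo.concurrency; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout)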
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.681 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.706 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.759 349552 DEBUG nova.network.neutron [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.786 349552 INFO nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 1.34 seconds to deallocate network for instance.
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.838 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.839 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.924 349552 DEBUG oslo_concurrency.processutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:11:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210066393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.140 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated VIF entry in instance network info cache for port 706f9405-4061-481e-a252-9b14f4534a4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.142 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.165 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.167 349552 DEBUG nova.virt.libvirt.vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:17Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.168 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.169 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.172 349552 DEBUG nova.objects.instance [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'pci_devices' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.175 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.189 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <uuid>292fd084-0808-4a80-adc1-6ab1f28e188a</uuid>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <name>instance-0000000b</name>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:name>te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa</nova:name>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:11:22</nova:creationTime>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:user uuid="99591ed8361e41579fee1d14f16bf0f7">tempest-PrometheusGabbiTest-257639068-project-member</nova:user>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:project uuid="b01709a3378347e1a3f25eeb2b8b1bca">tempest-PrometheusGabbiTest-257639068</nova:project>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <nova:port uuid="706f9405-4061-481e-a252-9b14f4534a4e">
Dec 05 02:11:23 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.151" ipVersion="4"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <system>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="serial">292fd084-0808-4a80-adc1-6ab1f28e188a</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="uuid">292fd084-0808-4a80-adc1-6ab1f28e188a</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </system>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <os>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </os>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <features>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </features>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/292fd084-0808-4a80-adc1-6ab1f28e188a_disk">
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </source>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config">
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </source>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:11:23 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:cf:10:bc"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <target dev="tap706f9405-40"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/console.log" append="off"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <video>
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </video>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:11:23 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:11:23 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:11:23 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:11:23 compute-0 nova_compute[349548]: </domain>
Dec 05 02:11:23 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
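The block above is the complete domain XML nova hands to libvirt for instance-0000000b. When reading dumps like this, it can help to pull fields back out programmatically; a short standard-library sketch that extracts the RBD disk sources (the inline string stands in for the full XML above):

    # Sketch: extract the rbd disk sources from a guest XML dump like
    # the one above, using only the standard library.
    import xml.etree.ElementTree as ET

    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/292fd084-0808-4a80-adc1-6ab1f28e188a_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_text)
    for disk in root.iter('disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            host = src.find('host')
            # Prints: vms/..._disk -> 192.168.122.100 6789
            print(src.get('name'), '->', host.get('name'), host.get('port'))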
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.190 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Preparing to wait for external event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.191 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.191 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.192 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
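The prepare_for_instance_event lines above are the other half of the pop_instance_event lines seen earlier for network-vif-unplugged: one side registers an event under the per-instance "-events" lock and waits, the other pops it and signals. A minimal analogy with threading primitives, offered as an illustration of the rendezvous rather than nova's actual InstanceEvents implementation:

    # Minimal analogy to the prepare/pop event rendezvous logged above;
    # nova keys real events by (instance uuid, event name) under a
    # per-instance lock.
    import threading

    events = {}
    events_lock = threading.Lock()

    def prepare(key):
        # Register interest and return something to wait on.
        with events_lock:
            return events.setdefault(key, threading.Event())

    def pop(key):
        # Deliver the event: wake whoever called prepare() for this key.
        with events_lock:
            ev = events.pop(key, None)
        if ev is not None:
            ev.set()
        return ev

    key = ('292fd084-0808-4a80-adc1-6ab1f28e188a', 'network-vif-plugged')
    waiter = prepare(key)
    threading.Timer(0.1, pop, [key]).start()
    waiter.wait(timeout=5)
    print('event received')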
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.194 349552 DEBUG nova.virt.libvirt.vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:17Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.194 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.195 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.196 349552 DEBUG os_vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.198 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.203 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.204 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap706f9405-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.205 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap706f9405-40, col_values=(('external_ids', {'iface-id': '706f9405-4061-481e-a252-9b14f4534a4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:10:bc', 'vm-uuid': '292fd084-0808-4a80-adc1-6ab1f28e188a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:23 compute-0 NetworkManager[49092]: <info>  [1764900683.2091] manager: (tap706f9405-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.207 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.210 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.220 349552 INFO os_vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40')
Dec 05 02:11:23 compute-0 ceph-mon[192914]: pgmap v1871: 321 pgs: 321 active+clean; 162 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 38 op/s
Dec 05 02:11:23 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4120132925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:23 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4210066393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.310 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.311 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.311 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No VIF found with MAC fa:16:3e:cf:10:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.312 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Using config drive
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.362 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:11:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116091337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.456 349552 DEBUG oslo_concurrency.processutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.465 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.465 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.467 349552 WARNING nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state deleted and task_state None.
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.467 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-deleted-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.473 349552 DEBUG nova.compute.provider_tree [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.490 349552 DEBUG nova.scheduler.client.report [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.516 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.540 349552 INFO nova.scheduler.client.report [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Deleted allocations for instance 59e35a32-9023-4e49-be56-9da10df3027f
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.590 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.724 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.778 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating config drive at /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.791 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrp1zrla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.937 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrp1zrla" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.984 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.993 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 148 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 53 op/s
Dec 05 02:11:24 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4116091337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.321 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.322 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deleting local config drive /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config because it was imported into RBD.
Dec 05 02:11:24 compute-0 kernel: tap706f9405-40: entered promiscuous mode
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4058] manager: (tap706f9405-40): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Dec 05 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00131|binding|INFO|Claiming lport 706f9405-4061-481e-a252-9b14f4534a4e for this chassis.
Dec 05 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00132|binding|INFO|706f9405-4061-481e-a252-9b14f4534a4e: Claiming fa:16:3e:cf:10:bc 10.100.0.151
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.408 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.421 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:10:bc 10.100.0.151'], port_security=['fa:16:3e:cf:10:bc 10.100.0.151'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.151/16', 'neutron:device_id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=706f9405-4061-481e-a252-9b14f4534a4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.423 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 706f9405-4061-481e-a252-9b14f4534a4e in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 bound to our chassis
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.424 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.441 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[45cde6d6-2875-4a24-83e1-0718adb4dda1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.442 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7842201-31 in ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.445 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7842201-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.445 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b15ed5b3-1d76-4a5e-8604-fdcb5f5d8246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.446 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cf51c271-9200-4ae4-ba07-1ac27eeca4ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00133|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e ovn-installed in OVS
Dec 05 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00134|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e up in Southbound
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.469 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[fb41cc9e-9689-44a2-a501-70ad03a7468d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 systemd-udevd[447990]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.472 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 systemd-machined[138700]: New machine qemu-12-instance-0000000b.
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4973] device (tap706f9405-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:11:24 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4981] device (tap706f9405-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.505 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[05294f40-f537-45d0-abc3-8fa4285d1264]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.548 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ecabc19f-9b9c-48da-94cf-714bcb6e5596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.555 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe2e687-7c54-447f-9826-2a1082fb4152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.5580] manager: (tapd7842201-30): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Dec 05 02:11:24 compute-0 systemd-udevd[447993]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.589 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[76af0377-8203-4c80-b389-118a50095911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.594 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d22b9c55-26bc-479d-813f-875dc9b7269e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.6239] device (tapd7842201-30): carrier: link connected
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.629 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[29bd6d18-2ae3-40da-80ca-70b32befeb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.658 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0444becc-5cc8-4002-b738-4567d60337b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 18430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448021, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.675 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[94e7cc0c-05e1-4f47-af1f-c66f93dd50ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:2670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677128, 'tstamp': 677128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448022, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.696 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17a2c770-b2e6-421f-8d19-94574feaafc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 18430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448023, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.736 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed58de0-a3ad-41ee-8483-f23339c0777c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.814 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c441a62-138b-4257-b7ba-bc17deccf97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.816 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.816 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.817 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.819 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 kernel: tapd7842201-30: entered promiscuous mode
Dec 05 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.8227] manager: (tapd7842201-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.823 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.825 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00135|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.860 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.862 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.865 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.867 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70d8ef-086d-449a-af50-08258ab900ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.869 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.870 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'env', 'PROCESS_TAG=haproxy-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7842201-32d0-4f34-ad6b-51f98e5f8322.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.288 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900685.2874427, 292fd084-0808-4a80-adc1-6ab1f28e188a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.288 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Started (Lifecycle Event)
Dec 05 02:11:25 compute-0 ceph-mon[192914]: pgmap v1872: 321 pgs: 321 active+clean; 148 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 53 op/s
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.322 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.331 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900685.2876163, 292fd084-0808-4a80-adc1-6ab1f28e188a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.332 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Paused (Lifecycle Event)
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.355 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.363 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.388 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.496230564 +0000 UTC m=+0.110235477 container create 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.439720166 +0000 UTC m=+0.053725129 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:11:25 compute-0 systemd[1]: Started libpod-conmon-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope.
Dec 05 02:11:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0054f478c906d197442626f618ca33515a9d994cb127e662a2ffd07bf0dae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.68018297 +0000 UTC m=+0.294187943 container init 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.697153986 +0000 UTC m=+0.311158869 container start 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:11:25 compute-0 podman[448109]: 2025-12-05 02:11:25.711843949 +0000 UTC m=+0.145773555 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:11:25 compute-0 podman[448108]: 2025-12-05 02:11:25.715350897 +0000 UTC m=+0.152494443 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 02:11:25 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : New worker (448155) forked
Dec 05 02:11:25 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : Loading success.
Dec 05 02:11:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 124 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:11:27 compute-0 nova_compute[349548]: 2025-12-05 02:11:27.166 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:27 compute-0 ceph-mon[192914]: pgmap v1873: 321 pgs: 321 active+clean; 124 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 05 02:11:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 05 02:11:28 compute-0 nova_compute[349548]: 2025-12-05 02:11:28.209 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:29 compute-0 ovn_controller[89286]: 2025-12-05T02:11:29Z|00136|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:11:29 compute-0 nova_compute[349548]: 2025-12-05 02:11:29.025 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:29 compute-0 ovn_controller[89286]: 2025-12-05T02:11:29Z|00137|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:11:29 compute-0 nova_compute[349548]: 2025-12-05 02:11:29.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:29 compute-0 ceph-mon[192914]: pgmap v1874: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 05 02:11:29 compute-0 podman[158197]: time="2025-12-05T02:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.102 349552 DEBUG nova.compute.manager [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.102 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG nova.compute.manager [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Processing event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.105 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.114 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900690.1128616, 292fd084-0808-4a80-adc1-6ab1f28e188a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.115 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Resumed (Lifecycle Event)
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.117 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.126 349552 INFO nova.virt.libvirt.driver [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance spawned successfully.
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.126 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.141 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.152 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.158 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.159 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.160 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.161 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.163 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.163 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.177 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:11:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.225 349552 INFO nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 12.23 seconds to spawn the instance on the hypervisor.
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.225 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.302 349552 INFO nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 13.63 seconds to build instance.
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.320 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.673 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.673 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.691 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:11:30 compute-0 podman[448165]: 2025-12-05 02:11:30.697819992 +0000 UTC m=+0.110092763 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 05 02:11:30 compute-0 podman[448166]: 2025-12-05 02:11:30.722233767 +0000 UTC m=+0.120359881 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.783 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.783 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.795 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.795 349552 INFO nova.compute.claims [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.948 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:31 compute-0 ceph-mon[192914]: pgmap v1875: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 05 02:11:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:11:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478513699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.447 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.467 349552 DEBUG nova.compute.provider_tree [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.491 349552 DEBUG nova.scheduler.client.report [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.520 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.521 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.584 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.584 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.608 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.635 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.739 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.742 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.743 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating image(s)
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.809 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.879 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.940 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.952 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.005 349552 DEBUG nova.policy [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6aaead05b2404fec8f687504ed800a2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.071 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.072 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.073 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.073 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.117 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.128 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.169 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.229 349552 DEBUG nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.229 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 WARNING nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received unexpected event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with vm_state active and task_state None.
Dec 05 02:11:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2478513699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.670 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:32 compute-0 podman[448317]: 2025-12-05 02:11:32.730060818 +0000 UTC m=+0.134536450 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=)
Dec 05 02:11:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:32.868 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:11:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:32.869 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.878 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] resizing rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.950 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.997 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Successfully created port: 1e754fc7-106a-43d2-a675-79c30089904b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.141 349552 DEBUG nova.objects.instance [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'migration_context' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.155 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.155 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Ensure instance console log exists: /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.156 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.157 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.158 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.211 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:33 compute-0 ceph-mon[192914]: pgmap v1876: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.653 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Successfully updated port: 1e754fc7-106a-43d2-a675-79c30089904b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.669 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.670 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.670 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.177 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:11:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 702 KiB/s wr, 53 op/s
Dec 05 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.442 349552 DEBUG nova.compute.manager [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.443 349552 DEBUG nova.compute.manager [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.443 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:35 compute-0 ceph-mon[192914]: pgmap v1877: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 702 KiB/s wr, 53 op/s
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.500 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900680.498941, 59e35a32-9023-4e49-be56-9da10df3027f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.501 349552 INFO nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Stopped (Lifecycle Event)
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.523 349552 DEBUG nova.compute.manager [None req-79a1cbc6-9a1c-47ed-9889-85557fa4e2de - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.529 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.549 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.549 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance network_info: |[{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.550 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.551 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.556 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start _get_guest_xml network_info=[{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.569 349552 WARNING nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.586 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.587 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.595 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.596 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.597 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.598 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.599 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.600 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.600 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.601 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.602 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.602 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.603 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.604 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.605 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.605 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.611 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 136 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 405 KiB/s rd, 469 KiB/s wr, 51 op/s
Dec 05 02:11:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:11:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327023170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1327023170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.417 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.458 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.481 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:11:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047341207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.989 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.993 349552 DEBUG nova.virt.libvirt.vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.994 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.996 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.999 349552 DEBUG nova.objects.instance [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.022 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <uuid>1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</uuid>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <name>instance-0000000c</name>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:name>tempest-TestNetworkBasicOps-server-593464214</nova:name>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:11:35</nova:creationTime>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:user uuid="2e61f46e24a240608d1523fb5265d3ac">tempest-TestNetworkBasicOps-576606253-project-member</nova:user>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:project uuid="6aaead05b2404fec8f687504ed800a2b">tempest-TestNetworkBasicOps-576606253</nova:project>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <nova:port uuid="1e754fc7-106a-43d2-a675-79c30089904b">
Dec 05 02:11:37 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <system>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="serial">1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="uuid">1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </system>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <os>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </os>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <features>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </features>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk">
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </source>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config">
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </source>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:11:37 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:ab:49:42"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <target dev="tap1e754fc7-10"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/console.log" append="off"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <video>
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </video>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:11:37 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:11:37 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:11:37 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:11:37 compute-0 nova_compute[349548]: </domain>
Dec 05 02:11:37 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.042 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Preparing to wait for external event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.043 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.043 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.044 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.045 349552 DEBUG nova.virt.libvirt.vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.046 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.047 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.048 349552 DEBUG os_vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.049 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.050 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.051 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.056 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.057 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e754fc7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.058 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e754fc7-10, col_values=(('external_ids', {'iface-id': '1e754fc7-106a-43d2-a675-79c30089904b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:49:42', 'vm-uuid': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.060 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.062 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:11:37 compute-0 NetworkManager[49092]: <info>  [1764900697.0625] manager: (tap1e754fc7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.076 349552 INFO os_vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10')
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.164 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.166 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.166 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No VIF found with MAC fa:16:3e:ab:49:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.167 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Using config drive
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.220 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:37 compute-0 ceph-mon[192914]: pgmap v1878: 321 pgs: 321 active+clean; 136 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 405 KiB/s rd, 469 KiB/s wr, 51 op/s
Dec 05 02:11:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1047341207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.927 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.928 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.960 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.048 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating config drive at /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.060 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv0km2fq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.097 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.098 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:11:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.214 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv0km2fq" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.274 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.285 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.589 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.590 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deleting local config drive /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config because it was imported into RBD.
Dec 05 02:11:38 compute-0 kernel: tap1e754fc7-10: entered promiscuous mode
Dec 05 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.6787] manager: (tap1e754fc7-10): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Dec 05 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00138|binding|INFO|Claiming lport 1e754fc7-106a-43d2-a675-79c30089904b for this chassis.
Dec 05 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00139|binding|INFO|1e754fc7-106a-43d2-a675-79c30089904b: Claiming fa:16:3e:ab:49:42 10.100.0.11
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.681 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.695 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:49:42 10.100.0.11'], port_security=['fa:16:3e:ab:49:42 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6637e5fa-33c5-4d8a-98b9-4b42baed7ff5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1e754fc7-106a-43d2-a675-79c30089904b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.698 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1e754fc7-106a-43d2-a675-79c30089904b in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f bound to our chassis
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.701 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.719 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb45931-aa84-4655-853b-649248d45649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.720 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap580f50f3-c1 in ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.726 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap580f50f3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.727 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[da5bfd09-0799-4144-afdb-423f6dea9298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.728 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[88d8a3a6-d87a-41b3-a0f0-585a6e3a034a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 systemd-machined[138700]: New machine qemu-13-instance-0000000c.
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.748 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[2d436138-9abc-4c83-862b-b277da3dc9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec 05 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00140|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b ovn-installed in OVS
Dec 05 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00141|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b up in Southbound
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.778 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[02fac094-c095-4bd6-ba12-18a60bc24781]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.783 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:38 compute-0 systemd-udevd[448548]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8223] device (tap1e754fc7-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8283] device (tap1e754fc7-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.828 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a812f1dd-2923-41d7-9728-6fc6258570c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.837 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1c81b45e-13cc-4a42-b4fe-58721b8ef84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 systemd-udevd[448551]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8400] manager: (tap580f50f3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.878 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[444708fa-ea63-4b8d-8cac-0b20b4a98253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.884 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a73a8d47-0a0c-4ca4-8da0-759cc1acca74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.9195] device (tap580f50f3-c0): carrier: link connected
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.931 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[72cba477-7ca0-4fe8-bc65-3d327b013f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.964 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[424da552-26ec-44f9-8981-5680b791a26f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448577, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.991 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[00490c83-dcdf-490a-9d8f-7a2df154ef66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:c292'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678558, 'tstamp': 678558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448578, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[75156faa-16ca-41ae-95f7-70ceffded114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448579, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.097 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2cc28-aa7a-42a3-9ef9-b54d58a01223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.229 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfc2303-16df-4851-bc31-d0357e0fe098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.230 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.231 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.232 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:39 compute-0 NetworkManager[49092]: <info>  [1764900699.2364] manager: (tap580f50f3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec 05 02:11:39 compute-0 kernel: tap580f50f3-c0: entered promiscuous mode
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.241 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.243 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:39 compute-0 ovn_controller[89286]: 2025-12-05T02:11:39Z|00142|binding|INFO|Releasing lport 29ff39a2-9491-44bb-a004-0de689e8aadc from this chassis (sb_readonly=0)
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.279 349552 DEBUG nova.compute.manager [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.279 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.281 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.281 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ed38c737-c474-4298-991b-ba9d7277bd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.282 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.283 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'env', 'PROCESS_TAG=haproxy-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.287 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.288 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.288 349552 DEBUG nova.compute.manager [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Processing event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.289 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:39 compute-0 ceph-mon[192914]: pgmap v1879: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.646 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.645317, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.648 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Started (Lifecycle Event)
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.650 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.656 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.665 349552 INFO nova.virt.libvirt.driver [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance spawned successfully.
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.665 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.746 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.754 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.754 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.755 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.755 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.756 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.757 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.760 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.791 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.792 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.6455514, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.792 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Paused (Lifecycle Event)
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.818 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.828 349552 INFO nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 8.09 seconds to spawn the instance on the hypervisor.
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.828 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.829 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.654527, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.829 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Resumed (Lifecycle Event)
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.864 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:11:39 compute-0 podman[448650]: 2025-12-05 02:11:39.8668692 +0000 UTC m=+0.105998398 container create df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.871 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.897 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:11:39 compute-0 podman[448650]: 2025-12-05 02:11:39.810819776 +0000 UTC m=+0.049949054 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.907 349552 INFO nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 9.16 seconds to build instance.
Dec 05 02:11:39 compute-0 systemd[1]: Started libpod-conmon-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope.
Dec 05 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.924 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:39 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1f4a9f0791655bc892fe852878dc488b3de35fa469b0c521274d81205f10f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:11:39 compute-0 podman[448650]: 2025-12-05 02:11:39.999595178 +0000 UTC m=+0.238724406 container init df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 02:11:40 compute-0 podman[448650]: 2025-12-05 02:11:40.016138862 +0000 UTC m=+0.255268090 container start df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 05 02:11:40 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : New worker (448670) forked
Dec 05 02:11:40 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : Loading success.
Dec 05 02:11:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 05 02:11:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:40.871 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.903 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.922 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.922 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.924 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.924 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.925 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.925 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.926 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.926 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.962 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.963 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.399 349552 DEBUG nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.400 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.402 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.402 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.403 349552 DEBUG nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.403 349552 WARNING nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received unexpected event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with vm_state active and task_state None.
Dec 05 02:11:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:11:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421907135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:41 compute-0 ceph-mon[192914]: pgmap v1880: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 05 02:11:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/421907135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.491 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.608 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.610 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.061 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.183 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.185 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3737MB free_disk=59.94660568237305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.187 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.188 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.297 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.298 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.299 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.299 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.356 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:11:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:11:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4030712250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.909 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.919 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.935 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.952 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.952 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:43 compute-0 ceph-mon[192914]: pgmap v1881: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec 05 02:11:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4030712250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:11:43 compute-0 podman[448724]: 2025-12-05 02:11:43.700336795 +0000 UTC m=+0.095816872 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 02:11:43 compute-0 podman[448727]: 2025-12-05 02:11:43.702763283 +0000 UTC m=+0.108120767 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:11:43 compute-0 podman[448725]: 2025-12-05 02:11:43.718553467 +0000 UTC m=+0.123701325 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:11:43 compute-0 podman[448726]: 2025-12-05 02:11:43.737573721 +0000 UTC m=+0.129564740 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:11:43 compute-0 nova_compute[349548]: 2025-12-05 02:11:43.928 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:43 compute-0 NetworkManager[49092]: <info>  [1764900703.9381] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec 05 02:11:43 compute-0 NetworkManager[49092]: <info>  [1764900703.9393] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.124 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:44 compute-0 ovn_controller[89286]: 2025-12-05T02:11:44Z|00143|binding|INFO|Releasing lport 29ff39a2-9491-44bb-a004-0de689e8aadc from this chassis (sb_readonly=0)
Dec 05 02:11:44 compute-0 ovn_controller[89286]: 2025-12-05T02:11:44Z|00144|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.163 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.528 349552 DEBUG nova.compute.manager [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.529 349552 DEBUG nova.compute.manager [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.537 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.538 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.538 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:11:45 compute-0 nova_compute[349548]: 2025-12-05 02:11:45.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:45 compute-0 nova_compute[349548]: 2025-12-05 02:11:45.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:11:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:11:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:11:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:11:45 compute-0 ceph-mon[192914]: pgmap v1882: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec 05 02:11:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:11:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.654 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.654 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.699 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:11:47 compute-0 nova_compute[349548]: 2025-12-05 02:11:47.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:47 compute-0 nova_compute[349548]: 2025-12-05 02:11:47.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:47 compute-0 ceph-mon[192914]: pgmap v1883: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 05 02:11:48 compute-0 nova_compute[349548]: 2025-12-05 02:11:48.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:11:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.5 MiB/s wr, 138 op/s
Dec 05 02:11:49 compute-0 ceph-mon[192914]: pgmap v1884: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.5 MiB/s wr, 138 op/s
Dec 05 02:11:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 02:11:51 compute-0 ceph-mon[192914]: pgmap v1885: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 02:11:52 compute-0 nova_compute[349548]: 2025-12-05 02:11:52.072 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:52 compute-0 nova_compute[349548]: 2025-12-05 02:11:52.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 02:11:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:53 compute-0 ceph-mon[192914]: pgmap v1886: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 05 02:11:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec 05 02:11:55 compute-0 ceph-mon[192914]: pgmap v1887: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec 05 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.208 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.209 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:11:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 05 02:11:56 compute-0 ceph-mon[192914]: pgmap v1888: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec 05 02:11:56 compute-0 podman[448807]: 2025-12-05 02:11:56.734348104 +0000 UTC m=+0.122889752 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:11:56 compute-0 podman[448806]: 2025-12-05 02:11:56.756503377 +0000 UTC m=+0.153162103 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 05 02:11:57 compute-0 nova_compute[349548]: 2025-12-05 02:11:57.075 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:57 compute-0 nova_compute[349548]: 2025-12-05 02:11:57.183 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:11:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:11:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 962 KiB/s rd, 30 op/s
Dec 05 02:11:59 compute-0 ceph-mon[192914]: pgmap v1889: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 962 KiB/s rd, 30 op/s
Dec 05 02:11:59 compute-0 podman[158197]: time="2025-12-05T02:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Dec 05 02:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9127 "" "Go-http-client/1.1"
Dec 05 02:12:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:01 compute-0 ceph-mon[192914]: pgmap v1890: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:12:01 compute-0 podman[448844]: 2025-12-05 02:12:01.698685759 +0000 UTC m=+0.112549042 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:12:01 compute-0 podman[448845]: 2025-12-05 02:12:01.736057158 +0000 UTC m=+0.140359713 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 05 02:12:02 compute-0 nova_compute[349548]: 2025-12-05 02:12:02.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:02 compute-0 nova_compute[349548]: 2025-12-05 02:12:02.187 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:03 compute-0 ceph-mon[192914]: pgmap v1891: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:03 compute-0 nova_compute[349548]: 2025-12-05 02:12:03.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:03 compute-0 podman[448880]: 2025-12-05 02:12:03.707479267 +0000 UTC m=+0.111871793 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, container_name=kepler, managed_by=edpm_ansible, config_id=edpm)
Dec 05 02:12:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:05 compute-0 ceph-mon[192914]: pgmap v1892: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:12:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 KiB/s wr, 3 op/s
Dec 05 02:12:06 compute-0 ovn_controller[89286]: 2025-12-05T02:12:06Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:10:bc 10.100.0.151
Dec 05 02:12:06 compute-0 ovn_controller[89286]: 2025-12-05T02:12:06Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:10:bc 10.100.0.151
Dec 05 02:12:07 compute-0 nova_compute[349548]: 2025-12-05 02:12:07.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:07 compute-0 nova_compute[349548]: 2025-12-05 02:12:07.191 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:07 compute-0 ceph-mon[192914]: pgmap v1893: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 KiB/s wr, 3 op/s
Dec 05 02:12:08 compute-0 sudo[448899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:08 compute-0 sudo[448899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:08 compute-0 sudo[448899]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:08 compute-0 sudo[448924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:12:08 compute-0 sudo[448924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:08 compute-0 sudo[448924]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec 05 02:12:08 compute-0 sudo[448949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:08 compute-0 sudo[448949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:08 compute-0 sudo[448949]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:08 compute-0 sudo[448974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:12:08 compute-0 sudo[448974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:09 compute-0 sudo[448974]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d3beff60-2d6b-47bb-8586-5b04f26833f6 does not exist
Dec 05 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 57461ed9-d14e-47dd-837f-dd2e64aed7e8 does not exist
Dec 05 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cbb5181d-add0-4460-9326-806497e89c59 does not exist
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: pgmap v1894: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:12:09 compute-0 sudo[449028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:09 compute-0 sudo[449028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:09 compute-0 sudo[449028]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:09 compute-0 sudo[449053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:12:09 compute-0 sudo[449053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:09 compute-0 sudo[449053]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:09 compute-0 sudo[449078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:09 compute-0 sudo[449078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:09 compute-0 sudo[449078]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:09 compute-0 sudo[449103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:12:09 compute-0 sudo[449103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:10 compute-0 nova_compute[349548]: 2025-12-05 02:12:10.064 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.176073032 +0000 UTC m=+0.053598656 container create 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:12:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec 05 02:12:10 compute-0 systemd[1]: Started libpod-conmon-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope.
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.151524442 +0000 UTC m=+0.029050096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.29600643 +0000 UTC m=+0.173532074 container init 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.312496024 +0000 UTC m=+0.190021638 container start 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.317168525 +0000 UTC m=+0.194694239 container attach 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:12:10 compute-0 stoic_chandrasekhar[449183]: 167 167
Dec 05 02:12:10 compute-0 systemd[1]: libpod-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope: Deactivated successfully.
Dec 05 02:12:10 compute-0 conmon[449183]: conmon 6496a8d34e1ce7241365 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope/container/memory.events
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.323801201 +0000 UTC m=+0.201326855 container died 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa7a9d975cdde2c61ef571e5e657356c134f96ef94af76f1d193eaad0abe9905-merged.mount: Deactivated successfully.
Dec 05 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.402707117 +0000 UTC m=+0.280232751 container remove 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:12:10 compute-0 systemd[1]: libpod-conmon-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope: Deactivated successfully.
Dec 05 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.686139518 +0000 UTC m=+0.106552184 container create 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.634445716 +0000 UTC m=+0.054858442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:10 compute-0 systemd[1]: Started libpod-conmon-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope.
Dec 05 02:12:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.915786147 +0000 UTC m=+0.336198843 container init 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.927560968 +0000 UTC m=+0.347973614 container start 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.931566801 +0000 UTC m=+0.351979447 container attach 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:12:11 compute-0 ceph-mon[192914]: pgmap v1895: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec 05 02:12:12 compute-0 sad_bhabha[449222]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:12:12 compute-0 sad_bhabha[449222]: --> relative data size: 1.0
Dec 05 02:12:12 compute-0 sad_bhabha[449222]: --> All data devices are unavailable
Dec 05 02:12:12 compute-0 systemd[1]: libpod-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Deactivated successfully.
Dec 05 02:12:12 compute-0 systemd[1]: libpod-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Consumed 1.015s CPU time.
Dec 05 02:12:12 compute-0 podman[449206]: 2025-12-05 02:12:12.069276173 +0000 UTC m=+1.489688879 container died 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:12:12 compute-0 nova_compute[349548]: 2025-12-05 02:12:12.087 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6-merged.mount: Deactivated successfully.
Dec 05 02:12:12 compute-0 podman[449206]: 2025-12-05 02:12:12.160724842 +0000 UTC m=+1.581137498 container remove 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:12:12 compute-0 systemd[1]: libpod-conmon-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Deactivated successfully.
Dec 05 02:12:12 compute-0 nova_compute[349548]: 2025-12-05 02:12:12.194 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:12 compute-0 sudo[449103]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 05 02:12:12 compute-0 sudo[449262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:12 compute-0 sudo[449262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:12 compute-0 sudo[449262]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.370381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732370469, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1434, "num_deletes": 251, "total_data_size": 2243493, "memory_usage": 2277712, "flush_reason": "Manual Compaction"}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732387997, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2200042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37697, "largest_seqno": 39130, "table_properties": {"data_size": 2193304, "index_size": 3873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14178, "raw_average_key_size": 19, "raw_value_size": 2179789, "raw_average_value_size": 3074, "num_data_blocks": 173, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900589, "oldest_key_time": 1764900589, "file_creation_time": 1764900732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17675 microseconds, and 8319 cpu microseconds.
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.388069) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2200042 bytes OK
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.388096) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390546) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390566) EVENT_LOG_v1 {"time_micros": 1764900732390559, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2237152, prev total WAL file size 2237152, number of live WAL files 2.
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.392019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2148KB)], [86(9393KB)]
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732392117, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11818832, "oldest_snapshot_seqno": -1}
Dec 05 02:12:12 compute-0 sudo[449287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:12:12 compute-0 sudo[449287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:12 compute-0 sudo[449287]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5850 keys, 10116314 bytes, temperature: kUnknown
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732463545, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10116314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10075674, "index_size": 24914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148414, "raw_average_key_size": 25, "raw_value_size": 9968515, "raw_average_value_size": 1704, "num_data_blocks": 1020, "num_entries": 5850, "num_filter_entries": 5850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.463851) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10116314 bytes
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.466282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.3 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 6368, records dropped: 518 output_compression: NoCompression
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.466311) EVENT_LOG_v1 {"time_micros": 1764900732466297, "job": 50, "event": "compaction_finished", "compaction_time_micros": 71509, "compaction_time_cpu_micros": 40332, "output_level": 6, "num_output_files": 1, "total_output_size": 10116314, "num_input_records": 6368, "num_output_records": 5850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732467189, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732471071, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.391717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:12 compute-0 sudo[449312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:12 compute-0 sudo[449312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:12 compute-0 sudo[449312]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:12 compute-0 sudo[449337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:12:12 compute-0 sudo[449337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.183497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733183559, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20208, "flush_reason": "Manual Compaction"}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733186643, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 13846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39131, "largest_seqno": 39385, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900733, "oldest_key_time": 1764900733, "file_creation_time": 1764900733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3178 microseconds, and 900 cpu microseconds.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.186684) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 13846 bytes OK
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.186695) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188503) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188513) EVENT_LOG_v1 {"time_micros": 1764900733188510, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.189164) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(13KB)], [89(9879KB)]
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733189224, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10130160, "oldest_snapshot_seqno": -1}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5601 keys, 6842209 bytes, temperature: kUnknown
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733257762, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 6842209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6808160, "index_size": 18963, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 143432, "raw_average_key_size": 25, "raw_value_size": 6710216, "raw_average_value_size": 1198, "num_data_blocks": 771, "num_entries": 5601, "num_filter_entries": 5601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.258198) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 6842209 bytes
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.260704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.6 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(1225.8) write-amplify(494.2) OK, records in: 6105, records dropped: 504 output_compression: NoCompression
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.260725) EVENT_LOG_v1 {"time_micros": 1764900733260715, "job": 52, "event": "compaction_finished", "compaction_time_micros": 68825, "compaction_time_cpu_micros": 37880, "output_level": 6, "num_output_files": 1, "total_output_size": 6842209, "num_input_records": 6105, "num_output_records": 5601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733260853, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733263248, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.28399781 +0000 UTC m=+0.093384064 container create 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.244312845 +0000 UTC m=+0.053699179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:13 compute-0 systemd[1]: Started libpod-conmon-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope.
Dec 05 02:12:13 compute-0 ceph-mon[192914]: pgmap v1896: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 05 02:12:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.428511059 +0000 UTC m=+0.237897383 container init 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.444657122 +0000 UTC m=+0.254043406 container start 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 02:12:13 compute-0 affectionate_blackburn[449415]: 167 167
Dec 05 02:12:13 compute-0 systemd[1]: libpod-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope: Deactivated successfully.
Dec 05 02:12:13 compute-0 conmon[449415]: conmon 514aca16c9ef90148289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope/container/memory.events
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.452735849 +0000 UTC m=+0.262122103 container attach 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.457309737 +0000 UTC m=+0.266695991 container died 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e47439577f7ee75a0cb74c942084d08e687a19779e999c1c8fa07facc1af10-merged.mount: Deactivated successfully.
Dec 05 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.511327545 +0000 UTC m=+0.320713789 container remove 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:12:13 compute-0 systemd[1]: libpod-conmon-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope: Deactivated successfully.
Dec 05 02:12:13 compute-0 ovn_controller[89286]: 2025-12-05T02:12:13Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:49:42 10.100.0.11
Dec 05 02:12:13 compute-0 ovn_controller[89286]: 2025-12-05T02:12:13Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:49:42 10.100.0.11
Dec 05 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.7589915 +0000 UTC m=+0.056633911 container create 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.738725071 +0000 UTC m=+0.036367502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:13 compute-0 systemd[1]: Started libpod-conmon-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope.
Dec 05 02:12:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.920196818 +0000 UTC m=+0.217839259 container init 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.929855949 +0000 UTC m=+0.227498350 container start 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.934045067 +0000 UTC m=+0.231687478 container attach 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 02:12:13 compute-0 podman[449457]: 2025-12-05 02:12:13.936491726 +0000 UTC m=+0.121701679 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=)
Dec 05 02:12:13 compute-0 podman[449455]: 2025-12-05 02:12:13.957349212 +0000 UTC m=+0.149876051 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:12:13 compute-0 podman[449456]: 2025-12-05 02:12:13.978379752 +0000 UTC m=+0.164709787 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 05 02:12:13 compute-0 podman[449452]: 2025-12-05 02:12:13.98896861 +0000 UTC m=+0.184293157 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 02:12:14 compute-0 nova_compute[349548]: 2025-12-05 02:12:14.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]: {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     "0": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "devices": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "/dev/loop3"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             ],
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_name": "ceph_lv0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_size": "21470642176",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "name": "ceph_lv0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "tags": {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_name": "ceph",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.crush_device_class": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.encrypted": "0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_id": "0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.vdo": "0"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             },
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "vg_name": "ceph_vg0"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         }
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     ],
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     "1": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "devices": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "/dev/loop4"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             ],
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_name": "ceph_lv1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_size": "21470642176",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "name": "ceph_lv1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "tags": {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_name": "ceph",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.crush_device_class": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.encrypted": "0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_id": "1",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.vdo": "0"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             },
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "vg_name": "ceph_vg1"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         }
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     ],
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     "2": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "devices": [
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "/dev/loop5"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             ],
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_name": "ceph_lv2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_size": "21470642176",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "name": "ceph_lv2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "tags": {
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.cluster_name": "ceph",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.crush_device_class": "",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.encrypted": "0",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osd_id": "2",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:                 "ceph.vdo": "0"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             },
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "type": "block",
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:             "vg_name": "ceph_vg2"
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:         }
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]:     ]
Dec 05 02:12:14 compute-0 xenodochial_goldberg[449486]: }
Dec 05 02:12:14 compute-0 systemd[1]: libpod-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope: Deactivated successfully.
Dec 05 02:12:14 compute-0 podman[449438]: 2025-12-05 02:12:14.753105681 +0000 UTC m=+1.050748102 container died 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0-merged.mount: Deactivated successfully.
Dec 05 02:12:14 compute-0 podman[449438]: 2025-12-05 02:12:14.837177122 +0000 UTC m=+1.134819533 container remove 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 02:12:14 compute-0 systemd[1]: libpod-conmon-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope: Deactivated successfully.
Dec 05 02:12:14 compute-0 sudo[449337]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:14 compute-0 sudo[449559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:14 compute-0 sudo[449559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:14 compute-0 sudo[449559]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:15 compute-0 sudo[449584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:12:15 compute-0 sudo[449584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:15 compute-0 sudo[449584]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:15 compute-0 sudo[449609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:15 compute-0 sudo[449609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:15 compute-0 sudo[449609]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:15 compute-0 sudo[449634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:12:15 compute-0 sudo[449634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:15 compute-0 ceph-mon[192914]: pgmap v1897: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 05 02:12:15 compute-0 podman[449697]: 2025-12-05 02:12:15.903700176 +0000 UTC m=+0.056642332 container create fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:12:15 compute-0 systemd[1]: Started libpod-conmon-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope.
Dec 05 02:12:15 compute-0 podman[449697]: 2025-12-05 02:12:15.884779024 +0000 UTC m=+0.037721200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.035824887 +0000 UTC m=+0.188767123 container init fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.054436179 +0000 UTC m=+0.207378375 container start fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.061762845 +0000 UTC m=+0.214705101 container attach fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 05 02:12:16 compute-0 vibrant_gould[449713]: 167 167
Dec 05 02:12:16 compute-0 systemd[1]: libpod-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope: Deactivated successfully.
Dec 05 02:12:16 compute-0 podman[449718]: 2025-12-05 02:12:16.137239585 +0000 UTC m=+0.054275316 container died fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab3e7026b96fd1b792662e358428bc82f1009973040f4813dd6f9c5693342c2-merged.mount: Deactivated successfully.
Dec 05 02:12:16 compute-0 podman[449718]: 2025-12-05 02:12:16.198271469 +0000 UTC m=+0.115307160 container remove fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:12:16 compute-0 systemd[1]: libpod-conmon-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope: Deactivated successfully.
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 232 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 585 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:12:16
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta', 'images', '.rgw.root']
Dec 05 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.465037821 +0000 UTC m=+0.080421789 container create 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.436217502 +0000 UTC m=+0.051601560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:12:16 compute-0 systemd[1]: Started libpod-conmon-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope.
Dec 05 02:12:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.642839445 +0000 UTC m=+0.258223493 container init 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.663875986 +0000 UTC m=+0.279259964 container start 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.669429512 +0000 UTC m=+0.284813530 container attach 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec 05 02:12:17 compute-0 nova_compute[349548]: 2025-12-05 02:12:17.096 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:17 compute-0 nova_compute[349548]: 2025-12-05 02:12:17.196 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:17 compute-0 ceph-mon[192914]: pgmap v1898: 321 pgs: 321 active+clean; 232 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 585 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Dec 05 02:12:17 compute-0 epic_bohr[449755]: {
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_id": 0,
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "type": "bluestore"
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     },
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_id": 1,
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "type": "bluestore"
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     },
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_id": 2,
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:12:17 compute-0 epic_bohr[449755]:         "type": "bluestore"
Dec 05 02:12:17 compute-0 epic_bohr[449755]:     }
Dec 05 02:12:17 compute-0 epic_bohr[449755]: }
Dec 05 02:12:17 compute-0 systemd[1]: libpod-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Deactivated successfully.
Dec 05 02:12:17 compute-0 podman[449739]: 2025-12-05 02:12:17.87245175 +0000 UTC m=+1.487835758 container died 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:12:17 compute-0 systemd[1]: libpod-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Consumed 1.192s CPU time.
Dec 05 02:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f-merged.mount: Deactivated successfully.
Dec 05 02:12:17 compute-0 podman[449739]: 2025-12-05 02:12:17.985716021 +0000 UTC m=+1.601100009 container remove 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:12:18 compute-0 systemd[1]: libpod-conmon-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Deactivated successfully.
Dec 05 02:12:18 compute-0 sudo[449634]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:12:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:12:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ecf263fb-c773-4c85-982f-1bc8e2bba288 does not exist
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0c3db574-89d6-49ac-9e15-efeb59a73aaa does not exist
Dec 05 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.195600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738195703, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 250, "total_data_size": 115444, "memory_usage": 121848, "flush_reason": "Manual Compaction"}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738201357, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 115837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39386, "largest_seqno": 39691, "table_properties": {"data_size": 113807, "index_size": 258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4101, "raw_average_key_size": 14, "raw_value_size": 109843, "raw_average_value_size": 392, "num_data_blocks": 11, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900733, "oldest_key_time": 1764900733, "file_creation_time": 1764900738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 5861 microseconds, and 1984 cpu microseconds.
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.201469) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 115837 bytes OK
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.201492) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204519) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204551) EVENT_LOG_v1 {"time_micros": 1764900738204542, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 113244, prev total WAL file size 113244, number of live WAL files 2.
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.206681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(113KB)], [92(6681KB)]
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738206779, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 6958046, "oldest_snapshot_seqno": -1}
Dec 05 02:12:18 compute-0 sudo[449801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:12:18 compute-0 sudo[449801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:18 compute-0 sudo[449801]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5370 keys, 6238176 bytes, temperature: kUnknown
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738257965, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6238176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6205798, "index_size": 17868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 140412, "raw_average_key_size": 26, "raw_value_size": 6111869, "raw_average_value_size": 1138, "num_data_blocks": 706, "num_entries": 5370, "num_filter_entries": 5370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.258175) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6238176 bytes
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.260126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.8 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(113.9) write-amplify(53.9) OK, records in: 5881, records dropped: 511 output_compression: NoCompression
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.260146) EVENT_LOG_v1 {"time_micros": 1764900738260137, "job": 54, "event": "compaction_finished", "compaction_time_micros": 51242, "compaction_time_cpu_micros": 31897, "output_level": 6, "num_output_files": 1, "total_output_size": 6238176, "num_input_records": 5881, "num_output_records": 5370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738260292, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738261566, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.206361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:12:18 compute-0 sudo[449826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:12:18 compute-0 sudo[449826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:12:18 compute-0 sudo[449826]: pam_unix(sudo:session): session closed for user root
Dec 05 02:12:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:12:19 compute-0 ceph-mon[192914]: pgmap v1899: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Dec 05 02:12:19 compute-0 nova_compute[349548]: 2025-12-05 02:12:19.858 349552 INFO nova.compute.manager [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Get console output
Dec 05 02:12:19 compute-0 nova_compute[349548]: 2025-12-05 02:12:19.879 349552 INFO oslo.privsep.daemon [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpzw0yexz7/privsep.sock']
Dec 05 02:12:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.3 MiB/s wr, 79 op/s
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.673 349552 INFO oslo.privsep.daemon [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Spawned new privsep daemon via rootwrap
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.532 449857 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.541 449857 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.545 449857 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.546 449857 INFO oslo.privsep.daemon [-] privsep daemon running as pid 449857
Dec 05 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.789 449857 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 05 02:12:21 compute-0 ceph-mon[192914]: pgmap v1900: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.3 MiB/s wr, 79 op/s
Dec 05 02:12:21 compute-0 nova_compute[349548]: 2025-12-05 02:12:21.975 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:22 compute-0 nova_compute[349548]: 2025-12-05 02:12:22.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:22 compute-0 nova_compute[349548]: 2025-12-05 02:12:22.198 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Dec 05 02:12:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.189 349552 DEBUG nova.compute.manager [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.190 349552 DEBUG nova.compute.manager [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.191 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.192 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.193 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:12:23 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:23.242 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.243 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:23 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:23.244 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:12:23 compute-0 ceph-mon[192914]: pgmap v1901: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Dec 05 02:12:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 814 KiB/s wr, 42 op/s
Dec 05 02:12:25 compute-0 ceph-mon[192914]: pgmap v1902: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 814 KiB/s wr, 42 op/s
Dec 05 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.005 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.007 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.033 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 818 KiB/s wr, 42 op/s
Dec 05 02:12:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:26.247 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Cumulative writes: 8693 writes, 39K keys, 8693 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                            Cumulative WAL: 8693 writes, 8693 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1389 writes, 7001 keys, 1389 commit groups, 1.0 writes per commit group, ingest: 8.84 MB, 0.01 MB/s
                                            Interval WAL: 1389 writes, 1389 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.0      0.47              0.21        27    0.017       0      0       0.0       0.0
                                              L6      1/0    5.95 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0    119.9     98.2      1.96              0.84        26    0.075    134K    14K       0.0       0.0
                                             Sum      1/0    5.95 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     96.7     99.2      2.43              1.05        53    0.046    134K    14K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    110.0    106.4      0.68              0.35        16    0.042     48K   4102       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    119.9     98.2      1.96              0.84        26    0.075    134K    14K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    104.0      0.46              0.21        26    0.018       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 3600.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.047, interval 0.009
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.23 GB read, 0.07 MB/s read, 2.4 seconds
                                            Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 27.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000186 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(1763,26.45 MB,8.69984%) FilterBlock(54,381.86 KB,0.122668%) IndexBlock(54,641.39 KB,0.206039%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:12:27 compute-0 nova_compute[349548]: 2025-12-05 02:12:27.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015168927443511102 of space, bias 1.0, pg target 0.4550678233053331 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:12:27 compute-0 nova_compute[349548]: 2025-12-05 02:12:27.203 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:27 compute-0 ceph-mon[192914]: pgmap v1903: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 818 KiB/s wr, 42 op/s
Dec 05 02:12:27 compute-0 podman[449860]: 2025-12-05 02:12:27.692315747 +0000 UTC m=+0.095988127 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:12:27 compute-0 podman[449859]: 2025-12-05 02:12:27.715628622 +0000 UTC m=+0.119122767 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:12:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 36 KiB/s wr, 5 op/s
Dec 05 02:12:29 compute-0 ceph-mon[192914]: pgmap v1904: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 36 KiB/s wr, 5 op/s
Dec 05 02:12:29 compute-0 podman[158197]: time="2025-12-05T02:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Dec 05 02:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9111 "" "Go-http-client/1.1"
Dec 05 02:12:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.430 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.431 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.468 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.588 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.589 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.606 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.607 349552 INFO nova.compute.claims [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.774 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:12:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048453107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.266 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
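The ceph df subprocess above is how nova's RBD image backend sizes its DISK_GB inventory. A minimal sketch of the same query; the JSON key names are assumptions based on stock ceph df output, not read from this log:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    # "stats" / "pools" field names assumed from standard ceph df JSON
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["max_avail"])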
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.278 349552 DEBUG nova.compute.provider_tree [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.304 349552 DEBUG nova.scheduler.client.report [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
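The inventory dict logged above is what the resource tracker reports to Placement. Schedulable capacity per resource class follows (total - reserved) * allocation_ratio, so, assuming Placement's standard capacity formula, the numbers in this log work out as below:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # capacity Placement will schedule against for this resource class
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2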
Dec 05 02:12:31 compute-0 ceph-mon[192914]: pgmap v1905: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 05 02:12:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4048453107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.359 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
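The Acquiring/acquired/released triplets around "compute_resources" are emitted by oslo.concurrency's synchronized decorator, which also reports the waited/held durations (0.769 s held here). A minimal sketch of the pattern, with a hypothetical function name standing in for instance_claim:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def claim_resources():   # hypothetical stand-in for ResourceTracker.instance_claim
        pass                 # critical section runs under the named in-process lock

    claim_resources()        # produces the same acquire/release DEBUG lines as above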
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.360 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.426 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.427 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.453 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.479 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.570 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.572 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.573 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating image(s)
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.621 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.671 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.723 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.733 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.767 349552 DEBUG nova.policy [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6aaead05b2404fec8f687504ed800a2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.825 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
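The prlimit wrapper above caps qemu-img at 1073741824 bytes (1 GiB) of address space and 30 s of CPU before probing the cached base image, so a malformed image cannot hang or exhaust the host. A minimal equivalent reusing the exact command from the log; the JSON keys read back are assumptions based on standard qemu-img info output:

    import json, subprocess

    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",   # 1 GiB address space, 30 s CPU
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3",
           "--force-share", "--output=json"]
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info["format"], info["virtual-size"])  # keys assumed from stock qemu-img JSON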
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.826 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.827 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.827 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.879 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.889 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 e184a71d-1d91-4999-bb53-73c2caa1110a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.112 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.207 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 273 KiB/s wr, 4 op/s
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.307 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 e184a71d-1d91-4999-bb53-73c2caa1110a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.463 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] resizing rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
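The import above pushes the cached base image into the vms pool, and the follow-up resize grows the new disk to 1073741824 bytes, i.e. exactly 1 GiB (the flavor's root disk). nova does the resize through librbd; a rough CLI equivalent of the two steps, with the resize unit (rbd's --size defaults to MiB) worth double-checking on your ceph version:

    import subprocess

    base = "/var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3"
    disk = "e184a71d-1d91-4999-bb53-73c2caa1110a_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # step 1: import the flat base image as a format-2 RBD image (as logged above)
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *common], check=True)
    # step 2: grow it to the flavor root disk; 1073741824 bytes == 1 GiB == 1024 MiB
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1024", *common], check=True)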
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.537 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.538 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.552 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.597 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Successfully created port: 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.619 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.620 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.630 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.630 349552 INFO nova.compute.claims [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:12:32 compute-0 podman[450071]: 2025-12-05 02:12:32.684394443 +0000 UTC m=+0.093440815 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:12:32 compute-0 podman[450072]: 2025-12-05 02:12:32.727254117 +0000 UTC m=+0.119920239 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.748 349552 DEBUG nova.objects.instance [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'migration_context' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.767 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.767 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Ensure instance console log exists: /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.768 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.768 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.769 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.837 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:12:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486296399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:33 compute-0 ceph-mon[192914]: pgmap v1906: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 273 KiB/s wr, 4 op/s
Dec 05 02:12:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/486296399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.374 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.385 349552 DEBUG nova.compute.provider_tree [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.412 349552 DEBUG nova.scheduler.client.report [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.453 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.454 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.519 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.519 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.544 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.577 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.691 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.692 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.693 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating image(s)
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.746 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.822 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.880 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.890 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.979 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.980 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.981 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.982 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.041 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.053 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 262 KiB/s wr, 4 op/s
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.467 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.633 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] resizing rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 05 02:12:34 compute-0 podman[450270]: 2025-12-05 02:12:34.708501901 +0000 UTC m=+0.125188127 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc.)
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.884 349552 DEBUG nova.objects.instance [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.910 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.910 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Ensure instance console log exists: /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.913 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.914 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.915 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.983 349552 DEBUG nova.policy [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69e134c969b04dc58a1d1556d8ecf4a8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '286d2d767009421bb0c889a0ff65b2a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:12:35 compute-0 ceph-mon[192914]: pgmap v1907: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 262 KiB/s wr, 4 op/s
Dec 05 02:12:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 277 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 956 KiB/s wr, 9 op/s
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.763 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Successfully updated port: 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.781 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.781 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.782 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.894 349552 DEBUG nova.compute.manager [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.895 349552 DEBUG nova.compute.manager [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing instance network info cache due to event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.896 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.981 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Successfully created port: d5201944-8184-405e-ae5f-b743e1bd7399 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.987 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:12:37 compute-0 nova_compute[349548]: 2025-12-05 02:12:37.123 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:37 compute-0 nova_compute[349548]: 2025-12-05 02:12:37.214 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:37 compute-0 ceph-mon[192914]: pgmap v1908: 321 pgs: 321 active+clean; 277 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 956 KiB/s wr, 9 op/s
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:12:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.323 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.324 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.338 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.386 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.387 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.388 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.389 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.753 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.772 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.773 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance network_info: |[{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.773 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.774 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.777 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start _get_guest_xml network_info=[{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.787 349552 WARNING nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.800 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.801 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.807 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.808 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.809 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.809 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.810 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.812 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.813 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.813 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.814 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.814 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.815 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.819 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.897 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Successfully updated port: d5201944-8184-405e-ae5f-b743e1bd7399 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.912 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.913 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.913 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.009 349552 DEBUG nova.compute.manager [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.010 349552 DEBUG nova.compute.manager [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing instance network info cache due to event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.010 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.050 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Fri, 05 Dec 2025 02:12:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 x-openstack-request-id: req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.050 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f", "name": "tempest-TestNetworkBasicOps-server-593464214", "status": "ACTIVE", "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "user_id": "2e61f46e24a240608d1523fb5265d3ac", "metadata": {}, "hostId": "10fe85d51a16ea11dc2b9c4c45121e1df0a1e83cc5f4e895a8b24c00", "image": {"id": "e9091bfb-b431-47c9-a284-79372046956b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e9091bfb-b431-47c9-a284-79372046956b"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:11:29Z", "updated": "2025-12-05T02:11:39Z", "addresses": {"tempest-network-smoke--2137061445": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ab:49:42"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-727356260", "OS-SRV-USG:launched_at": "2025-12-05T02:11:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-843142180"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.051 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f used request id req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.052 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'name': 'tempest-TestNetworkBasicOps-server-593464214', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e9091bfb-b431-47c9-a284-79372046956b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6aaead05b2404fec8f687504ed800a2b', 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'hostId': '10fe85d51a16ea11dc2b9c4c45121e1df0a1e83cc5f4e895a8b24c00', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.055 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 292fd084-0808-4a80-adc1-6ab1f28e188a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.055 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.112 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:12:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:12:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127959064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.368 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:39 compute-0 ceph-mon[192914]: pgmap v1909: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 05 02:12:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/127959064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.436 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.446 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:12:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502564395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.944 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.945 349552 DEBUG nova.virt.libvirt.vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.946 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.946 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.947 349552 DEBUG nova.objects.instance [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'pci_devices' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.963 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <uuid>e184a71d-1d91-4999-bb53-73c2caa1110a</uuid>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <name>instance-0000000d</name>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:name>tempest-TestNetworkBasicOps-server-246991198</nova:name>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:12:38</nova:creationTime>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:user uuid="2e61f46e24a240608d1523fb5265d3ac">tempest-TestNetworkBasicOps-576606253-project-member</nova:user>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:project uuid="6aaead05b2404fec8f687504ed800a2b">tempest-TestNetworkBasicOps-576606253</nova:project>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <nova:port uuid="94c7e2c9-6aeb-4be2-a022-8cd7ad27d978">
Dec 05 02:12:39 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <system>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="serial">e184a71d-1d91-4999-bb53-73c2caa1110a</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="uuid">e184a71d-1d91-4999-bb53-73c2caa1110a</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </system>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <os>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </os>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <features>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </features>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/e184a71d-1d91-4999-bb53-73c2caa1110a_disk">
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </source>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config">
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </source>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:12:39 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:de:22:fb"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <target dev="tap94c7e2c9-6a"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/console.log" append="off"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <video>
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </video>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:12:39 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:12:39 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:12:39 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:12:39 compute-0 nova_compute[349548]: </domain>
Dec 05 02:12:39 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.964 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Preparing to wait for external event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.964 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG nova.virt.libvirt.vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.966 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.966 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
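The nova_to_osvif_vif conversion above turns the Neutron-style VIF dict into an os-vif versioned object. A minimal sketch of building the equivalent object by hand with the os-vif object model, with field values copied from the Converted object line; the subnet and fixed-IP details are trimmed for brevity:

    # Hand-built equivalent of the VIFOpenVSwitch logged above (trimmed).
    from os_vif.objects import network, vif as vif_obj

    net = network.Network(
        id='580f50f3-cfd1-4167-ba29-a8edbd53ee0f',
        bridge='br-int',
        label='tempest-network-smoke--2137061445',
        mtu=1442)

    port_profile = vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='94c7e2c9-6aeb-4be2-a022-8cd7ad27d978')

    vif = vif_obj.VIFOpenVSwitch(
        id='94c7e2c9-6aeb-4be2-a022-8cd7ad27d978',
        address='fa:16:3e:de:22:fb',
        network=net,
        plugin='ovs',
        vif_name='tap94c7e2c9-6a',
        bridge_name='br-int',
        has_traffic_filtering=True,
        active=False,
        preserve_on_delete=False,
        port_profile=port_profile)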
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG os_vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.968 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94c7e2c9-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94c7e2c9-6a, col_values=(('external_ids', {'iface-id': '94c7e2c9-6aeb-4be2-a022-8cd7ad27d978', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:22:fb', 'vm-uuid': 'e184a71d-1d91-4999-bb53-73c2caa1110a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
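Taken together, the commands logged above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are the whole OVS side of the plug: idempotently ensure br-int exists, add the tap port, and stamp the Interface row with the iface-id and attached-mac external_ids that OVN uses to bind the logical port. A minimal sketch of the same sequence through ovsdbapp's Open_vSwitch schema API; the OVSDB socket path and timeout are assumptions for a default local install:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        'iface-id': '94c7e2c9-6aeb-4be2-a022-8cd7ad27d978',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:de:22:fb',
        'vm-uuid': 'e184a71d-1d91-4999-bb53-73c2caa1110a',
    }

    # One atomic transaction, mirroring txn n=1 idx=0/idx=1 in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap94c7e2c9-6a', may_exist=True))
        txn.add(api.db_set('Interface', 'tap94c7e2c9-6a',
                           ('external_ids', external_ids)))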
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:39 compute-0 NetworkManager[49092]: <info>  [1764900759.9750] manager: (tap94c7e2c9-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.977 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.991 349552 INFO os_vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a')
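From Nova's side, the whole stretch from "Plugging vif" to "Successfully plugged vif" is driven through the two-call public os-vif entry point. A minimal sketch, reusing the VIFOpenVSwitch object from the conversion sketch above; the InstanceInfo values come from the Instance(...) dump at 02:12:39.965:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()  # discovers the 'ovs' plugin via stevedore entry points

    info = InstanceInfo(
        uuid='e184a71d-1d91-4999-bb53-73c2caa1110a',
        name='tempest-TestNetworkBasicOps-server-246991198')

    # 'vif' is the VIFOpenVSwitch object built in the earlier sketch.
    os_vif.plug(vif, info)    # drives the ovsdbapp transaction logged above
    os_vif.unplug(vif, info)  # the inverse call Nova makes on teardown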
Dec 05 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No VIF found with MAC fa:16:3e:de:22:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.046 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Using config drive
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.068 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Fri, 05 Dec 2025 02:12:39 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-48c8e93c-6a12-4464-9687-988b9aab96fa x-openstack-request-id: req-48c8e93c-6a12-4464-9687-988b9aab96fa _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.069 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "292fd084-0808-4a80-adc1-6ab1f28e188a", "name": "te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa", "status": "ACTIVE", "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "user_id": "99591ed8361e41579fee1d14f16bf0f7", "metadata": {"metering.server_group": "92ca195d-98d1-443c-9947-dcb7ca7b926a"}, "hostId": "1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18", "image": {"id": "773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:11:15Z", "updated": "2025-12-05T02:11:30Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.151", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cf:10:bc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/292fd084-0808-4a80-adc1-6ab1f28e188a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T02:11:30.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.069 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a used request id req-48c8e93c-6a12-4464-9687-988b9aab96fa request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
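The RESP / RESP BODY / GET trio above is ceilometer's compute discovery refreshing instance details over the Nova API with python-novaclient on a keystoneauth session. A minimal sketch of the same call; the Keystone endpoint and credentials are placeholders, only the server UUID comes from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Placeholder credentials and auth_url, not values from this log.
    auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('292fd084-0808-4a80-adc1-6ab1f28e188a')
    print(server.status, getattr(server, 'OS-EXT-STS:power_state'))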
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.072 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:12:40.073517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:12:40.076336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
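Every Polling / heartbeat / Finished block in this section is the manager driving one pollster through the same contract: run discovery, check coordination, record a heartbeat, then call get_samples(). A minimal sketch of a pollster under that contract; the class name and instance.uptime meter are hypothetical, while PollsterBase and Sample are ceilometer's real interfaces:

    from oslo_utils import timeutils

    from ceilometer.polling import plugin_base
    from ceilometer import sample


    class UptimePollster(plugin_base.PollsterBase):
        """Hypothetical pollster showing the interface the manager drives."""

        @property
        def default_discovery(self):
            # Same discovery method the disk/net pollsters above declare.
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            for instance in resources:  # NovaLikeServer objects from discovery
                yield sample.Sample(
                    name='instance.uptime',   # hypothetical meter name
                    type=sample.TYPE_GAUGE,
                    unit='s',
                    volume=0,                 # a real pollster measures this
                    user_id=instance.user_id,
                    project_id=instance.tenant_id,
                    resource_id=instance.id,
                    timestamp=timeutils.utcnow().isoformat(),
                    resource_metadata={})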
Dec 05 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.092 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
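The rbd_utils line is Nova probing Ceph for the instance's config-drive image before building it; "does not exist" is the expected first-boot result. A minimal sketch of that existence check with the rados/rbd Python bindings; the conffile path and the 'vms' pool name are assumptions for a typical Nova-on-Ceph layout:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # assumed Nova ephemeral pool
    try:
        with rbd.Image(ioctx, 'e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config'):
            exists = True
    except rbd.ImageNotFound:
        exists = False  # the branch this log line reports
    finally:
        ioctx.close()
        cluster.shutdown()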
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.102 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.103 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.127 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.127 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:12:40.130276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>]
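The ERROR above is deliberate back-pressure rather than a failure: the libvirt inspector exposes cumulative counters only, so a *.rate meter can never be served, and the pollster raises PollsterPermanentError so the manager blacklists those resources for this source instead of retrying every cycle. A minimal sketch of the pattern; the subclass is hypothetical, the exception and base class are ceilometer's:

    from ceilometer.polling import plugin_base


    class RateMeterPollster(plugin_base.PollsterBase):  # hypothetical
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # No precomputed rates are available from libvirt, so give these
            # resources up permanently for this source rather than erroring
            # again on every polling interval.
            raise plugin_base.PollsterPermanentError(resources)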
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:12:40.132595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:12:40.134368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.193 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.bytes volume: 31119872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.194 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
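The interleaved ceph-mgr line is the cluster heartbeat for the RBD backend these instances boot from: 321 PGs all active+clean, 60 GiB raw capacity. When mining these logs, the capacity figures can be pulled out with a small parser; a sketch follows, where the regex is an assumption fitted to this line's shape rather than a Ceph-defined format:

    import re

    PGMAP_RE = re.compile(
        r'pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: [^;]*; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    line = ('pgmap v1910: 321 pgs: 321 active+clean; 329 MiB data, '
            '431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, '
            '3.5 MiB/s wr, 54 op/s')
    m = PGMAP_RE.search(line)
    print(m.groupdict())
    # {'version': '1910', 'pgs': '321', 'data': '329 MiB', 'used': '431 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}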
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.262 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.262 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.latency volume: 3189139202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.latency volume: 134745289 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.265 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.265 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
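The read/write "latency" volumes in these samples are cumulative nanoseconds spent in I/O since boot, not per-request averages, which is why they run into the billions while the request counts below sit around 1100. They come from libvirt's per-device block stats; a minimal sketch of reading the raw counters with libvirt-python, where the 'vda' device name is an assumption and the domain UUID is from the log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('292fd084-0808-4a80-adc1-6ab1f28e188a')

    stats = dom.blockStatsFlags('vda', 0)  # assumed device name
    # Cumulative since boot; ceilometer samples these fields directly:
    print(stats['rd_bytes'], stats['rd_total_times'])  # bytes read / ns reading
    print(stats['wr_bytes'], stats['wr_total_times'])  # bytes written / ns writing
    conn.close()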
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:12:40.264367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.requests volume: 1143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.268 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.268 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:12:40.267268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:12:40.270139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.271 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.271 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.bytes volume: 72970240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72802304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:12:40.273217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.277 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:12:40.277369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.300 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.320 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
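The power.state volume of 1 for both instances maps to libvirt's VIR_DOMAIN_RUNNING; the pollster reports the first element of the domain state tuple. A minimal sketch with libvirt-python, using the domain UUID from the samples above:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('1fcee2c4-ccfc-4651-bc90-a606a4e46e0f')

    state, _reason = dom.state()
    print(state)                                # 1 for a running domain
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True -> power.state volume 1
    conn.close()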
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.latency volume: 11092676280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:12:40.321378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10839664673 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.requests volume: 289 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:12:40.323016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:12:40.324660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.327 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f / tap1e754fc7-10 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.327 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.330 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 292fd084-0808-4a80-adc1-6ab1f28e188a / tap706f9405-40 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.330 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
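network.incoming.packets is served from per-vNIC counters that the inspector differences against the previous poll; "No delta meter predecessor" marks the first reading after a tap appears, and the sampled volumes above (115 and 9) are the raw counters because there is nothing to subtract yet. A generic sketch of that bookkeeping, under the assumption that first readings are reported as-is; the function and cache are illustrative, not ceilometer's internals:

    _prev = {}  # (instance_uuid, device) -> last cumulative reading

    def delta(instance_uuid, device, cumulative):
        """Increment since the previous poll, or the raw reading when no
        predecessor exists (the 'No delta meter predecessor' case)."""
        key = (instance_uuid, device)
        last = _prev.get(key)
        _prev[key] = cumulative
        if last is None or cumulative < last:  # first poll, or counter reset
            return cumulative
        return cumulative - last

    uuid = '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f'
    print(delta(uuid, 'tap1e754fc7-10', 115))  # first poll -> 115 (as logged)
    print(delta(uuid, 'tap1e754fc7-10', 130))  # hypothetical next poll -> 15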
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:12:40.331535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:12:40.333008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:12:40.335025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.bytes volume: 16034 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:12:40.336437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:12:40.337970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:12:40.339546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>]
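The ERROR above is ceilometer's permanent-failure path: the libvirt inspector exposes no instantaneous *.rate data, so the pollster raises a permanent error and the manager blacklists those instances for this source instead of retrying every cycle. A toy sketch of that blacklist behaviour; the exception class here is a stand-in for ceilometer.polling.plugin_base.PollsterPermanentError, not its actual code.

    # Toy sketch of the permanent-error blacklist; names are stand-ins.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    blacklist = set()

    def poll(meter, resources, get_samples):
        pollable = [r for r in resources if (meter, r) not in blacklist]
        try:
            return get_samples(pollable)
        except PollsterPermanentError as err:
            # Matches the "Preventing pollster ... from polling ..." ERROR above.
            blacklist.update((meter, r) for r in err.resources)
            return []

    def rate_samples(resources):
        # The libvirt inspector has no instantaneous *.rate data to offer.
        raise PollsterPermanentError(resources)

    print(poll("network.outgoing.bytes.rate", ["server-1", "server-2"], rate_samples))
    print(poll("network.outgoing.bytes.rate", ["server-1", "server-2"], rate_samples))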
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:12:40.340628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/memory.usage volume: 42.78125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.5078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.bytes volume: 20202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:12:40.342117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:12:40.343542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:12:40.344996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/cpu volume: 32290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:12:40.346414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 66970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
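The cpu volumes above (e.g. 32290000000) are cumulative guest CPU time in nanoseconds, so a utilisation figure has to be derived from two successive samples. The arithmetic below is illustrative only; rate derivation typically happens downstream of the polling agent rather than in it.

    # Derive a CPU utilisation percentage from two cumulative-ns samples.
    def cpu_util(prev_ns, cur_ns, interval_s, vcpus=1):
        used_s = (cur_ns - prev_ns) / 1e9      # CPU seconds consumed in the window
        return 100.0 * used_s / (interval_s * vcpus)

    # e.g. 3 s of CPU consumed over a hypothetical 300 s polling interval, 1 vCPU:
    print(round(cpu_util(32_290_000_000, 35_290_000_000, 300), 2))  # -> 1.0 (%)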
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:12:40.347839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:12:40.349270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
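Throughout the batch that just finished, worker thread 14 records a heartbeat per pollster and thread 12 logs the persisted update; a supervisor can then flag any pollster whose last beat is older than the polling interval. A stdlib sketch of that bookkeeping, with hypothetical names; only the timestamp comparison is the point.

    # Sketch of the heartbeat bookkeeping interleaved through the log above.
    import datetime

    heartbeats = {}

    def update_status(pollster_name):
        now = datetime.datetime.now(datetime.timezone.utc)
        heartbeats[pollster_name] = now
        print(f"Updated heartbeat for {pollster_name} ({now.isoformat()})")

    def is_stale(pollster_name, interval_s=300, grace=2.0):
        last = heartbeats.get(pollster_name)
        if last is None:
            return True
        age = (datetime.datetime.now(datetime.timezone.utc) - last).total_seconds()
        return age > interval_s * grace   # stale: no beat within two intervals

    update_status("cpu")
    print(is_stale("cpu"))            # False: beat just recorded
    print(is_stale("memory.usage"))   # True: never recorded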
Dec 05 02:12:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/502564395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.238 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating config drive at /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.250 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mw69v60 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.330 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.353 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.354 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
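The network_info payload in the cache update above is a JSON list of VIF dictionaries. Pulling the MAC address and fixed IPs out of one takes only a few lines; the structure below is copied directly from the logged entry, trimmed to the fields used.

    # Extract MAC and fixed IPs from a nova network_info VIF entry
    # (structure copied from the instance_info_cache update above).
    network_info = [{
        "id": "706f9405-4061-481e-a252-9b14f4534a4e",
        "address": "fa:16:3e:cf:10:bc",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.151"}]}]},
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)
    # -> 706f9405-... fa:16:3e:cf:10:bc ['10.100.0.151']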
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.355 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.356 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.356 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.357 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.357 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.358 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
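The run of "Running periodic task ComputeManager._*" lines above comes from nova's periodic-task dispatcher (oslo_service.periodic_task). Below is a toy stand-in for that dispatch loop, including the reclaim_instance_interval short-circuit logged above; it is not the oslo implementation.

    # Toy periodic-task dispatcher mirroring the log lines above.
    class Manager:
        def _poll_rescued_instances(self):
            pass

        def _poll_unconfirmed_resizes(self):
            pass

        def _reclaim_queued_deletes(self):
            reclaim_instance_interval = 0          # the deployed config here
            if reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")

        periodic = [_poll_rescued_instances, _poll_unconfirmed_resizes,
                    _reclaim_queued_deletes]

    def run_periodic_tasks(mgr):
        for task in type(mgr).periodic:
            print(f"Running periodic task ComputeManager.{task.__name__}")
            task(mgr)

    run_periodic_tasks(Manager())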
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.387 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.388 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.388 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
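The Acquiring/Acquired/Released triple above is oslo.concurrency's named-lock helper reporting wait and hold times around the resource tracker's critical section. A minimal usage sketch, assuming oslo.concurrency is installed; the decorated function is a placeholder, not nova code.

    # Minimal oslo.concurrency named-lock usage (assumes the package is installed).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Only one thread in this process may hold "compute_resources" at a
        # time, which is why the tracker logs waited/held durations here.
        pass

    clean_compute_node_cache()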
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.389 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.389 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.424 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mw69v60" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:41 compute-0 ceph-mon[192914]: pgmap v1910: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.507 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.519 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.772 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updated VIF entry in instance network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.774 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.793 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.807 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.807 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deleting local config drive /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config because it was imported into RBD.
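The config-drive sequence above builds an ISO9660 image locally, imports it into the Ceph "vms" pool, and then deletes the local file. A condensed replay of those three steps using the exact commands from the log; paths and IDs are copied verbatim, and mkisofs and rbd are assumed to be on PATH with a working /etc/ceph/ceph.conf.

    # Replay of the logged config-drive flow: mkisofs -> rbd import -> unlink.
    import os
    import subprocess

    instance = "e184a71d-1d91-4999-bb53-73c2caa1110a"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp1mw69v60"],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(iso)   # "Deleting local config drive ... imported into RBD."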
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.812 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.834 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.834 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance network_info: |[{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.835 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.836 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.842 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start _get_guest_xml network_info=[{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
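In the _get_guest_xml entry above, disk_info maps each guest disk to a bus and device name: virtio disks enumerate as vdX while the SATA config drive becomes sda, a cdrom. A toy version of that assignment follows; it is not nova's blockinfo module.

    # Toy bus-aware device-name assignment matching the disk_info mapping above.
    def build_mapping(disks):
        counters = {"virtio": 0, "sata": 0}
        prefix = {"virtio": "vd", "sata": "sd"}
        mapping = {}
        for name, bus, dev_type in disks:
            dev = prefix[bus] + "abcdefghijklmnopqrstuvwxyz"[counters[bus]]
            counters[bus] += 1
            mapping[name] = {"bus": bus, "dev": dev, "type": dev_type}
        return mapping

    print(build_mapping([("root", "virtio", "disk"),
                         ("disk.config", "sata", "cdrom")]))
    # -> root on vda (virtio disk), disk.config on sda (sata cdrom)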
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.865 349552 WARNING nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.882 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.883 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.890 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.891 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.891 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.892 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.894 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.894 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.896 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.896 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
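The eleven nova.virt.hardware lines above trace the CPU topology search for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits and preferences all 0:0:0), any factorization of the vCPU count into sockets, cores and threads under the 65536 ceilings is a candidate, and 1:1:1 is the only one for a single vCPU. A minimal standalone sketch of that enumeration, illustrating the traced behaviour rather than reproducing Nova's actual _get_possible_cpu_topologies:

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate sockets*cores*threads factorizations of the vCPU count."""
        return [
            VirtCPUTopology(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    # For the single-vCPU flavor in this log the search yields one candidate,
    # matching "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(possible_topologies(1))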
Dec 05 02:12:41 compute-0 kernel: tap94c7e2c9-6a: entered promiscuous mode
Dec 05 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9041] manager: (tap94c7e2c9-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.906 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
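The "Running cmd (subprocess)" line is oslo.concurrency's processutils wrapper shelling out to the ceph CLI; Nova's RBD driver parses the JSON monmap from stdout to learn the monitor addresses that later appear as <host> entries in the guest XML. A hedged sketch of the same call, assuming only the documented processutils behaviour (returns a (stdout, stderr) pair, raises ProcessExecutionError on a non-zero exit):

    from oslo_concurrency import processutils

    # Same command as the log line: '--id openstack' selects the cephx user,
    # '--conf' points at the cluster configuration file.
    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')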
Dec 05 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00145|binding|INFO|Claiming lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for this chassis.
Dec 05 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00146|binding|INFO|94c7e2c9-6aeb-4be2-a022-8cd7ad27d978: Claiming fa:16:3e:de:22:fb 10.100.0.3
Dec 05 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.920 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.921 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f bound to our chassis
Dec 05 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.925 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.937 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00147|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 ovn-installed in OVS
Dec 05 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00148|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 up in Southbound
Dec 05 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.941 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.955 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8d65503e-b0b1-447c-aca8-0ceeba9d2f37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:41 compute-0 systemd-udevd[450488]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:12:41 compute-0 systemd-machined[138700]: New machine qemu-14-instance-0000000d.
Dec 05 02:12:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9827] device (tap94c7e2c9-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9833] device (tap94c7e2c9-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:12:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536725473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:41 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.006 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d73554d7-b3df-443f-b570-d68c3ff85df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.011 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0b729650-b5e8-4df7-9cd6-1e0801eec36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.014 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.038 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8bfd54-cf52-4024-9073-bca39f051281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.056 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[53a9fbcb-767f-4ede-8043-747c98822a89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450509, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.072 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d31aa3-e5c8-4bc4-80ab-79d0e97db135]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450513, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450513, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
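The two privsep replies above are netlink dumps for tap580f50f3-c1 taken inside the ovnmeta-580f50f3-... namespace: an RTM_NEWLINK message describing the veth end, then RTM_NEWADDR messages showing both the 169.254.169.254/32 metadata address and 10.100.0.2/28 plumbed on it. The attribute lists are pyroute2 message objects; a rough unprivileged equivalent of what the agent queries through privsep (namespace name copied from the 'target' field above) might look like:

    from pyroute2 import NetNS

    # Inspect the interface and its addresses inside the OVN metadata namespace.
    with NetNS('ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f') as ns:
        for link in ns.get_links():   # RTM_NEWLINK messages, as in the first reply
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():    # RTM_NEWADDR messages, as in the second
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])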
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.074 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.080 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.080 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.081 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.081 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.082 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
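These ovsdbapp transactions move the metadata tap from br-ex to br-int and stamp its external_ids with the Neutron iface-id; two of them commit as "Transaction caused no change" because the port is already in the desired state. A hedged sketch of the same three commands through ovsdbapp's Open_vSwitch schema API (the log issues each in its own single-command transaction; they are batched here for brevity, and the database socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap580f50f3-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap580f50f3-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap580f50f3-c0',
            ('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'})))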
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.121 349552 DEBUG nova.compute.manager [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.121 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.122 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.123 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.123 349552 DEBUG nova.compute.manager [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Processing event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
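The lockutils lines bracketing the event pop show Nova serializing access to its per-instance event table under the "<instance-uuid>-events" lock before processing network-vif-plugged. The guarded-helper pattern behind those three log lines is roughly the following (lock name taken from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    # synchronized() acquires the named lock around each call and emits the
    # "Acquiring lock" / "acquired" / "released" debug lines seen above.
    @lockutils.synchronized('e184a71d-1d91-4999-bb53-73c2caa1110a-events')
    def _pop_event():
        pass  # pop the pending network-vif-plugged event for this instance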
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.140 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.140 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.146 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.146 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.152 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.152 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.216 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 05 02:12:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:12:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241966120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.396 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.427 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.435 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2536725473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2241966120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.715 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7150173, e184a71d-1d91-4999-bb53-73c2caa1110a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.716 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Started (Lifecycle Event)
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.718 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.722 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.728 349552 INFO nova.virt.libvirt.driver [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance spawned successfully.
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.728 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.737 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.743 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
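The sync compares Nova's database view (power_state 0) with what libvirt reports (power_state 1). The integers come from Nova's power-state constants; a small lookup table, with values as defined in nova/compute/power_state.py, makes such lines readable:

    # Numeric power states used in "Synchronizing instance power state" lines.
    POWER_STATES = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    # "current DB power_state: 0, VM power_state: 1" therefore reads as
    # NOSTATE in the database versus RUNNING on the hypervisor.
    print(POWER_STATES[0], POWER_STATES[1])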
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.758 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.758 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.759 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.759 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.760 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.760 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.764 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.764 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7151551, e184a71d-1d91-4999-bb53-73c2caa1110a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.765 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Paused (Lifecycle Event)
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.790 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.855751037597656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.794 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.799 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7341359, e184a71d-1d91-4999-bb53-73c2caa1110a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.799 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Resumed (Lifecycle Event)
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.824 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.829 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.856 349552 INFO nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 11.29 seconds to spawn the instance on the hypervisor.
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.857 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.857 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:12:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:12:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214257080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.913 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.914 349552 DEBUG nova.virt.libvirt.vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.914 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.915 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.916 349552 DEBUG nova.objects.instance [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.917 349552 INFO nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 12.37 seconds to build instance.
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <uuid>117d1772-87cc-4a3d-bf07-3f9b49ac0c63</uuid>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <name>instance-0000000e</name>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:name>tempest-TestServerBasicOps-server-1301152906</nova:name>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:12:41</nova:creationTime>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:user uuid="69e134c969b04dc58a1d1556d8ecf4a8">tempest-TestServerBasicOps-1996691968-project-member</nova:user>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:project uuid="286d2d767009421bb0c889a0ff65b2a2">tempest-TestServerBasicOps-1996691968</nova:project>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <nova:port uuid="d5201944-8184-405e-ae5f-b743e1bd7399">
Dec 05 02:12:42 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <system>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="serial">117d1772-87cc-4a3d-bf07-3f9b49ac0c63</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="uuid">117d1772-87cc-4a3d-bf07-3f9b49ac0c63</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </system>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <os>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </os>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <features>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </features>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk">
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </source>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config">
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </source>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:12:42 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:8f:b5:d5"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <target dev="tapd5201944-81"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/console.log" append="off"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <video>
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </video>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:12:42 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:12:42 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:12:42 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:12:42 compute-0 nova_compute[349548]: </domain>
Dec 05 02:12:42 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Preparing to wait for external event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG nova.virt.libvirt.vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.934 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.934 349552 DEBUG os_vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.936 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5201944-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5201944-81, col_values=(('external_ids', {'iface-id': 'd5201944-8184-405e-ae5f-b743e1bd7399', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:b5:d5', 'vm-uuid': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 NetworkManager[49092]: <info>  [1764900762.9413] manager: (tapd5201944-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.941 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e184a71d-1d91-4999-bb53-73c2caa1110a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.944 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.948 349552 INFO os_vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81')
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] No VIF found with MAC fa:16:3e:8f:b5:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Using config drive
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.063 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.133 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.348 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updated VIF entry in instance network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.349 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.370 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:43 compute-0 ceph-mon[192914]: pgmap v1911: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 05 02:12:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4214257080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:12:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808277890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.668 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.693 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.721 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.744 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.745 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.942 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating config drive at /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config
Dec 05 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.954 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7f84plt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.088 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7f84plt" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.139 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.149 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.196 349552 DEBUG nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.199 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.199 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.200 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.200 349552 DEBUG nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.201 349552 WARNING nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state active and task_state None.
Dec 05 02:12:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 MiB/s wr, 55 op/s
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.415 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.416 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deleting local config drive /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config because it was imported into RBD.
Dec 05 02:12:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1808277890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:12:44 compute-0 kernel: tapd5201944-81: entered promiscuous mode
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.4885] manager: (tapd5201944-81): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec 05 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00149|binding|INFO|Claiming lport d5201944-8184-405e-ae5f-b743e1bd7399 for this chassis.
Dec 05 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00150|binding|INFO|d5201944-8184-405e-ae5f-b743e1bd7399: Claiming fa:16:3e:8f:b5:d5 10.100.0.12
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.489 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.496 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:b5:d5 10.100.0.12'], port_security=['fa:16:3e:8f:b5:d5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297ab129-d19a-4a0e-893c-731678c3b7a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '286d2d767009421bb0c889a0ff65b2a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cea42b97-22e3-42f2-b4a9-e60ab6e5a3f6 f4a2d83a-c7b3-4fde-b9ec-59d46e5208fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff1f531e-a659-4463-9351-3086ed6c2f8e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=d5201944-8184-405e-ae5f-b743e1bd7399) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.497 287122 INFO neutron.agent.ovn.metadata.agent [-] Port d5201944-8184-405e-ae5f-b743e1bd7399 in datapath 297ab129-d19a-4a0e-893c-731678c3b7a7 bound to our chassis
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.499 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 297ab129-d19a-4a0e-893c-731678c3b7a7
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.5017] device (tapd5201944-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.5022] device (tapd5201944-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.511 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20cab368-add5-41ab-9659-f98d638ae7fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.512 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap297ab129-d1 in ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 05 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00151|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 ovn-installed in OVS
Dec 05 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00152|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 up in Southbound
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.513 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap297ab129-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.515 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.513 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6e251ffd-8149-4c27-8bbb-363a24af2615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.517 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0a33e432-ecdd-43ef-9242-4ba5ccb80cc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.537 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[88ee01d2-a135-405b-a3de-9ae31794ba2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 systemd-machined[138700]: New machine qemu-15-instance-0000000e.
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.562 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2f82d351-2bc6-4d87-a6c4-3ae69b05f74a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.615 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[858c5f2e-2866-480f-89df-4795b0e25adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 podman[450703]: 2025-12-05 02:12:44.620635239 +0000 UTC m=+0.089304409 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.6311] manager: (tap297ab129-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.633 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0439ab32-36bd-4fc7-b4ac-1253a9f17391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 podman[450704]: 2025-12-05 02:12:44.662740842 +0000 UTC m=+0.130081005 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.683 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[aca4361f-f8ed-42fd-a731-715d8294dfde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 podman[450707]: 2025-12-05 02:12:44.689534504 +0000 UTC m=+0.136917786 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.688 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[6fea007a-aae2-4be7-b0a7-cb35a14a27e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.7168] device (tap297ab129-d0): carrier: link connected
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.727 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3091511f-47d6-4cb9-a3b1-548fb2131ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.755 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff7664e-85a6-4654-905e-18e7b4842c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297ab129-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685138, 'reachable_time': 17985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450816, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 podman[450706]: 2025-12-05 02:12:44.761610009 +0000 UTC m=+0.212774937 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.772 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b50c7bc4-7688-4702-977e-06aff0812db6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:dbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685138, 'tstamp': 685138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450817, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.796 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6aca4535-d07a-4444-9e9f-db6eb150bfcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297ab129-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685138, 'reachable_time': 17985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450821, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.838 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97fbf751-559e-4ebb-b373-ebac5e83d716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.905 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1eed7de1-fc18-4e13-bc70-2555fcbfa7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.906 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297ab129-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.907 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.907 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap297ab129-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.9101] manager: (tap297ab129-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec 05 02:12:44 compute-0 kernel: tap297ab129-d0: entered promiscuous mode
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.915 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.916 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap297ab129-d0, col_values=(('external_ids', {'iface-id': '9db11503-fcc0-46ec-ad9b-de48fe796de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00153|binding|INFO|Releasing lport 9db11503-fcc0-46ec-ad9b-de48fe796de4 from this chassis (sb_readonly=0)
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.919 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b00fe178-03de-4515-86ea-9a35669e0186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.921 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: global
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     log         /dev/log local0 debug
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     log-tag     haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     user        root
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     group       root
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     maxconn     1024
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     pidfile     /var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     daemon
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: defaults
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     log global
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     mode http
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     option httplog
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     option dontlognull
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     option http-server-close
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     option forwardfor
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     retries                 3
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     timeout http-request    30s
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     timeout connect         30s
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     timeout client          32s
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     timeout server          32s
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     timeout http-keep-alive 30s
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: listen listener
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     bind 169.254.169.254:80
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     server metadata /var/lib/neutron/metadata_proxy
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:     http-request add-header X-OVN-Network-ID 297ab129-d19a-4a0e-893c-731678c3b7a7
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 05 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.922 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'env', 'PROCESS_TAG=haproxy-297ab129-d19a-4a0e-893c-731678c3b7a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/297ab129-d19a-4a0e-893c-731678c3b7a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 05 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.025 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.024596, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.026 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Started (Lifecycle Event)
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.047 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.054 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.024677, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.054 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Paused (Lifecycle Event)
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.070 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.076 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.093 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.234 349552 DEBUG nova.compute.manager [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.235 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.235 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.236 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.236 349552 DEBUG nova.compute.manager [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Processing event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.239 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.244 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.2433593, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.244 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Resumed (Lifecycle Event)
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.246 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.260 349552 INFO nova.virt.libvirt.driver [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance spawned successfully.
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.261 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.264 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.271 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.285 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.286 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.287 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.287 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.288 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.289 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.294 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:12:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:12:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:12:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:12:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.347 349552 INFO nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 11.66 seconds to spawn the instance on the hypervisor.
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.347 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.406 349552 INFO nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 12.80 seconds to build instance.
Dec 05 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.422 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.433556401 +0000 UTC m=+0.071016436 container create 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:12:45 compute-0 systemd[1]: Started libpod-conmon-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope.
Dec 05 02:12:45 compute-0 ceph-mon[192914]: pgmap v1912: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 MiB/s wr, 55 op/s
Dec 05 02:12:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:12:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.398151316 +0000 UTC m=+0.035611371 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 05 02:12:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596f961f4772e7095f34b4530a56ee23aa3e77a9b26fe356092ec6991cf0ede7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 05 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.536168223 +0000 UTC m=+0.173628348 container init 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.545451964 +0000 UTC m=+0.182912029 container start 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:12:45 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : New worker (450910) forked
Dec 05 02:12:45 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : Loading success.
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 3.3 MiB/s wr, 72 op/s
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.457 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.458 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.460 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.220 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.326 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.327 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.329 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.330 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.331 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.331 349552 WARNING nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received unexpected event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with vm_state active and task_state None.
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.332 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.333 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing instance network info cache due to event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.333 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.334 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.335 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:12:47 compute-0 ceph-mon[192914]: pgmap v1913: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 3.3 MiB/s wr, 72 op/s
Dec 05 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.258 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updated VIF entry in instance network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.260 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.285 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.466 349552 DEBUG nova.compute.manager [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.467 349552 DEBUG nova.compute.manager [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing instance network info cache due to event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.468 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.469 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.470 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:12:49 compute-0 ceph-mon[192914]: pgmap v1914: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Dec 05 02:12:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec 05 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.346 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updated VIF entry in instance network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.349 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.370 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:12:51 compute-0 ceph-mon[192914]: pgmap v1915: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec 05 02:12:52 compute-0 nova_compute[349548]: 2025-12-05 02:12:52.223 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Dec 05 02:12:52 compute-0 nova_compute[349548]: 2025-12-05 02:12:52.945 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:53 compute-0 ceph-mon[192914]: pgmap v1916: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Dec 05 02:12:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 143 op/s
Dec 05 02:12:55 compute-0 ceph-mon[192914]: pgmap v1917: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 143 op/s
Dec 05 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.209 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.210 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.211 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:12:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec 05 02:12:57 compute-0 nova_compute[349548]: 2025-12-05 02:12:57.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:57 compute-0 ceph-mon[192914]: pgmap v1918: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec 05 02:12:57 compute-0 nova_compute[349548]: 2025-12-05 02:12:57.950 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:12:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:12:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 9.4 KiB/s wr, 127 op/s
Dec 05 02:12:58 compute-0 podman[450920]: 2025-12-05 02:12:58.672556787 +0000 UTC m=+0.083920578 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 02:12:58 compute-0 podman[450921]: 2025-12-05 02:12:58.683492424 +0000 UTC m=+0.092407457 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:12:59 compute-0 ceph-mon[192914]: pgmap v1919: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 9.4 KiB/s wr, 127 op/s
Dec 05 02:12:59 compute-0 podman[158197]: time="2025-12-05T02:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46279 "" "Go-http-client/1.1"
Dec 05 02:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9588 "" "Go-http-client/1.1"
Dec 05 02:13:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:13:01 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:13:01 compute-0 ceph-mon[192914]: pgmap v1920: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec 05 02:13:02 compute-0 nova_compute[349548]: 2025-12-05 02:13:02.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec 05 02:13:02 compute-0 nova_compute[349548]: 2025-12-05 02:13:02.954 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:03 compute-0 ceph-mon[192914]: pgmap v1921: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec 05 02:13:03 compute-0 podman[450961]: 2025-12-05 02:13:03.739013422 +0000 UTC m=+0.128972273 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 02:13:03 compute-0 podman[450960]: 2025-12-05 02:13:03.741861452 +0000 UTC m=+0.126469913 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 05 02:13:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 05 02:13:04 compute-0 ceph-mon[192914]: pgmap v1922: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 05 02:13:05 compute-0 podman[450997]: 2025-12-05 02:13:05.731593924 +0000 UTC m=+0.138530921 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Dec 05 02:13:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 05 02:13:07 compute-0 nova_compute[349548]: 2025-12-05 02:13:07.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:07 compute-0 ceph-mon[192914]: pgmap v1923: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 05 02:13:07 compute-0 nova_compute[349548]: 2025-12-05 02:13:07.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:13:09 compute-0 ceph-mon[192914]: pgmap v1924: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:13:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:13:11 compute-0 ceph-mon[192914]: pgmap v1925: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:13:12 compute-0 nova_compute[349548]: 2025-12-05 02:13:12.234 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:12 compute-0 nova_compute[349548]: 2025-12-05 02:13:12.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:13 compute-0 ceph-mon[192914]: pgmap v1926: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:14 compute-0 podman[451022]: 2025-12-05 02:13:14.824581666 +0000 UTC m=+0.093265400 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9)
Dec 05 02:13:14 compute-0 podman[451015]: 2025-12-05 02:13:14.833538348 +0000 UTC m=+0.139101388 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 02:13:14 compute-0 podman[451016]: 2025-12-05 02:13:14.860754172 +0000 UTC m=+0.137067600 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:13:14 compute-0 podman[451064]: 2025-12-05 02:13:14.935533602 +0000 UTC m=+0.105559375 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 05 02:13:15 compute-0 ceph-mon[192914]: pgmap v1927: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:13:16
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Dec 05 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
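
One complete balancer pass by the mgr's balancer module is captured above: mode upmap, a misplaced-data ceiling of 0.050000, a scan over the pool list, and 0 of at most 10 changes prepared, since all 321 PGs are already active+clean. A toy sketch of that gating logic, illustrative only and not the module's actual implementation:

    # Illustrative gate mirroring the log: mode=upmap, max misplaced 0.05,
    # at most 10 prepared changes per pass. Not the real balancer module.
    MAX_MISPLACED = 0.05
    MAX_CHANGES = 10

    def plan_upmaps(misplaced_ratio: float, candidate_moves: list) -> list:
        if misplaced_ratio >= MAX_MISPLACED:
            return []                    # too much data already in flight
        return candidate_moves[:MAX_CHANGES]

    # An already balanced tree yields no candidates at all.
    print(len(plan_upmaps(0.0, [])))     # 0, matching "prepared 0/10 changes"
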
Dec 05 02:13:17 compute-0 ceph-mgr[193209]: client.0 ms_handle_reset on v2:192.168.122.100:6800/858078637
Dec 05 02:13:17 compute-0 nova_compute[349548]: 2025-12-05 02:13:17.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:17 compute-0 ceph-mon[192914]: pgmap v1928: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:17 compute-0 nova_compute[349548]: 2025-12-05 02:13:17.963 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:18 compute-0 ovn_controller[89286]: 2025-12-05T02:13:18Z|00154|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 05 02:13:18 compute-0 sudo[451096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:18 compute-0 sudo[451096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:18 compute-0 sudo[451096]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:18 compute-0 sudo[451121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:13:18 compute-0 sudo[451121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:18 compute-0 sudo[451121]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:18 compute-0 sudo[451146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:18 compute-0 sudo[451146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:18 compute-0 sudo[451146]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:18 compute-0 sudo[451171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:13:18 compute-0 sudo[451171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:19 compute-0 ceph-mon[192914]: pgmap v1929: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:19 compute-0 sudo[451171]: pam_unix(sudo:session): session closed for user root
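
The sudo burst above is cephadm's per-host execution pattern as it appears in this log: a /bin/true probe to confirm passwordless sudo, "which python3" to locate an interpreter, a second probe, then the staged cephadm binary run with a 895-second timeout (gather-facts). A hypothetical driver sketch of the same four steps, with the command strings copied from the entries above; it assumes passwordless sudo like the ceph-admin user here:

    import subprocess

    # The four commands the ceph-admin user runs above, in order.
    CEPHADM = ("/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def run(cmd):
        return subprocess.run(["sudo"] + cmd, check=True,
                              capture_output=True, text=True).stdout

    run(["/bin/true"])                    # can we sudo at all?
    python3 = run(["/bin/which", "python3"]).strip()
    run(["/bin/true"])                    # probe again before the real call
    facts = run([python3, CEPHADM, "--timeout", "895", "gather-facts"])
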
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 03925c2f-fae8-49d3-bd1b-6fc7da0f616e does not exist
Dec 05 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 44f7f950-5b84-4446-8da9-7cb635512fb3 does not exist
Dec 05 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 78c41e78-d5a6-4b8b-972d-f029e888572d does not exist
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
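
Each orchestrator call above shows up twice: once as the mon's handle_command trace and once as the audit-channel dispatch record. The payloads are plain JSON objects keyed by a command "prefix"; for example, the destroyed-OSD tree query audited above:

    import json

    # The exact mon_command payload from the audit lines above: an "osd tree"
    # query restricted to OSDs in the "destroyed" state, JSON output.
    cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    print(json.dumps(cmd))
    # {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
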
Dec 05 02:13:19 compute-0 sudo[451227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:19 compute-0 sudo[451227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:19 compute-0 sudo[451227]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:19 compute-0 sudo[451252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:13:19 compute-0 sudo[451252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:19 compute-0 sudo[451252]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:19 compute-0 sudo[451277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:19 compute-0 sudo[451277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:19 compute-0 sudo[451277]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:19 compute-0 sudo[451302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:13:19 compute-0 sudo[451302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:20 compute-0 ovn_controller[89286]: 2025-12-05T02:13:20Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:22:fb 10.100.0.3
Dec 05 02:13:20 compute-0 ovn_controller[89286]: 2025-12-05T02:13:20Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:22:fb 10.100.0.3
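
The pinctrl messages show ovn-controller answering DHCP for overlay ports natively, one DHCPOFFER/DHCPACK pair per lease. A small sketch for lifting (MAC, IP) pairs out of these lines, assuming the format shown here:

    import re

    # One OFFER/ACK pair per lease, served locally by ovn-controller's
    # pinctrl thread (no external DHCP server involved).
    LEASE = re.compile(r"\|INFO\|(DHCPOFFER|DHCPACK) "
                       r"(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>[\d.]+)")

    line = ("2025-12-05T02:13:20Z|00022|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:de:22:fb 10.100.0.3")
    m = LEASE.search(line)
    print(m.group(1), m["mac"], m["ip"])  # DHCPACK fa:16:3e:de:22:fb 10.100.0.3
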
Dec 05 02:13:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.405778357 +0000 UTC m=+0.058434592 container create 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.382136913 +0000 UTC m=+0.034793168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:20 compute-0 systemd[1]: Started libpod-conmon-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope.
Dec 05 02:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.561673985 +0000 UTC m=+0.214330240 container init 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.569749942 +0000 UTC m=+0.222406177 container start 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.573739064 +0000 UTC m=+0.226395299 container attach 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:13:20 compute-0 unruffled_montalcini[451380]: 167 167
Dec 05 02:13:20 compute-0 systemd[1]: libpod-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope: Deactivated successfully.
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.58176326 +0000 UTC m=+0.234419495 container died 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a6732ff0b3c5c103ec064e5762d888a35adfd2dabe45e0a0f58cc3568c4726-merged.mount: Deactivated successfully.
Dec 05 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.627658029 +0000 UTC m=+0.280314264 container remove 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:13:20 compute-0 systemd[1]: libpod-conmon-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope: Deactivated successfully.
Dec 05 02:13:20 compute-0 podman[451403]: 2025-12-05 02:13:20.877189377 +0000 UTC m=+0.071664854 container create 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:13:20 compute-0 systemd[1]: Started libpod-conmon-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope.
Dec 05 02:13:20 compute-0 podman[451403]: 2025-12-05 02:13:20.852545625 +0000 UTC m=+0.047021182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
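
The kernel's "supports timestamps until 2038 (0x7fffffff)" messages are informational: these xfs filesystems lack the bigtime feature, so their inode timestamps are 32-bit signed seconds and cap out at the classic Y2038 limit. A quick check of that constant:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the largest 32-bit signed UNIX timestamp.
    limit = 0x7FFFFFFF
    print(hex(limit), datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff 2038-01-19 03:14:07+00:00
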
Dec 05 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.020037319 +0000 UTC m=+0.214512816 container init 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.033443356 +0000 UTC m=+0.227918843 container start 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.038472377 +0000 UTC m=+0.232947854 container attach 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:13:21 compute-0 ceph-mon[192914]: pgmap v1930: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec 05 02:13:21 compute-0 ovn_controller[89286]: 2025-12-05T02:13:21Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:b5:d5 10.100.0.12
Dec 05 02:13:21 compute-0 ovn_controller[89286]: 2025-12-05T02:13:21Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:b5:d5 10.100.0.12
Dec 05 02:13:22 compute-0 affectionate_driscoll[451418]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:13:22 compute-0 affectionate_driscoll[451418]: --> relative data size: 1.0
Dec 05 02:13:22 compute-0 affectionate_driscoll[451418]: --> All data devices are unavailable
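
This ceph-volume "lvm batch" run is cephadm re-applying the default_drive_group spec; because all three LVs already carry OSDs (each has ceph.osd_id in its lv_tags, as the "lvm list" output further below confirms), every candidate is rejected as unavailable and the run ends as an idempotent no-op rather than an error. A sketch of that availability filter, illustrative and keyed off the lv_tags format shown below:

    # Illustrative filter: an LV whose tags already carry a ceph.osd_id is
    # in use by an existing OSD and is skipped on re-runs of the spec.
    def available(lv_tags: str) -> bool:
        tags = dict(t.split("=", 1) for t in lv_tags.split(",") if "=" in t)
        return "ceph.osd_id" not in tags

    deployed = "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.osd_id=0,ceph.type=block"
    print(available(deployed))   # False -> "All data devices are unavailable"
    print(available(""))         # True  -> a fresh LV would be consumed
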
Dec 05 02:13:22 compute-0 nova_compute[349548]: 2025-12-05 02:13:22.240 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
Dec 05 02:13:22 compute-0 systemd[1]: libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Deactivated successfully.
Dec 05 02:13:22 compute-0 systemd[1]: libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Consumed 1.083s CPU time.
Dec 05 02:13:22 compute-0 conmon[451418]: conmon 0f8100fc86059c6a944e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope/container/memory.events
Dec 05 02:13:22 compute-0 podman[451403]: 2025-12-05 02:13:22.271676672 +0000 UTC m=+1.466152189 container died 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8-merged.mount: Deactivated successfully.
Dec 05 02:13:22 compute-0 podman[451403]: 2025-12-05 02:13:22.369408287 +0000 UTC m=+1.563883774 container remove 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:13:22 compute-0 systemd[1]: libpod-conmon-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Deactivated successfully.
Dec 05 02:13:22 compute-0 sudo[451302]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:22 compute-0 sudo[451458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:22 compute-0 sudo[451458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:22 compute-0 sudo[451458]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:22 compute-0 sudo[451483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:13:22 compute-0 sudo[451483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:22 compute-0 sudo[451483]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:22 compute-0 sudo[451508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:22 compute-0 sudo[451508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:22 compute-0 sudo[451508]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:22 compute-0 sudo[451533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:13:22 compute-0 sudo[451533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
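
The follow-up "lvm list --format json" (run in the busy_lichterman container further below) inventories the existing OSDs as JSON keyed by OSD id. A small reducer from that shape to an osd_id -> backing-device map, with the structure trimmed from the output below:

    import json

    # Trimmed from the busy_lichterman output below: osd id -> LV records.
    raw = """{
      "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
             "tags": {"ceph.osd_id": "0", "ceph.type": "block"}}],
      "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
             "tags": {"ceph.osd_id": "1", "ceph.type": "block"}}]
    }"""

    osd_map = {
        osd_id: [(lv["lv_path"], lv["devices"]) for lv in lvs]
        for osd_id, lvs in json.loads(raw).items()
    }
    print(osd_map["0"])   # [('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])]
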
Dec 05 02:13:22 compute-0 nova_compute[349548]: 2025-12-05 02:13:22.965 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.275806593 +0000 UTC m=+0.095361129 container create 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.242355434 +0000 UTC m=+0.061909950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:23 compute-0 systemd[1]: Started libpod-conmon-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope.
Dec 05 02:13:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.419229882 +0000 UTC m=+0.238784428 container init 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.431083794 +0000 UTC m=+0.250638320 container start 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:13:23 compute-0 recursing_dhawan[451610]: 167 167
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.437637199 +0000 UTC m=+0.257191745 container attach 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:23 compute-0 systemd[1]: libpod-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope: Deactivated successfully.
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.439653105 +0000 UTC m=+0.259207631 container died 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8695b3713b1a869fb291d2ee15900fcc5615ab58b3bdfa12d7da227712e3b058-merged.mount: Deactivated successfully.
Dec 05 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.499449625 +0000 UTC m=+0.319004131 container remove 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 02:13:23 compute-0 systemd[1]: libpod-conmon-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope: Deactivated successfully.
Dec 05 02:13:23 compute-0 ceph-mon[192914]: pgmap v1931: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
Dec 05 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.79624013 +0000 UTC m=+0.094656279 container create 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.750115415 +0000 UTC m=+0.048531604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:23 compute-0 systemd[1]: Started libpod-conmon-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope.
Dec 05 02:13:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.956306506 +0000 UTC m=+0.254722655 container init 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.989763706 +0000 UTC m=+0.288179845 container start 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.995763014 +0000 UTC m=+0.294179163 container attach 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:13:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
Dec 05 02:13:24 compute-0 busy_lichterman[451651]: {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     "0": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "devices": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "/dev/loop3"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             ],
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_name": "ceph_lv0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_size": "21470642176",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "name": "ceph_lv0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "tags": {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_name": "ceph",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.crush_device_class": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.encrypted": "0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_id": "0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.vdo": "0"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             },
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "vg_name": "ceph_vg0"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         }
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     ],
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     "1": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "devices": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "/dev/loop4"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             ],
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_name": "ceph_lv1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_size": "21470642176",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "name": "ceph_lv1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "tags": {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_name": "ceph",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.crush_device_class": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.encrypted": "0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_id": "1",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.vdo": "0"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             },
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "vg_name": "ceph_vg1"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         }
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     ],
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     "2": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "devices": [
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "/dev/loop5"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             ],
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_name": "ceph_lv2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_size": "21470642176",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "name": "ceph_lv2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "tags": {
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.cluster_name": "ceph",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.crush_device_class": "",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.encrypted": "0",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osd_id": "2",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:                 "ceph.vdo": "0"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             },
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "type": "block",
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:             "vg_name": "ceph_vg2"
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:         }
Dec 05 02:13:24 compute-0 busy_lichterman[451651]:     ]
Dec 05 02:13:24 compute-0 busy_lichterman[451651]: }
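
The JSON object that closes above is per-OSD LVM inventory in ceph-volume's lvm-list format: one top-level key per OSD id, each holding that OSD's logical volumes with their ceph.* tags. A minimal sketch for summarizing such a dump, assuming it has been saved to ceph_volume_lvm_list.json (a hypothetical path); the per-LV keys used here (devices, lv_path, type, tags) are exactly the ones visible in the records above:

    import json

    # Summarize a ceph-volume "lvm list --format json"-style dump:
    # the top-level keys are OSD ids, the values are lists of LV records.
    with open("ceph_volume_lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: type={lv['type']} lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

For the records above this prints, for example, osd.1: type=block lv=/dev/ceph_vg1/ceph_lv1 devices=/dev/loop4 osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8.
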
Dec 05 02:13:24 compute-0 systemd[1]: libpod-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope: Deactivated successfully.
Dec 05 02:13:24 compute-0 podman[451634]: 2025-12-05 02:13:24.78820137 +0000 UTC m=+1.086617519 container died 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723-merged.mount: Deactivated successfully.
Dec 05 02:13:24 compute-0 podman[451634]: 2025-12-05 02:13:24.891135701 +0000 UTC m=+1.189551820 container remove 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:24 compute-0 systemd[1]: libpod-conmon-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope: Deactivated successfully.
Dec 05 02:13:24 compute-0 sudo[451533]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:25 compute-0 sudo[451672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:25 compute-0 sudo[451672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:25 compute-0 sudo[451672]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:25 compute-0 sudo[451697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:13:25 compute-0 sudo[451697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:25 compute-0 sudo[451697]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:25 compute-0 sudo[451722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:25 compute-0 sudo[451722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:25 compute-0 sudo[451722]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:25 compute-0 sudo[451747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:13:25 compute-0 sudo[451747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:25 compute-0 ceph-mon[192914]: pgmap v1932: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
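
The capacity in this pgmap line is consistent with the LVM inventory above: three block LVs of 21470642176 bytes each give 3 × 21470642176 = 64411926528 bytes ≈ 60 GiB of raw space, matching both the reported 60 GiB total and the 64411926528 figure that recurs in the pg_autoscaler lines below.
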
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.098661366 +0000 UTC m=+0.090851433 container create a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 02:13:26 compute-0 systemd[1]: Started libpod-conmon-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope.
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.070782663 +0000 UTC m=+0.062972730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.243334929 +0000 UTC m=+0.235525026 container init a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 394 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 4.3 MiB/s wr, 119 op/s
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.26047109 +0000 UTC m=+0.252661157 container start a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.267761465 +0000 UTC m=+0.259951532 container attach a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:26 compute-0 naughty_jackson[451827]: 167 167
Dec 05 02:13:26 compute-0 systemd[1]: libpod-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope: Deactivated successfully.
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.280 349552 INFO nova.compute.manager [None req-5382d356-6ec0-439c-8c04-ddc894f8c060 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Get console output
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.283997761 +0000 UTC m=+0.276187828 container died a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.305 449857 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 05 02:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a427bb122b6ec0b466945da2bbdcc375f0f7688f8eaa5938775ae7f60a940c-merged.mount: Deactivated successfully.
Dec 05 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.360352756 +0000 UTC m=+0.352542823 container remove a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:13:26 compute-0 systemd[1]: libpod-conmon-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope: Deactivated successfully.
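
The naughty_jackson entries above are the complete footprint of a short-lived cephadm helper container: create, init, start, and attach in quick succession, a single line of output (167 167, consistent with a probe of the ceph uid and gid, which are 167 in upstream Ceph images), then died and remove as the libpod scopes deactivate. A small sketch, assuming lines holds journal text like the above, that groups these podman events by container ID to reconstruct each lifecycle:

    import re
    from collections import defaultdict

    # podman logs lifecycle events as "container <verb> <64-hex-id> (...)".
    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    def lifecycles(lines):
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                verb, cid = m.groups()
                events[cid[:12]].append(verb)
        return dict(events)

Fed the surrounding entries, this yields, e.g., {'a925d7c9b2fc': ['create', 'init', 'start', 'attach', 'died', 'remove']}; the "image pull" lines use a different message shape and are deliberately not matched.
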
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.603 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.605 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.606 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.607 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.607 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.611 349552 INFO nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Terminating instance
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.614 349552 DEBUG nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.643115696 +0000 UTC m=+0.083701341 container create 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.618432193 +0000 UTC m=+0.059017848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:13:26 compute-0 systemd[1]: Started libpod-conmon-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope.
Dec 05 02:13:26 compute-0 kernel: tap94c7e2c9-6a (unregistering): left promiscuous mode
Dec 05 02:13:26 compute-0 NetworkManager[49092]: <info>  [1764900806.7200] device (tap94c7e2c9-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00155|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=0)
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00156|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down in Southbound
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00157|binding|INFO|Removing iface tap94c7e2c9-6a ovn-installed in OVS
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.739 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.748 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.750 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.752 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:13:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.786 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[bca9f9f3-fbba-4302-bdfb-efdea66df223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.794293802 +0000 UTC m=+0.234879457 container init 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.809200391 +0000 UTC m=+0.249786036 container start 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:13:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 05 02:13:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 39.976s CPU time.
Dec 05 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.821442325 +0000 UTC m=+0.262027960 container attach 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:13:26 compute-0 systemd-machined[138700]: Machine qemu-14-instance-0000000d terminated.
Dec 05 02:13:26 compute-0 kernel: tap94c7e2c9-6a: entered promiscuous mode
Dec 05 02:13:26 compute-0 systemd-udevd[451875]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.842 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 kernel: tap94c7e2c9-6a (unregistering): left promiscuous mode
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00158|binding|INFO|Claiming lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for this chassis.
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00159|binding|INFO|94c7e2c9-6aeb-4be2-a022-8cd7ad27d978: Claiming fa:16:3e:de:22:fb 10.100.0.3
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.838 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9e966a-47f5-499b-8967-3ecd0cb0ae8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 NetworkManager[49092]: <info>  [1764900806.8495] manager: (tap94c7e2c9-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.851 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fc451d94-4cc3-4c85-8c4a-1a431702e659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.854 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00160|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 ovn-installed in OVS
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00161|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 up in Southbound
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00162|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=1)
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.881 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00163|if_status|INFO|Not setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down as sb is readonly
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.885 349552 INFO nova.virt.libvirt.driver [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance destroyed successfully.
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.885 349552 DEBUG nova.objects.instance [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'resources' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00164|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=0)
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00165|binding|INFO|Removing iface tap94c7e2c9-6a ovn-installed in OVS
Dec 05 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00166|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down in Southbound
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.893 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.901 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.901 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd898dc-70ab-442f-8923-86bd99e1b835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[10a07bfd-865f-4560-98dd-f197854f8f1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451886, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.937 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccfa7c6-34a9-45e5-9187-c1a7f78e957c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451887, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451887, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.940 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.942 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.943 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.946 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.947 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.948 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.949 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.950 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.952 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.954 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.956 349552 DEBUG nova.virt.libvirt.vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:12:42Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.957 349552 DEBUG nova.network.os_vif_util [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.957 349552 DEBUG nova.network.os_vif_util [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.958 349552 DEBUG os_vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.959 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.960 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94c7e2c9-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.963 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.966 349552 INFO os_vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a')
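
The unplug above is a single OVSDB transaction: DelPortCommand drops tap94c7e2c9-6a from br-int with if_exists=True, so an already-absent port is not an error. A hedged manual equivalent (an inference from the logged command, not something this log actually runs) is the ovs-vsctl CLI, driven here from Python for consistency with the other sketches:

    import subprocess

    # Rough equivalent of the DelPortCommand transaction logged above:
    # remove the instance tap from br-int, tolerating its absence.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap94c7e2c9-6a"],
        check=True,
    )
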
Dec 05 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.971 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cda3db4f-e978-418d-aea7-e51b8a114569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.008 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba0d404-73b6-4c35-8325-1bf79c35a165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.012 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[85e78643-05ec-47a6-90be-99222c696370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.045 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0c797541-791c-4d83-b4db-1e896d283940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.088 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0e81ae91-9e13-4ae7-befc-c1c0b01c3610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451912, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030310510882659374 of space, bias 1.0, pg target 0.9093153264797812 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
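The pg_autoscaler arithmetic above is reproducible: each logged `pg target` equals capacity_ratio × bias × 300, which is consistent with the three OSDs listed later in this log and the default `mon_target_pg_per_osd` of 100 (both assumptions about this deployment), before the autoscaler quantizes to a power of two. A sketch checking a few pools against the logged values:

```python
# Reproduce the raw pg targets from the pg_autoscaler lines above.
# Assumptions: 3 OSDs (see the OSD inventory JSON later in the log) and the
# default mon_target_pg_per_osd = 100; ratios/biases are copied from the log.
TARGET_PG_PER_OSD = 100
NUM_OSDS = 3

pools = {
    '.mgr':               (7.185749983720779e-06, 1.0),   # logged 0.0021557...
    'vms':                (0.0030310510882659374, 1.0),   # logged 0.9093153...
    'images':             (0.00125203744627857, 1.0),     # logged 0.3756112...
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),   # logged 0.0006104...
}

for name, (ratio, bias) in pools.items():
    raw = ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS
    print(f"{name}: raw pg target {raw:.6g}")
```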
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.120 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[484a84e5-6a07-4caf-b8bd-a3ba2c4d556e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451913, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451913, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
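The two RTM_NEWADDR records show the metadata address 169.254.169.254/32 and the subnet-local 10.100.0.2/28 both configured on tap580f50f3-c1 inside the ovnmeta namespace. One way to confirm that from the host, assuming the namespace name taken from the log (requires root):

```python
# Illustration only: list addresses inside the ovnmeta namespace seen above.
import subprocess

NETNS = 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f'  # from the log
out = subprocess.run(
    ['ip', 'netns', 'exec', NETNS, 'ip', '-o', 'addr', 'show'],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    print(line)  # expect 169.254.169.254/32 and 10.100.0.2/28 on the tap device
```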
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.138 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.143 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.144 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.145 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.146 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
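The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) map onto plain ovs-vsctl operations. A rough equivalent for illustration only; the agent itself drives ovsdb-server through the Python IDL rather than shelling out:

```python
# ovs-vsctl equivalents of the logged transaction: drop the tap port from
# br-ex if present, ensure it exists on br-int, then set its iface-id.
import subprocess

commands = [
    ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap580f50f3-c0'],
    ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap580f50f3-c0'],
    ['ovs-vsctl', 'set', 'Interface', 'tap580f50f3-c0',
     'external_ids:iface-id=29ff39a2-9491-44bb-a004-0de689e8aadc'],
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```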
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.148 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.150 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.178 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6d48f5-4265-4453-80f1-e4e03fd127dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.219 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4a2b53-3d7c-4b7f-9184-df944d8d8676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.222 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1c21ef35-6f32-462a-8898-6b5b39ba4991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.244 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.282 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[244d7f1f-efcc-475c-85d5-fcbd34094829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.304 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[422a8e4d-741e-4c69-b9c1-47bd8213b9b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451919, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.324 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f17566e9-c61c-45f6-a858-48b9f180cac3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451920, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451920, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.327 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.329 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.331 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.332 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.333 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.334 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.335 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:13:27 compute-0 ceph-mon[192914]: pgmap v1933: 321 pgs: 321 active+clean; 394 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 4.3 MiB/s wr, 119 op/s
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.670 349552 INFO nova.virt.libvirt.driver [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deleting instance files /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a_del
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.671 349552 INFO nova.virt.libvirt.driver [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deletion of /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a_del complete
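As the two libvirt driver lines above show, Nova removes the instance directory after renaming it with a `_del` suffix. A sketch of the removal step using shutil rather than Nova's own helper:

```python
# Remove the renamed instance directory, as logged above. shutil stands in
# for Nova's internal cleanup here; path is copied from the log.
import shutil

shutil.rmtree(
    '/var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a_del',
    ignore_errors=True,
)
```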
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.762 349552 INFO nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 1.15 seconds to destroy the instance on the hypervisor.
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.762 349552 DEBUG oslo.service.loopingcall [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.763 349552 DEBUG nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.763 349552 DEBUG nova.network.neutron [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:13:28 compute-0 kind_mclean[451865]: {
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_id": 0,
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "type": "bluestore"
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     },
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_id": 1,
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "type": "bluestore"
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     },
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_id": 2,
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:13:28 compute-0 kind_mclean[451865]:         "type": "bluestore"
Dec 05 02:13:28 compute-0 kind_mclean[451865]:     }
Dec 05 02:13:28 compute-0 kind_mclean[451865]: }
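The JSON block above, emitted by the short-lived ceph container, is a ceph-volume-style inventory keyed by OSD UUID. A sketch parsing an abridged copy of it into an osd_id → device table:

```python
# Parse the OSD inventory JSON printed above (abridged to one entry) into a
# mapping of osd_id -> backing logical volume.
import json

raw = '''
{
    "8c4de221-4fda-4bb1-b794-fc4329742186": {
        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore"
    }
}
'''

osds = json.loads(raw)
by_id = {entry['osd_id']: entry['device'] for entry in osds.values()}
print(by_id)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
```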
Dec 05 02:13:28 compute-0 systemd[1]: libpod-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Deactivated successfully.
Dec 05 02:13:28 compute-0 systemd[1]: libpod-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Consumed 1.182s CPU time.
Dec 05 02:13:28 compute-0 podman[451849]: 2025-12-05 02:13:28.063609292 +0000 UTC m=+1.504194957 container died 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6-merged.mount: Deactivated successfully.
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.122 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.122 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.124 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:13:28 compute-0 podman[451849]: 2025-12-05 02:13:28.153146477 +0000 UTC m=+1.593732122 container remove 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:13:28 compute-0 systemd[1]: libpod-conmon-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Deactivated successfully.
Dec 05 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:28 compute-0 sudo[451747]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:13:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:13:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3ef501c5-e39a-4098-b58c-687e4ec6ba54 does not exist
Dec 05 02:13:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8021e548-aa02-46ee-9e1f-0cc917e97c95 does not exist
Dec 05 02:13:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 05 02:13:28 compute-0 sudo[451963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:13:28 compute-0 sudo[451963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:28 compute-0 sudo[451963]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:28 compute-0 sudo[451988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:13:28 compute-0 sudo[451988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:13:28 compute-0 sudo[451988]: pam_unix(sudo:session): session closed for user root
Dec 05 02:13:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:29.041 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.042 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:29.045 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:13:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:13:29 compute-0 ceph-mon[192914]: pgmap v1934: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.551 349552 DEBUG nova.network.neutron [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.566 349552 INFO nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 1.80 seconds to deallocate network for instance.
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.613 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.614 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.676 349552 DEBUG nova.compute.manager [req-5800c343-bbb4-4ae0-9826-74d17d2571bf req-a315565c-f7c2-4689-800c-2ee671c8f35c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-deleted-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:29 compute-0 podman[452014]: 2025-12-05 02:13:29.722550905 +0000 UTC m=+0.116482022 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:13:29 compute-0 podman[452013]: 2025-12-05 02:13:29.737118394 +0000 UTC m=+0.130430194 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 02:13:29 compute-0 podman[158197]: time="2025-12-05T02:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46279 "" "Go-http-client/1.1"
Dec 05 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.768 349552 DEBUG oslo_concurrency.processutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9592 "" "Go-http-client/1.1"
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.198 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.199 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.200 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.201 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.202 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.202 349552 WARNING nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state deleted and task_state None.
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.203 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.204 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.205 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.205 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.208 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.209 349552 WARNING nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state deleted and task_state None.
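The lockutils lines above repeat a take-lock/pop-event/release pattern around a per-instance event table, which is why the late network-vif-plugged events for the already-deleted instance dispatch to nothing. A generic illustration of that pattern, not Nova's actual implementation:

```python
# Generic take-lock/pop/release pattern, as suggested by the lockutils lines
# above. If no waiter registered the event, pop returns None and the caller
# logs "No waiting events found dispatching ...".
import threading
from collections import defaultdict

_events = defaultdict(dict)
_events_lock = threading.Lock()

def pop_instance_event(instance_uuid, event_name):
    with _events_lock:          # "Acquiring lock ... by _pop_event"
        return _events[instance_uuid].pop(event_name, None)
```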
Dec 05 02:13:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 05 02:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394186467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.340 349552 DEBUG oslo_concurrency.processutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:13:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3394186467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
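nova-compute shells out to `ceph df` for pool capacity, as the CMD lines above show (the call returned 0 in 0.572s and the mon audit log records the dispatch). A sketch of the same probe; the JSON key names are assumptions based on recent Ceph releases and may differ:

```python
# Run the same capacity probe nova-compute logs above and read the cluster
# totals from the JSON. Key names under 'stats' are assumptions.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)['stats']
print(stats.get('total_bytes'), stats.get('total_avail_bytes'))
```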
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.364 349552 DEBUG nova.compute.provider_tree [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.395 349552 DEBUG nova.scheduler.client.report [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
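Given the inventory logged above, Placement derives usable capacity per resource class as (total − reserved) × allocation_ratio. Worked out with the logged numbers:

```python
# Effective capacity implied by the inventory in the log line above:
# capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity}")
# VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2
```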
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.422 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.465 349552 INFO nova.scheduler.client.report [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Deleted allocations for instance e184a71d-1d91-4999-bb53-73c2caa1110a
Dec 05 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.577 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:31 compute-0 ceph-mon[192914]: pgmap v1935: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:13:31 compute-0 openstack_network_exporter[366555]: 
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.570 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.571 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.574 349552 INFO nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Terminating instance
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.577 349552 DEBUG nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:13:31 compute-0 kernel: tap1e754fc7-10 (unregistering): left promiscuous mode
Dec 05 02:13:31 compute-0 NetworkManager[49092]: <info>  [1764900811.7093] device (tap1e754fc7-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00167|binding|INFO|Releasing lport 1e754fc7-106a-43d2-a675-79c30089904b from this chassis (sb_readonly=0)
Dec 05 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00168|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b down in Southbound
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.740 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00169|binding|INFO|Removing iface tap1e754fc7-10 ovn-installed in OVS
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.749 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:49:42 10.100.0.11'], port_security=['fa:16:3e:ab:49:42 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6637e5fa-33c5-4d8a-98b9-4b42baed7ff5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1e754fc7-106a-43d2-a675-79c30089904b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.750 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1e754fc7-106a-43d2-a675-79c30089904b in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis
Dec 05 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.751 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.752 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[51c663da-6329-4b21-a361-baa8ba8a4d13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.753 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f namespace which is not needed anymore
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.769 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 05 02:13:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 47.082s CPU time.
Dec 05 02:13:31 compute-0 systemd-machined[138700]: Machine qemu-13-instance-0000000c terminated.
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.840 349552 INFO nova.virt.libvirt.driver [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance destroyed successfully.
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.841 349552 DEBUG nova.objects.instance [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'resources' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.857 349552 DEBUG nova.virt.libvirt.vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:11:39Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": 
null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.858 349552 DEBUG nova.network.os_vif_util [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.858 349552 DEBUG nova.network.os_vif_util [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.859 349552 DEBUG os_vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.861 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.861 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e754fc7-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.865 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.868 349552 INFO os_vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10')
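The unplug above is a single ovsdbapp transaction (DelPortCommand) removing tap1e754fc7-10 from br-int. A minimal standalone sketch of the same operation via ovsdbapp's Open_vSwitch API; the OVSDB socket path and timeout below are assumptions, not values taken from this log:

    # Sketch only: reproduces the logged DelPortCommand(port=tap1e754fc7-10,
    # bridge=br-int, if_exists=True). Socket path and timeout are assumed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tap1e754fc7-10', bridge='br-int',
                 if_exists=True).execute(check_error=True)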
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : haproxy version is 2.8.14-c23fe91
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : path to executable is /usr/sbin/haproxy
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : Exiting Master process...
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : Exiting Master process...
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [ALERT]    (448668) : Current worker (448670) exited with code 143 (Terminated)
Dec 05 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : All workers exited. Exiting... (0)
Dec 05 02:13:32 compute-0 systemd[1]: libpod-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope: Deactivated successfully.
Dec 05 02:13:32 compute-0 podman[452121]: 2025-12-05 02:13:32.028603002 +0000 UTC m=+0.089139955 container died df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.048 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e-userdata-shm.mount: Deactivated successfully.
Dec 05 02:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1f4a9f0791655bc892fe852878dc488b3de35fa469b0c521274d81205f10f4-merged.mount: Deactivated successfully.
Dec 05 02:13:32 compute-0 podman[452121]: 2025-12-05 02:13:32.100367137 +0000 UTC m=+0.160904040 container cleanup df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 02:13:32 compute-0 systemd[1]: libpod-conmon-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope: Deactivated successfully.
Dec 05 02:13:32 compute-0 podman[452150]: 2025-12-05 02:13:32.238025914 +0000 UTC m=+0.096698487 container remove df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.245 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.255 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a1f677-19d7-48f6-a602-786eb9f54d21]: (4, ('Fri Dec  5 02:13:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f (df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e)\ndf4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e\nFri Dec  5 02:13:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f (df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e)\ndf4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
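The privsep reply above carries the wrapper's stdout: it stopped and then deleted the haproxy metadata-proxy container. A hedged sketch of the equivalent commands (the wrapper script itself is not in this log, so a plain stop-then-remove is an assumption):

    # Assumption: stop then remove, matching the wrapper output logged above.
    import subprocess
    name = 'neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)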
Dec 05 02:13:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.260 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f46495-5c3b-4565-9843-79003c856322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.265 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.267 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:32 compute-0 kernel: tap580f50f3-c0: left promiscuous mode
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.273 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c432d6c9-db46-4848-9dd7-dfc5e1dd2d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.287 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.301 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2f772dfc-d30d-4eb3-9765-3fa03c7fe89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.303 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[afadae2c-1dfe-4124-92be-82aae715ed28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.330 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d564711e-2cf1-4fe3-bd15-1fcb5a048f7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678547, 'reachable_time': 15221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452164, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.335 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.335 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[3b769e36-ebb6-48be-b954-347d64cee3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d580f50f3\x2dcfd1\x2d4167\x2dba29\x2da8edbd53ee0f.mount: Deactivated successfully.
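remove_netns at ip_lib.py:607 runs under privsep; the systemd line confirms the bind mount under /run/netns was torn down. A sketch of the same removal with pyroute2, which neutron's privileged ip_lib wraps (calling it directly like this is an illustration, not the agent's exact code path):

    # Sketch: delete the metadata namespace logged above (requires root).
    from pyroute2 import netns
    netns.remove('ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f')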
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.372 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
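The acquire/release triplet around pop_instance_event is oslo.concurrency's named-lock pattern, serializing event delivery per instance. A minimal sketch of that pattern with the lock name from the lines above (the body is hypothetical):

    # Sketch of the per-instance "-events" lock pattern seen above.
    from oslo_concurrency import lockutils

    with lockutils.lock('1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events'):
        pass  # pop/dispatch the waiting event here (hypothetical body)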
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.643 349552 INFO nova.virt.libvirt.driver [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deleting instance files /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_del
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.644 349552 INFO nova.virt.libvirt.driver [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deletion of /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_del complete
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.697 349552 INFO nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 1.12 seconds to destroy the instance on the hypervisor.
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.698 349552 DEBUG oslo.service.loopingcall [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.699 349552 DEBUG nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.700 349552 DEBUG nova.network.neutron [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:13:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:33 compute-0 ceph-mon[192914]: pgmap v1936: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.526 349552 DEBUG nova.network.neutron [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.546 349552 INFO nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 0.85 seconds to deallocate network for instance.
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.599 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.600 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.608 349552 DEBUG nova.compute.manager [req-19c7028d-b0da-40af-86da-817063863dda req-f62017fa-8218-4d06-9896-38605619e946 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-deleted-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.701 349552 DEBUG oslo_concurrency.processutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611683973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.200 349552 DEBUG oslo_concurrency.processutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
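Nova's RBD image backend sizes available storage by shelling out to ceph df, exactly as logged above. A sketch of the same probe and the JSON fields it relies on (field names are standard ceph df output, not values from this log):

    # Sketch: same command as the log line; parse cluster totals from JSON.
    import json, subprocess
    out = subprocess.check_output(['ceph', 'df', '--format=json',
                                   '--id', 'openstack',
                                   '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])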
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.212 349552 DEBUG nova.compute.provider_tree [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.234 349552 DEBUG nova.scheduler.client.report [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.258 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
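Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio. Worked out for the values reported in the inventory lines above:

    # Capacity implied by the inventory data logged by the report client.
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2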
Dec 05 02:13:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 145 KiB/s wr, 72 op/s
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.284 349552 INFO nova.scheduler.client.report [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Deleted allocations for instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.353 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2611683973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.445 349552 DEBUG nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.446 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.451 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.452 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.452 349552 DEBUG nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.453 349552 WARNING nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received unexpected event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with vm_state deleted and task_state None.
Dec 05 02:13:34 compute-0 podman[452188]: 2025-12-05 02:13:34.720423603 +0000 UTC m=+0.113983502 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 05 02:13:34 compute-0 podman[452187]: 2025-12-05 02:13:34.764873032 +0000 UTC m=+0.162274079 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 05 02:13:35 compute-0 ceph-mon[192914]: pgmap v1937: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 145 KiB/s wr, 72 op/s
Dec 05 02:13:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 268 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 146 KiB/s wr, 80 op/s
Dec 05 02:13:36 compute-0 podman[452222]: 2025-12-05 02:13:36.705981789 +0000 UTC m=+0.105198815 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:13:36 compute-0 ovn_controller[89286]: 2025-12-05T02:13:36Z|00170|binding|INFO|Releasing lport 9db11503-fcc0-46ec-ad9b-de48fe796de4 from this chassis (sb_readonly=0)
Dec 05 02:13:36 compute-0 ovn_controller[89286]: 2025-12-05T02:13:36Z|00171|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:13:36 compute-0 nova_compute[349548]: 2025-12-05 02:13:36.866 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:36 compute-0 nova_compute[349548]: 2025-12-05 02:13:36.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:37 compute-0 nova_compute[349548]: 2025-12-05 02:13:37.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:37 compute-0 ceph-mon[192914]: pgmap v1938: 321 pgs: 321 active+clean; 268 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 146 KiB/s wr, 80 op/s
Dec 05 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.070 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.109 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:13:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 61 op/s
Dec 05 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
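The run_periodic_tasks lines come from oslo.service's PeriodicTasks machinery; each decorated method runs on its own spacing. A self-contained sketch of that pattern (class name and spacing are illustrative, not nova's):

    # Sketch of the oslo_service periodic task pattern driving the lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Tasks(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10)
        def _tick(self, context):
            print('periodic tick')

    Tasks(cfg.CONF).run_periodic_tasks(context=None)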
Dec 05 02:13:39 compute-0 ceph-mon[192914]: pgmap v1939: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 61 op/s
Dec 05 02:13:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.106 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.107 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:13:41 compute-0 ceph-mon[192914]: pgmap v1940: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec 05 02:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237394633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.639 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.776 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.776 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.784 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.785 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.872 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.877 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900806.8760295, e184a71d-1d91-4999-bb53-73c2caa1110a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.878 349552 INFO nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Stopped (Lifecycle Event)
Dec 05 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.913 349552 DEBUG nova.compute.manager [None req-757ccffd-2fd5-4cbf-aa59-f855945d94b7 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.250 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.326 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.327 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3530MB free_disk=59.897274017333984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.328 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.422 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.446 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:13:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3237394633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.472 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.473 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.495 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.514 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.560 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:13:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:13:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661731154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.024 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.043 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.078 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.079 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:43 compute-0 ceph-mon[192914]: pgmap v1941: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec 05 02:13:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/661731154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:13:44 compute-0 nova_compute[349548]: 2025-12-05 02:13:44.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:44 compute-0 nova_compute[349548]: 2025-12-05 02:13:44.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 05 02:13:45 compute-0 nova_compute[349548]: 2025-12-05 02:13:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:13:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:13:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:13:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:13:45 compute-0 ceph-mon[192914]: pgmap v1942: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 05 02:13:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:13:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:13:45 compute-0 podman[452287]: 2025-12-05 02:13:45.701058222 +0000 UTC m=+0.113178860 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 05 02:13:45 compute-0 podman[452295]: 2025-12-05 02:13:45.707088221 +0000 UTC m=+0.093557279 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec 05 02:13:45 compute-0 podman[452288]: 2025-12-05 02:13:45.716354861 +0000 UTC m=+0.120315860 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:13:45 compute-0 podman[452289]: 2025-12-05 02:13:45.752710102 +0000 UTC m=+0.151287840 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 05 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 KiB/s wr, 21 op/s
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.833 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900811.8315182, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.834 349552 INFO nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Stopped (Lifecycle Event)
Dec 05 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.879 349552 DEBUG nova.compute.manager [None req-b0aa18d8-3ee7-42d7-88f9-508abec97c0f - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:13:47 compute-0 nova_compute[349548]: 2025-12-05 02:13:47.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:47 compute-0 nova_compute[349548]: 2025-12-05 02:13:47.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:47 compute-0 ceph-mon[192914]: pgmap v1943: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 KiB/s wr, 21 op/s
Dec 05 02:13:48 compute-0 nova_compute[349548]: 2025-12-05 02:13:48.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Dec 05 02:13:49 compute-0 ceph-mon[192914]: pgmap v1944: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Dec 05 02:13:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:13:51 compute-0 ceph-mon[192914]: pgmap v1945: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:13:51 compute-0 nova_compute[349548]: 2025-12-05 02:13:51.881 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:51 compute-0 nova_compute[349548]: 2025-12-05 02:13:51.998 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:52 compute-0 nova_compute[349548]: 2025-12-05 02:13:52.258 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:52.991 287490 DEBUG eventlet.wsgi.server [-] (287490) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:52.994 287490 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: Accept: */*
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: Connection: close
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: Content-Type: text/plain
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: Host: 169.254.169.254
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: User-Agent: curl/7.84.0
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: X-Forwarded-For: 10.100.0.12
Dec 05 02:13:52 compute-0 ovn_metadata_agent[287107]: X-Ovn-Network-Id: 297ab129-d19a-4a0e-893c-731678c3b7a7 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 05 02:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:53 compute-0 ceph-mon[192914]: pgmap v1946: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.224 287490 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 05 02:13:54 compute-0 haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7[450910]: 10.100.0.12:43802 [05/Dec/2025:02:13:52.990] listener listener/metadata 0/0/0/1234/1234 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.224 287490 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.2309959
Dec 05 02:13:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.367 287490 DEBUG eventlet.wsgi.server [-] (287490) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.368 287490 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: Accept: */*
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: Connection: close
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: Content-Length: 100
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: Content-Type: application/x-www-form-urlencoded
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: Host: 169.254.169.254
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: User-Agent: curl/7.84.0
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: X-Forwarded-For: 10.100.0.12
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: X-Ovn-Network-Id: 297ab129-d19a-4a0e-893c-731678c3b7a7
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.633 287490 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 05 02:13:54 compute-0 haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7[450910]: 10.100.0.12:43810 [05/Dec/2025:02:13:54.365] listener listener/metadata 0/0/0/267/267 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 05 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.633 287490 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2655857
Dec 05 02:13:55 compute-0 ceph-mon[192914]: pgmap v1947: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.211 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 2.2 KiB/s wr, 1 op/s
Dec 05 02:13:56 compute-0 nova_compute[349548]: 2025-12-05 02:13:56.886 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.137 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.138 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.142 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.142 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.143 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.145 349552 INFO nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Terminating instance
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.147 349552 DEBUG nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:13:57 compute-0 kernel: tapd5201944-81 (unregistering): left promiscuous mode
Dec 05 02:13:57 compute-0 NetworkManager[49092]: <info>  [1764900837.2704] device (tapd5201944-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.283 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00172|binding|INFO|Releasing lport d5201944-8184-405e-ae5f-b743e1bd7399 from this chassis (sb_readonly=0)
Dec 05 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00173|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 down in Southbound
Dec 05 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00174|binding|INFO|Removing iface tapd5201944-81 ovn-installed in OVS
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.287 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.297 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:b5:d5 10.100.0.12'], port_security=['fa:16:3e:8f:b5:d5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297ab129-d19a-4a0e-893c-731678c3b7a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '286d2d767009421bb0c889a0ff65b2a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cea42b97-22e3-42f2-b4a9-e60ab6e5a3f6 f4a2d83a-c7b3-4fde-b9ec-59d46e5208fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff1f531e-a659-4463-9351-3086ed6c2f8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=d5201944-8184-405e-ae5f-b743e1bd7399) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.299 287122 INFO neutron.agent.ovn.metadata.agent [-] Port d5201944-8184-405e-ae5f-b743e1bd7399 in datapath 297ab129-d19a-4a0e-893c-731678c3b7a7 unbound from our chassis
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.303 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 297ab129-d19a-4a0e-893c-731678c3b7a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.304 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20612916-8591-4250-b504-4719987d7f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.305 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 namespace which is not needed anymore
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec 05 02:13:57 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 44.580s CPU time.
Dec 05 02:13:57 compute-0 systemd-machined[138700]: Machine qemu-15-instance-0000000e terminated.
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.411 349552 INFO nova.virt.libvirt.driver [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance destroyed successfully.
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.420 349552 DEBUG nova.objects.instance [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'resources' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.437 349552 DEBUG nova.virt.libvirt.vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:12:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.437 349552 DEBUG nova.network.os_vif_util [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.439 349552 DEBUG nova.network.os_vif_util [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.440 349552 DEBUG os_vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.442 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5201944-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.444 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.450 349552 INFO os_vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81')
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : haproxy version is 2.8.14-c23fe91
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : path to executable is /usr/sbin/haproxy
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : Exiting Master process...
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : Exiting Master process...
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [ALERT]    (450908) : Current worker (450910) exited with code 143 (Terminated)
Dec 05 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : All workers exited. Exiting... (0)
Dec 05 02:13:57 compute-0 systemd[1]: libpod-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope: Deactivated successfully.
Dec 05 02:13:57 compute-0 podman[452400]: 2025-12-05 02:13:57.545543232 +0000 UTC m=+0.073903856 container died 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:13:57 compute-0 ceph-mon[192914]: pgmap v1948: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 2.2 KiB/s wr, 1 op/s
Dec 05 02:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad-userdata-shm.mount: Deactivated successfully.
Dec 05 02:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-596f961f4772e7095f34b4530a56ee23aa3e77a9b26fe356092ec6991cf0ede7-merged.mount: Deactivated successfully.
Dec 05 02:13:57 compute-0 podman[452400]: 2025-12-05 02:13:57.626339821 +0000 UTC m=+0.154700415 container cleanup 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 05 02:13:57 compute-0 systemd[1]: libpod-conmon-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope: Deactivated successfully.
Dec 05 02:13:57 compute-0 podman[452443]: 2025-12-05 02:13:57.771696764 +0000 UTC m=+0.097211291 container remove 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.789 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[eac2a280-d9a9-4800-9c65-02884a9691af]: (4, ('Fri Dec  5 02:13:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 (6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad)\n6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad\nFri Dec  5 02:13:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 (6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad)\n6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.793 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa0413b-4fb4-497b-8f52-687fa865c372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.795 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297ab129-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.797 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 kernel: tap297ab129-d0: left promiscuous mode
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.816 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fdae9b21-57c8-41e3-82b4-d43bd7ee25fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.821 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.831 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.831 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.839 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9de92e9c-51d4-4bd3-ac62-a9c6697c45e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.841 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a6185ec6-0209-421e-b15c-420e4c510dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.880 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[07db7734-901c-40f4-a8cc-9fb46ea36cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685127, 'reachable_time': 20794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452455, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d297ab129\x2dd19a\x2d4a0e\x2d893c\x2d731678c3b7a7.mount: Deactivated successfully.
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.887 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.888 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c32868-b449-4423-ad31-d082cc3669c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.190 349552 INFO nova.virt.libvirt.driver [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deleting instance files /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_del
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.193 349552 INFO nova.virt.libvirt.driver [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deletion of /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_del complete
Dec 05 02:13:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:13:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.349 349552 INFO nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 1.20 seconds to destroy the instance on the hypervisor.
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.350 349552 DEBUG oslo.service.loopingcall [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.351 349552 DEBUG nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.352 349552 DEBUG nova.network.neutron [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:13:59 compute-0 ceph-mon[192914]: pgmap v1949: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec 05 02:13:59 compute-0 podman[158197]: time="2025-12-05T02:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.030 349552 DEBUG nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.031 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.032 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 DEBUG nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 WARNING nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received unexpected event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with vm_state active and task_state deleting.
Dec 05 02:14:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.447 349552 DEBUG nova.network.neutron [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.474 349552 INFO nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 2.12 seconds to deallocate network for instance.
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.535 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.537 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.540 349552 DEBUG nova.compute.manager [req-ab023671-1978-47e4-824d-e21aa9dfae1f req-56fce45c-f861-4971-985a-25992e3432e8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-deleted-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.620 349552 DEBUG oslo_concurrency.processutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:14:00 compute-0 podman[452456]: 2025-12-05 02:14:00.703420963 +0000 UTC m=+0.121610366 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 02:14:00 compute-0 podman[452457]: 2025-12-05 02:14:00.735246637 +0000 UTC m=+0.134817997 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.086 349552 DEBUG oslo_concurrency.processutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.103 349552 DEBUG nova.compute.provider_tree [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.128 349552 DEBUG nova.scheduler.client.report [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.158 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.198 349552 INFO nova.scheduler.client.report [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Deleted allocations for instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63
Dec 05 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.285 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:14:01 compute-0 ceph-mon[192914]: pgmap v1950: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec 05 02:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1653726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec 05 02:14:02 compute-0 nova_compute[349548]: 2025-12-05 02:14:02.314 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:02 compute-0 nova_compute[349548]: 2025-12-05 02:14:02.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:03 compute-0 ovn_controller[89286]: 2025-12-05T02:14:03Z|00175|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:14:03 compute-0 nova_compute[349548]: 2025-12-05 02:14:03.539 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:03 compute-0 ceph-mon[192914]: pgmap v1951: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec 05 02:14:03 compute-0 ovn_controller[89286]: 2025-12-05T02:14:03Z|00176|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec 05 02:14:03 compute-0 nova_compute[349548]: 2025-12-05 02:14:03.870 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec 05 02:14:05 compute-0 ceph-mon[192914]: pgmap v1952: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec 05 02:14:05 compute-0 podman[452520]: 2025-12-05 02:14:05.679817039 +0000 UTC m=+0.096977925 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 02:14:05 compute-0 podman[452521]: 2025-12-05 02:14:05.739253218 +0000 UTC m=+0.145866148 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:14:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec 05 02:14:06 compute-0 ceph-mon[192914]: pgmap v1953: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec 05 02:14:07 compute-0 nova_compute[349548]: 2025-12-05 02:14:07.316 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:07 compute-0 nova_compute[349548]: 2025-12-05 02:14:07.449 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:07 compute-0 podman[452559]: 2025-12-05 02:14:07.72825679 +0000 UTC m=+0.133404858 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 05 02:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 02:14:09 compute-0 ceph-mon[192914]: pgmap v1954: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec 05 02:14:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 02:14:11 compute-0 ceph-mon[192914]: pgmap v1955: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 02:14:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.404 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900837.402943, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.404 349552 INFO nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Stopped (Lifecycle Event)
Dec 05 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.430 349552 DEBUG nova.compute.manager [None req-2e75218f-4cf8-40fe-af15-84dd697ec005 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.452 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:13 compute-0 ceph-mon[192914]: pgmap v1956: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec 05 02:14:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:15 compute-0 ceph-mon[192914]: pgmap v1957: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:14:16
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'backups', 'volumes']
Dec 05 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:14:16 compute-0 podman[452582]: 2025-12-05 02:14:16.913202915 +0000 UTC m=+0.096538662 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 02:14:16 compute-0 podman[452579]: 2025-12-05 02:14:16.91586864 +0000 UTC m=+0.118719686 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:14:16 compute-0 podman[452580]: 2025-12-05 02:14:16.938506586 +0000 UTC m=+0.131839974 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:14:16 compute-0 podman[452581]: 2025-12-05 02:14:16.987298706 +0000 UTC m=+0.171793826 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 02:14:17 compute-0 nova_compute[349548]: 2025-12-05 02:14:17.323 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:17 compute-0 ceph-mon[192914]: pgmap v1958: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:17 compute-0 nova_compute[349548]: 2025-12-05 02:14:17.454 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:14:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:19 compute-0 ceph-mon[192914]: pgmap v1959: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:21 compute-0 ceph-mon[192914]: pgmap v1960: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:22 compute-0 nova_compute[349548]: 2025-12-05 02:14:22.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:22 compute-0 nova_compute[349548]: 2025-12-05 02:14:22.457 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9676 writes, 36K keys, 9676 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9676 writes, 2587 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2307 writes, 8716 keys, 2307 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
                                            Interval WAL: 2307 writes, 929 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:23 compute-0 ceph-mon[192914]: pgmap v1961: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:25 compute-0 ceph-mon[192914]: pgmap v1962: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007575561022660676 of space, bias 1.0, pg target 0.2272668306798203 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:14:27 compute-0 nova_compute[349548]: 2025-12-05 02:14:27.327 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:27 compute-0 nova_compute[349548]: 2025-12-05 02:14:27.459 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:27 compute-0 ceph-mon[192914]: pgmap v1963: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:28 compute-0 sudo[452661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:28 compute-0 sudo[452661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:28 compute-0 sudo[452661]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:28 compute-0 sudo[452686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:14:28 compute-0 sudo[452686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:28 compute-0 sudo[452686]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:28 compute-0 sudo[452711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:28 compute-0 sudo[452711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:28 compute-0 sudo[452711]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:28 compute-0 sudo[452736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:14:28 compute-0 sudo[452736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2982 syncs, 3.80 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2397 writes, 9102 keys, 2397 commit groups, 1.0 writes per commit group, ingest: 9.64 MB, 0.02 MB/s
                                            Interval WAL: 2397 writes, 959 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:14:29 compute-0 ceph-mon[192914]: pgmap v1964: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:29 compute-0 sudo[452736]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 966e5409-a857-48db-95ab-9b8542a433cc does not exist
Dec 05 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 60a2fb9f-1d55-44a5-90c5-049e59862260 does not exist
Dec 05 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a26da526-d659-4343-82aa-2e09f701fd87 does not exist
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:14:29 compute-0 podman[158197]: time="2025-12-05T02:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
Dec 05 02:14:29 compute-0 sudo[452791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:29 compute-0 sudo[452791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:29 compute-0 sudo[452791]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:29 compute-0 sudo[452816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:14:29 compute-0 sudo[452816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:29 compute-0 sudo[452816]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:29 compute-0 sudo[452841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:29 compute-0 sudo[452841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:29 compute-0 sudo[452841]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:30 compute-0 sudo[452866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:14:30 compute-0 sudo[452866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.664076058 +0000 UTC m=+0.089598667 container create 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.63245719 +0000 UTC m=+0.057979879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:30 compute-0 systemd[1]: Started libpod-conmon-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope.
Dec 05 02:14:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.83502665 +0000 UTC m=+0.260549329 container init 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.850499004 +0000 UTC m=+0.276021593 container start 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.856087891 +0000 UTC m=+0.281610540 container attach 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:14:30 compute-0 relaxed_goldberg[452947]: 167 167
Dec 05 02:14:30 compute-0 systemd[1]: libpod-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope: Deactivated successfully.
Dec 05 02:14:30 compute-0 conmon[452947]: conmon 53f901430caa289cf037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope/container/memory.events
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.862274144 +0000 UTC m=+0.287796743 container died 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ecb1c9fe731822801c747542dc82d7e1bf9597fb0f77f6d033ab7d90e651f0-merged.mount: Deactivated successfully.
Dec 05 02:14:30 compute-0 podman[452946]: 2025-12-05 02:14:30.917511575 +0000 UTC m=+0.145859966 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 05 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.923955206 +0000 UTC m=+0.349477775 container remove 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:14:30 compute-0 systemd[1]: libpod-conmon-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope: Deactivated successfully.
Dec 05 02:14:30 compute-0 podman[452955]: 2025-12-05 02:14:30.969535756 +0000 UTC m=+0.150286391 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
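[editor's note] The podman health_status records (ovn_metadata_agent and podman_exporter above, the ceilometer agents and kepler below) embed the whole container definition as a Python-literal dict in their config_data field (single quotes, bare True). A minimal sketch for recovering it from one record, assuming a balanced-brace scan suffices; the `line` value here is a shortened stand-in for a real record:

```python
#!/usr/bin/env python3
"""Sketch: extract the config_data dict from a podman health_status journal
line. The dict is a Python literal, so ast.literal_eval can parse it once the
balanced-brace span is located."""
import ast

# Shortened stand-in for a real health_status record.
line = ("... container health_status ... config_data={'image': 'quay.io/...',"
        " 'healthcheck': {'test': '/openstack/healthcheck', 'mount': '/x'}},"
        " tcib_managed=true ...")

start = line.index("config_data=") + len("config_data=")
depth, end = 0, start
for i, ch in enumerate(line[start:], start):
    depth += ch == "{"   # opening brace increases nesting
    depth -= ch == "}"   # closing brace decreases it
    if depth == 0:       # back at the top level: the dict literal ends here
        end = i + 1
        break

config = ast.literal_eval(line[start:end])
print(config["healthcheck"]["test"])  # -> the healthcheck command being reported on
```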
Dec 05 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.156138057 +0000 UTC m=+0.072827416 container create 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.127555405 +0000 UTC m=+0.044244804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:31 compute-0 systemd[1]: Started libpod-conmon-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope.
Dec 05 02:14:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.328808447 +0000 UTC m=+0.245497766 container init 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.345205467 +0000 UTC m=+0.261894796 container start 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.350506766 +0000 UTC m=+0.267196105 container attach 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:14:31 compute-0 ceph-mon[192914]: pgmap v1965: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:32 compute-0 nova_compute[349548]: 2025-12-05 02:14:32.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:32 compute-0 nova_compute[349548]: 2025-12-05 02:14:32.463 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:32 compute-0 interesting_newton[453021]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:14:32 compute-0 interesting_newton[453021]: --> relative data size: 1.0
Dec 05 02:14:32 compute-0 interesting_newton[453021]: --> All data devices are unavailable
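[editor's note] interesting_newton is the `lvm batch` run dispatched at 02:14:30. It reports all three LVs as unavailable because each already carries ceph.* lv_tags from an earlier prepare (visible in the `lvm list` output further down), so there is nothing left to create. A rough stand-in for that availability check, assuming a host `lvs` with JSON report support — not ceph-volume's actual code path:

```python
#!/usr/bin/env python3
"""Sketch: an LV whose lv_tags already contain ceph.osd_id is an existing OSD,
not a free device, which is roughly why `lvm batch` skips it. Assumes the
host's lvs supports --reportformat json."""
import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
    capture_output=True, check=True, text=True,
).stdout

for lv in json.loads(out)["report"][0]["lv"]:
    claimed = "ceph.osd_id=" in lv["lv_tags"]
    print(f"{lv['vg_name']}/{lv['lv_name']}: {'already an OSD' if claimed else 'free'}")
```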
Dec 05 02:14:32 compute-0 systemd[1]: libpod-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Deactivated successfully.
Dec 05 02:14:32 compute-0 systemd[1]: libpod-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Consumed 1.201s CPU time.
Dec 05 02:14:32 compute-0 podman[453006]: 2025-12-05 02:14:32.601471601 +0000 UTC m=+1.518160950 container died 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 05 02:14:32 compute-0 ceph-mon[192914]: pgmap v1966: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f-merged.mount: Deactivated successfully.
Dec 05 02:14:32 compute-0 podman[453006]: 2025-12-05 02:14:32.86135342 +0000 UTC m=+1.778042779 container remove 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:32 compute-0 systemd[1]: libpod-conmon-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Deactivated successfully.
Dec 05 02:14:32 compute-0 sudo[452866]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:33 compute-0 sudo[453061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:33 compute-0 sudo[453061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:33 compute-0 sudo[453061]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:33 compute-0 sudo[453086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:14:33 compute-0 sudo[453086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:33 compute-0 sudo[453086]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:33 compute-0 sudo[453111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:33 compute-0 sudo[453111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:33 compute-0 sudo[453111]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:33 compute-0 sudo[453136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:14:33 compute-0 sudo[453136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:33 compute-0 podman[453200]: 2025-12-05 02:14:33.931620679 +0000 UTC m=+0.070916042 container create 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:14:33 compute-0 systemd[1]: Started libpod-conmon-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope.
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:33.906769962 +0000 UTC m=+0.046065355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.134358084 +0000 UTC m=+0.273653517 container init 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.149871179 +0000 UTC m=+0.289166582 container start 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.162603377 +0000 UTC m=+0.301898780 container attach 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:14:34 compute-0 angry_booth[453216]: 167 167
Dec 05 02:14:34 compute-0 systemd[1]: libpod-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope: Deactivated successfully.
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.168322738 +0000 UTC m=+0.307618141 container died 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8eaa706de67091647c9322fb2bf65b68c19b92ba3933be50bfcc3483c6a0e6-merged.mount: Deactivated successfully.
Dec 05 02:14:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.333058774 +0000 UTC m=+0.472354167 container remove 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:14:34 compute-0 systemd[1]: libpod-conmon-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope: Deactivated successfully.
Dec 05 02:14:34 compute-0 podman[453240]: 2025-12-05 02:14:34.655369196 +0000 UTC m=+0.085359439 container create b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:14:34 compute-0 podman[453240]: 2025-12-05 02:14:34.619750275 +0000 UTC m=+0.049740588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:34 compute-0 systemd[1]: Started libpod-conmon-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope.
Dec 05 02:14:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.021673434 +0000 UTC m=+0.451663737 container init b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.043090165 +0000 UTC m=+0.473080428 container start b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.196532735 +0000 UTC m=+0.626522968 container attach b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:14:35 compute-0 ceph-mon[192914]: pgmap v1967: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s
                                            Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
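[editor's note] The WAL figures in the dump are internally consistent: "writes per sync" is simply writes divided by syncs. A quick check:

```python
# Consistency check on the WAL lines in the RocksDB dump above.
cumulative = 9064 / 2319   # -> 3.908..., logged as 3.91
interval   = 1653 / 687    # -> 2.406..., logged as 2.41
print(round(cumulative, 2), round(interval, 2))
```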
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]: {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     "0": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "devices": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "/dev/loop3"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             ],
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_name": "ceph_lv0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_size": "21470642176",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "name": "ceph_lv0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "tags": {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_name": "ceph",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.crush_device_class": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.encrypted": "0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_id": "0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.vdo": "0"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             },
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "vg_name": "ceph_vg0"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         }
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     ],
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     "1": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "devices": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "/dev/loop4"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             ],
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_name": "ceph_lv1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_size": "21470642176",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "name": "ceph_lv1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "tags": {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_name": "ceph",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.crush_device_class": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.encrypted": "0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_id": "1",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.vdo": "0"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             },
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "vg_name": "ceph_vg1"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         }
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     ],
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     "2": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "devices": [
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "/dev/loop5"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             ],
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_name": "ceph_lv2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_size": "21470642176",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "name": "ceph_lv2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "tags": {
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.cluster_name": "ceph",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.crush_device_class": "",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.encrypted": "0",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osd_id": "2",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:                 "ceph.vdo": "0"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             },
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "type": "block",
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:             "vg_name": "ceph_vg2"
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:         }
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]:     ]
Dec 05 02:14:35 compute-0 laughing_hamilton[453253]: }
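[editor's note] laughing_hamilton is the `ceph-volume lvm list --format json` call from 02:14:33: top-level keys are OSD ids, each holding the LV records with their tags. A minimal sketch reducing such a capture to an OSD-to-device map, with field names exactly as printed above; it reads the JSON from stdin:

```python
#!/usr/bin/env python3
"""Sketch: map OSD ids to their logical volumes and backing devices from
`ceph-volume lvm list --format json` output piped to stdin."""
import json
import sys

listing = json.load(sys.stdin)
for osd_id, records in sorted(listing.items(), key=lambda kv: int(kv[0])):
    for rec in records:
        devs = ",".join(rec["devices"])
        print(f"osd.{osd_id}: {rec['lv_path']} "
              f"(fsid {rec['tags']['ceph.osd_fsid']}) on {devs}")
```

Against this capture it prints osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5.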
Dec 05 02:14:35 compute-0 systemd[1]: libpod-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope: Deactivated successfully.
Dec 05 02:14:36 compute-0 podman[453264]: 2025-12-05 02:14:36.056633331 +0000 UTC m=+0.053412941 container died b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b-merged.mount: Deactivated successfully.
Dec 05 02:14:36 compute-0 podman[453265]: 2025-12-05 02:14:36.155302133 +0000 UTC m=+0.142019120 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 02:14:36 compute-0 podman[453263]: 2025-12-05 02:14:36.259209261 +0000 UTC m=+0.244104947 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec 05 02:14:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:36 compute-0 podman[453264]: 2025-12-05 02:14:36.511404544 +0000 UTC m=+0.508184154 container remove b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:14:36 compute-0 systemd[1]: libpod-conmon-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope: Deactivated successfully.
Dec 05 02:14:36 compute-0 sudo[453136]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:36 compute-0 sudo[453312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:36 compute-0 sudo[453312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:36 compute-0 sudo[453312]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:36 compute-0 sudo[453337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:14:36 compute-0 sudo[453337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:36 compute-0 sudo[453337]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:36 compute-0 sudo[453362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:36 compute-0 sudo[453362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:36 compute-0 sudo[453362]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:37 compute-0 sudo[453387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:14:37 compute-0 sudo[453387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:37 compute-0 nova_compute[349548]: 2025-12-05 02:14:37.332 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:37 compute-0 ceph-mon[192914]: pgmap v1968: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:37 compute-0 nova_compute[349548]: 2025-12-05 02:14:37.465 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.570349165 +0000 UTC m=+0.070469700 container create f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec 05 02:14:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.546779053 +0000 UTC m=+0.046899608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:37 compute-0 systemd[1]: Started libpod-conmon-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope.
Dec 05 02:14:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.970796132 +0000 UTC m=+0.470916687 container init f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.992580104 +0000 UTC m=+0.492700659 container start f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.999833928 +0000 UTC m=+0.499954503 container attach f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:14:38 compute-0 heuristic_varahamihira[453467]: 167 167
Dec 05 02:14:38 compute-0 systemd[1]: libpod-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope: Deactivated successfully.
Dec 05 02:14:38 compute-0 podman[453449]: 2025-12-05 02:14:38.006503245 +0000 UTC m=+0.506623810 container died f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c382bea35d4f0a8a9666aca8f6f7f160e2f4010d9521670bbf0b0d335fcc6ca4-merged.mount: Deactivated successfully.
Dec 05 02:14:38 compute-0 podman[453449]: 2025-12-05 02:14:38.087299024 +0000 UTC m=+0.587419569 container remove f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:14:38 compute-0 podman[453466]: 2025-12-05 02:14:38.090618077 +0000 UTC m=+0.273478781 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:14:38 compute-0 systemd[1]: libpod-conmon-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope: Deactivated successfully.
Dec 05 02:14:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.324 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.333 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:14:38.335139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:14:38.337367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.355359712 +0000 UTC m=+0.085069240 container create fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.361 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.361 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:14:38.364220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:14:38.367174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.31540723 +0000 UTC m=+0.045116818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.432 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.433 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.435 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.435 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 systemd[1]: Started libpod-conmon-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope.
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:14:38.434863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.438 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:14:38.437725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:14:38.441779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.442 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.443 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.445 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.445 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72839168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:14:38.445084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:14:38.447510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.487 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10935968399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.490 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.493 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.493 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:14:38.489597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:14:38.492564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:14:38.495475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:14:38.502333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 183440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:14:38.503688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:14:38.505314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:14:38.506505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:14:38.507813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:14:38.509469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:14:38.510831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:14:38.512226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:14:38.513418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:14:38.514673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:14:38.515754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:14:38.517040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
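[editor's note] Taken together, the ceilometer lines above trace one fixed cycle per meter: run discovery, check whether the pollster belongs to a coordinated source (every group name here is [None], so no hashring filtering applies), record a heartbeat, turn the libvirt stats into a sample, and log completion. A minimal sketch of that control flow, using hypothetical stand-in names (run_pollster, Sample, discover) rather than the real ceilometer internals:

    # Sketch of the per-meter cycle visible in the log above:
    # discover -> coordination check -> heartbeat -> sample -> finish.
    # All names here are hypothetical stand-ins, not the ceilometer API.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Sample:
        resource_id: str
        meter: str
        volume: float
        timestamp: datetime

    def run_pollster(meter, get_volume, discover, heartbeats):
        # "Executing discovery process for pollsters [...]"
        resources = discover()
        if not resources:
            # "Skip pollster <meter>, no new resources found this cycle"
            return []
        # "Checking if we need coordination ...": with no hashring
        # configured the agent simply polls every discovered resource.
        # "Pollster heartbeat update: <meter>"
        heartbeats[meter] = datetime.now(timezone.utc)
        # "<instance-uuid>/<meter> volume: <n>", one sample per resource,
        # then "Finished polling pollster <meter>".
        return [Sample(r, meter, get_volume(r), heartbeats[meter])
                for r in resources]

    # Example mirroring one cycle from the log:
    hb = {}
    samples = run_pollster(
        "network.incoming.packets",
        get_volume=lambda r: 9,
        discover=lambda: ["292fd084-0808-4a80-adc1-6ab1f28e188a"],
        heartbeats=hb,
    )

The interleaved "Updated heartbeat for ..." lines from worker 12 appear to be a status updater consuming the timestamps worker 14 writes in this loop, which is why their wall-clock times arrive slightly out of order.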
Dec 05 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.551722327 +0000 UTC m=+0.281431895 container init fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.573351895 +0000 UTC m=+0.303061393 container start fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.578486299 +0000 UTC m=+0.308195827 container attach fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:14:39 compute-0 ceph-mon[192914]: pgmap v1969: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:39 compute-0 serene_pare[453521]: {
Dec 05 02:14:39 compute-0 serene_pare[453521]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_id": 0,
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "type": "bluestore"
Dec 05 02:14:39 compute-0 serene_pare[453521]:     },
Dec 05 02:14:39 compute-0 serene_pare[453521]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_id": 1,
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "type": "bluestore"
Dec 05 02:14:39 compute-0 serene_pare[453521]:     },
Dec 05 02:14:39 compute-0 serene_pare[453521]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_id": 2,
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:14:39 compute-0 serene_pare[453521]:         "type": "bluestore"
Dec 05 02:14:39 compute-0 serene_pare[453521]:     }
Dec 05 02:14:39 compute-0 serene_pare[453521]: }
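[editor's note] The JSON block printed by the short-lived serene_pare container is a ceph-volume style inventory of this host's OSDs, keyed by OSD UUID. Captured to a file, it parses with the standard json module; the snippet below (payload trimmed to the first entry from the log) lists each OSD's id, backing device, and store type:

    import json

    # One entry copied from the container output above, trimmed for brevity.
    raw = """
    {
        "8c4de221-4fda-4bb1-b794-fc4329742186": {
            "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
            "type": "bluestore"
        }
    }
    """

    for uuid, info in json.loads(raw).items():
        # prints: osd.0 on /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
        print(f"osd.{info['osd_id']} on {info['device']} ({info['type']})")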
Dec 05 02:14:39 compute-0 systemd[1]: libpod-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Deactivated successfully.
Dec 05 02:14:39 compute-0 podman[453506]: 2025-12-05 02:14:39.718646181 +0000 UTC m=+1.448355709 container died fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:14:39 compute-0 systemd[1]: libpod-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Consumed 1.155s CPU time.
Dec 05 02:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b-merged.mount: Deactivated successfully.
Dec 05 02:14:39 compute-0 podman[453506]: 2025-12-05 02:14:39.815741968 +0000 UTC m=+1.545451476 container remove fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:14:39 compute-0 systemd[1]: libpod-conmon-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Deactivated successfully.
Dec 05 02:14:39 compute-0 sudo[453387]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6da5b51-4939-43e9-8c52-b201d938c612 does not exist
Dec 05 02:14:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b88f9982-44de-4f27-900c-504db4a23a7e does not exist
Dec 05 02:14:40 compute-0 sudo[453565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:14:40 compute-0 sudo[453565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:40 compute-0 sudo[453565]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:14:40 compute-0 sudo[453590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:14:40 compute-0 sudo[453590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:14:40 compute-0 sudo[453590]: pam_unix(sudo:session): session closed for user root
Dec 05 02:14:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:14:40 compute-0 ceph-mon[192914]: pgmap v1970: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.080 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.081 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.081 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.082 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:14:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.831 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
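[editor's note] The network_info payload Nova writes back into the instance cache is plain JSON. A few lines suffice to pull out the device name, MAC, and fixed IPs; the structure below is the log line above trimmed to just the fields the snippet touches:

    import json

    # Shape as logged above, reduced to the fields used here.
    network_info = json.loads("""
    [{"id": "706f9405-4061-481e-a252-9b14f4534a4e",
      "address": "fa:16:3e:cf:10:bc",
      "network": {"subnets": [{"cidr": "10.100.0.0/16",
                               "ips": [{"address": "10.100.0.151"}]}]},
      "devname": "tap706f9405-40"}]
    """)

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        # prints: tap706f9405-40 fa:16:3e:cf:10:bc ['10.100.0.151']
        print(vif["devname"], vif["address"], ips)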
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.854 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.854 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.855 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.855 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.856 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.856 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.118 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:14:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:43 compute-0 ceph-mon[192914]: pgmap v1971: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:14:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576510669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.616 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
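[editor's note] Note that the resource tracker obtains Ceph capacity by shelling out to the ceph CLI rather than via librados. The probe can be reproduced directly; the command line is copied from the log, while the "stats"/"total_avail_bytes" keys are the commonly documented ceph df JSON fields and should be verified against the running Ceph release:

    import json
    import subprocess

    # Reproduce the probe the resource tracker runs (command from the log).
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    )
    stats = json.loads(out)["stats"]
    # Key names assumed from the usual ceph df schema; check your version.
    free_gb = stats["total_avail_bytes"] / 1024 ** 3
    print(f"{free_gb:.2f} GiB available")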
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.722 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.234 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.235 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3729MB free_disk=59.94283676147461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.236 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.236 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:14:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.336 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.337 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.338 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.370 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:14:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 02:14:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/576510669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106377308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.946 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.956 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.983 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:14:45 compute-0 nova_compute[349548]: 2025-12-05 02:14:45.016 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:14:45 compute-0 nova_compute[349548]: 2025-12-05 02:14:45.016 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:14:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:14:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:14:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:14:45 compute-0 ceph-mon[192914]: pgmap v1972: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/106377308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:14:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:14:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:14:46 compute-0 nova_compute[349548]: 2025-12-05 02:14:46.017 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:46 compute-0 nova_compute[349548]: 2025-12-05 02:14:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:14:46 compute-0 ceph-mon[192914]: pgmap v1973: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.470 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:47 compute-0 podman[453662]: 2025-12-05 02:14:47.688100339 +0000 UTC m=+0.095012310 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 02:14:47 compute-0 podman[453663]: 2025-12-05 02:14:47.706097784 +0000 UTC m=+0.118528480 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:14:47 compute-0 podman[453664]: 2025-12-05 02:14:47.734242145 +0000 UTC m=+0.129900509 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 05 02:14:47 compute-0 podman[453669]: 2025-12-05 02:14:47.748574777 +0000 UTC m=+0.134026805 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 02:14:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:49 compute-0 ceph-mon[192914]: pgmap v1974: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:51 compute-0 ceph-mon[192914]: pgmap v1975: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:52 compute-0 nova_compute[349548]: 2025-12-05 02:14:52.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:52 compute-0 nova_compute[349548]: 2025-12-05 02:14:52.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:53 compute-0 ovn_controller[89286]: 2025-12-05T02:14:53Z|00177|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 05 02:14:53 compute-0 ceph-mon[192914]: pgmap v1976: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:55 compute-0 ceph-mon[192914]: pgmap v1977: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:14:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:57 compute-0 nova_compute[349548]: 2025-12-05 02:14:57.346 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:57 compute-0 nova_compute[349548]: 2025-12-05 02:14:57.478 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:14:57 compute-0 ceph-mon[192914]: pgmap v1978: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:14:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:59 compute-0 ceph-mon[192914]: pgmap v1979: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:14:59 compute-0 podman[158197]: time="2025-12-05T02:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
Dec 05 02:15:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:15:01 compute-0 ceph-mon[192914]: pgmap v1980: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:01 compute-0 podman[453746]: 2025-12-05 02:15:01.695341151 +0000 UTC m=+0.103153968 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:15:01 compute-0 podman[453747]: 2025-12-05 02:15:01.707677567 +0000 UTC m=+0.116659047 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:15:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:02 compute-0 nova_compute[349548]: 2025-12-05 02:15:02.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:02 compute-0 nova_compute[349548]: 2025-12-05 02:15:02.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:02 compute-0 ceph-mon[192914]: pgmap v1981: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:05 compute-0 ceph-mon[192914]: pgmap v1982: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:06 compute-0 podman[453787]: 2025-12-05 02:15:06.696056659 +0000 UTC m=+0.106038258 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:15:06 compute-0 podman[453788]: 2025-12-05 02:15:06.743222303 +0000 UTC m=+0.136539834 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.068 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.069 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.069 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.070 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.104 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.128 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.129 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Image id 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e yields fingerprint ce40e952b4771285622230948599d16442d55b06 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.130 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] image 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e at (/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06): checking
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.131 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] image 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e at (/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.135 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.136 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] 292fd084-0808-4a80-adc1-6ab1f28e188a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.137 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.138 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.139 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.139 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Active base files: /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.140 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Removable base files: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.141 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.142 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.143 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.143 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.144 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.145 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.145 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.353 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:07 compute-0 ceph-mon[192914]: pgmap v1983: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.485 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:08 compute-0 podman[453826]: 2025-12-05 02:15:08.710312461 +0000 UTC m=+0.110433453 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, version=9.4, release-0.7.12=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 02:15:09 compute-0 ceph-mon[192914]: pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:11 compute-0 ceph-mon[192914]: pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:12 compute-0 nova_compute[349548]: 2025-12-05 02:15:12.356 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:12 compute-0 nova_compute[349548]: 2025-12-05 02:15:12.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:13 compute-0 ceph-mon[192914]: pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:15 compute-0 ceph-mon[192914]: pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:15:16
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.control', '.mgr']
Dec 05 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:15:17 compute-0 nova_compute[349548]: 2025-12-05 02:15:17.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:17 compute-0 nova_compute[349548]: 2025-12-05 02:15:17.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:17 compute-0 ceph-mon[192914]: pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:18 compute-0 podman[453847]: 2025-12-05 02:15:18.694839162 +0000 UTC m=+0.091867441 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:15:18 compute-0 podman[453854]: 2025-12-05 02:15:18.712993972 +0000 UTC m=+0.099990859 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public)
Dec 05 02:15:18 compute-0 podman[453846]: 2025-12-05 02:15:18.723155268 +0000 UTC m=+0.137784951 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:15:18 compute-0 podman[453848]: 2025-12-05 02:15:18.763513771 +0000 UTC m=+0.155318763 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 02:15:18 compute-0 ceph-mon[192914]: pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:21 compute-0 ceph-mon[192914]: pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:22 compute-0 nova_compute[349548]: 2025-12-05 02:15:22.364 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:22 compute-0 nova_compute[349548]: 2025-12-05 02:15:22.495 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:23 compute-0 ceph-mon[192914]: pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:25 compute-0 ceph-mon[192914]: pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007575561022660676 of space, bias 1.0, pg target 0.2272668306798203 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
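The pg_autoscaler arithmetic above is a straight proportion: each pool's raw-space share times its bias times the cluster's PG budget. The logged numbers are consistent with a budget of 300, e.g. mon_target_pg_per_osd=100 across 3 OSDs (an assumption; the OSD count is not in these lines). The target is then quantized, and pg_num is typically left alone unless the target drifts far from the current value, which is why fractional targets still show 'current 32'. A worked check:

    # Assumed PG budget: mon_target_pg_per_osd (100) * 3 OSDs = 300.
    PG_BUDGET = 300

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    # Pool 'vms': reproduces "pg target 0.2272668306798203" above.
    print(pg_target(0.0007575561022660676, 1.0))
    # Pool 'cephfs.cephfs.meta' (bias 4.0): reproduces "pg target 0.0006104707950771635".
    print(pg_target(5.087256625643029e-07, 4.0))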
Dec 05 02:15:27 compute-0 nova_compute[349548]: 2025-12-05 02:15:27.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:27 compute-0 ceph-mon[192914]: pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:27 compute-0 nova_compute[349548]: 2025-12-05 02:15:27.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:29 compute-0 ceph-mon[192914]: pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:29 compute-0 podman[158197]: time="2025-12-05T02:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 05 02:15:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.654 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.655 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.679 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.802 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.803 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
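The Acquiring/acquired/released triplets around "compute_resources" and the instance UUID are oslo.concurrency's standard debug pattern; the 'inner' frames in these traces come from the synchronized decorator's wrapper. A minimal sketch of both forms, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Named, process-local lock; entering/leaving emits Acquiring/Acquired/
    # Releasing debug lines like the ones in this journal.
    with lockutils.lock("compute_resources"):
        pass  # critical section: claim resources, update tracker state

    # Decorator form, as used around _locked_do_build_and_run_instance above
    # (lock name shown here is the instance UUID from the log).
    @lockutils.synchronized("e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7")
    def _locked_do_build_and_run_instance():
        pass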
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.818 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.819 349552 INFO nova.compute.claims [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Claim successful on node compute-0.ctlplane.example.com
Dec 05 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.940 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:15:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:15:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236030647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:31 compute-0 ceph-mon[192914]: pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2236030647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.515 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
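Before the claim is finalized, nova shells out to ceph for cluster capacity; the mon audit lines above show the same 'df' arriving as a mon_command from client.openstack. A sketch of the call and the fields of interest, assuming the same client id and conf path (key names per stock 'ceph df --format=json' output):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)

    # Cluster-wide totals ("60 GiB / 60 GiB avail" in the pgmap lines).
    stats = df["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])

    # Per-pool usage, e.g. the 'vms' pool the disk is imported into below.
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])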
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.529 349552 DEBUG nova.compute.provider_tree [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.563 349552 DEBUG nova.scheduler.client.report [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
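The inventory line above is what the placement claim is checked against: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2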
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.592 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.594 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.655 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.656 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.685 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.705 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.792 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.794 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.795 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating image(s)
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.835 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.879 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.933 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.945 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.985 349552 DEBUG nova.policy [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.047 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
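The qemu-img probe above is wrapped in oslo_concurrency.prlimit so an untrusted or corrupt image cannot blow up the prober (address space capped at 1 GiB and CPU at 30 s via the --as/--cpu flags). The unwrapped equivalent, using the base-image path from the log:

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    # Standard qemu-img JSON keys: format and virtual size in bytes.
    print(info["format"], info["virtual-size"])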
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.048 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "ce40e952b4771285622230948599d16442d55b06" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.048 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.049 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.097 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.107 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.501 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.506 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:32.535 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:15:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:32.540 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.684 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Successfully created port: afc3cf6c-cbe3-4163-920e-7122f474d371 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 05 02:15:32 compute-0 podman[454067]: 2025-12-05 02:15:32.703446503 +0000 UTC m=+0.104761363 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:15:32 compute-0 podman[454064]: 2025-12-05 02:15:32.704944945 +0000 UTC m=+0.114195998 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.705 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] resizing rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
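Spawning from an image onto RBD is the two-step flow visible above: import the cached base file into the 'vms' pool as <uuid>_disk, then resize it to the flavor's root disk (root_gb=1, hence 1073741824 bytes). The same flow by hand, with paths and ids taken from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06"
    image = "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *ceph], check=True)
    # rbd sizes default to MiB: 1073741824 bytes == 1024 MiB.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                    image, *ceph], check=True)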
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.875 349552 DEBUG nova.objects.instance [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'migration_context' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.891 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.892 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Ensure instance console log exists: /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.892 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.893 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.893 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.404 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Successfully updated port: afc3cf6c-cbe3-4163-920e-7122f474d371 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.423 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.424 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.424 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 05 02:15:33 compute-0 ceph-mon[192914]: pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG nova.compute.manager [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-changed-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG nova.compute.manager [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Refreshing instance network info cache due to event network-changed-afc3cf6c-cbe3-4163-920e-7122f474d371. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.582 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 05 02:15:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:35 compute-0 ceph-mon[192914]: pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:15:35 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:35.544 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.327 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
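The network_info blob cached above is plain JSON and carries everything the guest wiring needs: MAC, fixed IP, MTU, OVS bridge and tap device name. Pulling those fields out (dict literal trimmed to the fields used; the full structure is as logged):

    vif = {
        "address": "fa:16:3e:69:80:52",
        "devname": "tapafc3cf6c-cb",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.2.8"}]}],
        },
    }
    print(vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["network"]["meta"]["mtu"],
          vif["devname"])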
Dec 05 02:15:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 166 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 524 KiB/s wr, 0 op/s
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.351 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.351 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance network_info: |[{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.352 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.352 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Refreshing network info cache for port afc3cf6c-cbe3-4163-920e-7122f474d371 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.357 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start _get_guest_xml network_info=[{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.372 349552 WARNING nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.380 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.381 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.386 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.387 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.388 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.388 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.389 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.389 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.390 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.390 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
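The topology walk above is a small constraint search: choose (sockets, cores, threads) whose product equals the flavor's vcpus, within the flavor/image limits (none set here, so the 65536 defaults apply). With vcpus=1 the only solution is 1:1:1, matching "Got 1 possible topologies". A compact, hypothetical reimplementation of the enumeration:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every factorization sockets*cores*threads == vcpus within the limits.
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] for the m1.nano flavor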
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.397 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:15:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856466857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.952 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.997 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.007 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.372 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 05 02:15:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213337067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.501 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.503 349552 DEBUG nova.virt.libvirt.vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
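The user_data in the instance record above is base64; decoded, it is the tempest workload this VM will run, a five-minute /dev/urandom CPU burn (consistent with the PrometheusGabbiTest project in system_metadata). Decoding it:

    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJh"
                 "bmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==")
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!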
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.504 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.505 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.508 349552 DEBUG nova.objects.instance [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'pci_devices' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:37 compute-0 ceph-mon[192914]: pgmap v1998: 321 pgs: 321 active+clean; 166 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 524 KiB/s wr, 0 op/s
Dec 05 02:15:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3856466857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:15:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2213337067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 05 02:15:37 compute-0 podman[454223]: 2025-12-05 02:15:37.715260713 +0000 UTC m=+0.117769159 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:15:37 compute-0 podman[454222]: 2025-12-05 02:15:37.726354674 +0000 UTC m=+0.141519595 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true)
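Both health_status=healthy records above come from podman's healthcheck timer running each container's configured /openstack/healthcheck test. The same check can be invoked on demand; a sketch assuming root access to podman on this host:

    import subprocess

    # Trigger the container-defined healthcheck once per agent; podman
    # exits 0 when the check passes and non-zero otherwise.
    for name in ("ceilometer_agent_ipmi", "ceilometer_agent_compute"):
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")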
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.208 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] End _get_guest_xml xml=<domain type="kvm">
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <uuid>e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</uuid>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <name>instance-0000000f</name>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <memory>131072</memory>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <vcpu>1</vcpu>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <metadata>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:name>te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo</nova:name>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:creationTime>2025-12-05 02:15:36</nova:creationTime>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:flavor name="m1.nano">
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:memory>128</nova:memory>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:disk>1</nova:disk>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:swap>0</nova:swap>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:ephemeral>0</nova:ephemeral>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:vcpus>1</nova:vcpus>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </nova:flavor>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:owner>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:user uuid="99591ed8361e41579fee1d14f16bf0f7">tempest-PrometheusGabbiTest-257639068-project-member</nova:user>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:project uuid="b01709a3378347e1a3f25eeb2b8b1bca">tempest-PrometheusGabbiTest-257639068</nova:project>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </nova:owner>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:root type="image" uuid="773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <nova:ports>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <nova:port uuid="afc3cf6c-cbe3-4163-920e-7122f474d371">
Dec 05 02:15:38 compute-0 nova_compute[349548]:           <nova:ip type="fixed" address="10.100.2.8" ipVersion="4"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:         </nova:port>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </nova:ports>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </nova:instance>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </metadata>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <sysinfo type="smbios">
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <system>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="manufacturer">RDO</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="product">OpenStack Compute</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="serial">e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="uuid">e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <entry name="family">Virtual Machine</entry>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </system>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </sysinfo>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <os>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <boot dev="hd"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <smbios mode="sysinfo"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </os>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <features>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <acpi/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <apic/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <vmcoreinfo/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </features>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <clock offset="utc">
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <timer name="pit" tickpolicy="delay"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <timer name="hpet" present="no"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </clock>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <cpu mode="host-model" match="exact">
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <topology sockets="1" cores="1" threads="1"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </cpu>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   <devices>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <disk type="network" device="disk">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk">
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </source>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <target dev="vda" bus="virtio"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <disk type="network" device="cdrom">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <driver type="raw" cache="none"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <source protocol="rbd" name="vms/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config">
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <host name="192.168.122.100" port="6789"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </source>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <auth username="openstack">
Dec 05 02:15:38 compute-0 nova_compute[349548]:         <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       </auth>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <target dev="sda" bus="sata"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </disk>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <interface type="ethernet">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <mac address="fa:16:3e:69:80:52"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <driver name="vhost" rx_queue_size="512"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <mtu size="1442"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <target dev="tapafc3cf6c-cb"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </interface>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <serial type="pty">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <log file="/var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/console.log" append="off"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </serial>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <video>
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <model type="virtio"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </video>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <input type="tablet" bus="usb"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <rng model="virtio">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <backend model="random">/dev/urandom</backend>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </rng>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="pci" model="pcie-root-port"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <controller type="usb" index="0"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     <memballoon model="virtio">
Dec 05 02:15:38 compute-0 nova_compute[349548]:       <stats period="10"/>
Dec 05 02:15:38 compute-0 nova_compute[349548]:     </memballoon>
Dec 05 02:15:38 compute-0 nova_compute[349548]:   </devices>
Dec 05 02:15:38 compute-0 nova_compute[349548]: </domain>
Dec 05 02:15:38 compute-0 nova_compute[349548]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
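The domain document that _get_guest_xml just emitted is ordinary libvirt XML, so its identity fields can be cross-checked with the standard library alone. Note that libvirt's <memory> element defaults to KiB, so the 131072 above is the m1.nano flavor's 128 MiB. A sketch over a trimmed copy of the logged document:

    import xml.etree.ElementTree as ET

    # Trimmed from the <domain> document logged above.
    xml = """<domain type="kvm">
      <uuid>e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</uuid>
      <name>instance-0000000f</name>
      <memory>131072</memory>
      <vcpu>1</vcpu>
    </domain>"""

    dom = ET.fromstring(xml)
    assert dom.findtext("name") == "instance-0000000f"
    # 131072 KiB == 128 MiB, matching <nova:memory>128</nova:memory>.
    assert int(dom.findtext("memory")) // 1024 == 128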
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.208 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Preparing to wait for external event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.209 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.209 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.210 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.211 349552 DEBUG nova.virt.libvirt.vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.211 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.212 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.213 349552 DEBUG os_vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.215 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.215 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.216 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.222 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafc3cf6c-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.222 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapafc3cf6c-cb, col_values=(('external_ids', {'iface-id': 'afc3cf6c-cbe3-4163-920e-7122f474d371', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:80:52', 'vm-uuid': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:38 compute-0 NetworkManager[49092]: <info>  [1764900938.2264] manager: (tapafc3cf6c-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.235 349552 INFO os_vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb')
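The plug that just succeeded was carried out by the ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand). nova drives these over the native OVSDB protocol, but the equivalent ovs-vsctl sequence is a useful mental model; a hedged sketch, run via subprocess with the values from the log:

    import subprocess

    bridge, port = "br-int", "tapafc3cf6c-cb"
    external_ids = {
        "iface-id": "afc3cf6c-cbe3-4163-920e-7122f474d371",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:69:80:52",
        "vm-uuid": "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7",
    }

    # AddBridgeCommand(may_exist=True) is ovs-vsctl's --may-exist add-br.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)

    # AddPortCommand + DbSetCommand fold into one vsctl transaction: add the
    # tap port, then stamp its Interface row with the external_ids keys.
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)

It is the iface-id key that lets ovn-controller match this OVS interface to the Neutron port and claim it a second later.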
Dec 05 02:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.308 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.309 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.309 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No VIF found with MAC fa:16:3e:69:80:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.310 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Using config drive
Dec 05 02:15:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.357 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.813 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating config drive at /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.825 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78s77pex execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.861 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated VIF entry in instance network info cache for port afc3cf6c-cbe3-4163-920e-7122f474d371. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.862 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.884 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.979 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78s77pex" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.033 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.046 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.307 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.309 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deleting local config drive /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config because it was imported into RBD.
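The lines just above show the complete config-drive round trip: pack the metadata tree into an ISO9660 image labelled config-2, import it into the vms RBD pool (hence the rbd-backed cdrom disk in the guest XML), then delete the local file. A condensed sketch of the same sequence; the staging directory name is hypothetical, nova uses a private tmpdir:

    import subprocess

    uuid = "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    staging = "/tmp/metadata"  # hypothetical stand-in for nova's tmpdir

    # Build the 'config-2' ISO from the staged metadata files (a flag
    # subset of the mkisofs invocation logged above).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-J", "-r",
                    "-V", "config-2", staging], check=True)

    # Import it as an RBD image so the guest reads it over the network;
    # the local copy can then be removed, exactly as nova logs next.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{uuid}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)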
Dec 05 02:15:39 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 05 02:15:39 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 05 02:15:39 compute-0 kernel: tapafc3cf6c-cb: entered promiscuous mode
Dec 05 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.4713] manager: (tapafc3cf6c-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.473 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00178|binding|INFO|Claiming lport afc3cf6c-cbe3-4163-920e-7122f474d371 for this chassis.
Dec 05 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00179|binding|INFO|afc3cf6c-cbe3-4163-920e-7122f474d371: Claiming fa:16:3e:69:80:52 10.100.2.8
Dec 05 02:15:39 compute-0 podman[454319]: 2025-12-05 02:15:39.490393328 +0000 UTC m=+0.126304148 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, config_id=edpm, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00180|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 ovn-installed in OVS
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:39 compute-0 systemd-udevd[454366]: Network interface NamePolicy= disabled on kernel command line.
Dec 05 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.5243] device (tapafc3cf6c-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 05 02:15:39 compute-0 ceph-mon[192914]: pgmap v1999: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.5285] device (tapafc3cf6c-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 05 02:15:39 compute-0 systemd-machined[138700]: New machine qemu-16-instance-0000000f.
Dec 05 02:15:39 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.567 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:80:52 10.100.2.8'], port_security=['fa:16:3e:69:80:52 10.100.2.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.8/16', 'neutron:device_id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=afc3cf6c-cbe3-4163-920e-7122f474d371) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.569 287122 INFO neutron.agent.ovn.metadata.agent [-] Port afc3cf6c-cbe3-4163-920e-7122f474d371 in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 bound to our chassis
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.571 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec 05 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00181|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 up in Southbound
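With the lport claimed and marked up in the Southbound database, the binding can be confirmed from the chassis side; a sketch assuming ovn-sbctl is installed on this host and can reach the local Southbound connection:

    import subprocess

    lport = "afc3cf6c-cbe3-4163-920e-7122f474d371"
    # Dump the Port_Binding row ovn-controller just claimed; its chassis
    # column should now reference compute-0.
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    f"logical_port={lport}"], check=True)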
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.589 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[208153bd-706f-41ff-a3ed-817963bfac6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.626 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c4c6c5-9284-42de-aeb4-bd8808952a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.630 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[de0b1d97-ce7b-4929-97e4-93dabc5f2f34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.667 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[dee28747-b5fc-43ce-9a76-04e6f163f8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.686 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[60332d2a-12cf-4296-a3f0-a2b55a590863]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 17953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454383, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.710 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b9b3e7-7418-4fac-94d1-c3531220687f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677143, 'tstamp': 677143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454384, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677147, 'tstamp': 677147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454384, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
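The two privsep replies above confirm that tapd7842201-31, inside the ovnmeta- namespace for this datapath, now carries both 169.254.169.254/32 (the metadata service address) and 10.100.0.2/16 (an address on the tenant subnet). The same view can be reproduced with iproute2; a sketch assuming root on the compute host:

    import subprocess

    ns = "ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322"
    # Show the addresses the metadata agent just provisioned in the
    # network namespace it created for this datapath.
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "-o", "addr", "show", "dev", "tapd7842201-31"],
                   check=True)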
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.712 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.716 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.716 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.716 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.717 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.718 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.108 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900940.107982, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.109 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Started (Lifecycle Event)
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.137 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.145 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900940.1082067, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.146 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Paused (Lifecycle Event)
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.171 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.177 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.199 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] During sync_power_state the instance has a pending task (spawning). Skip.
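A "Paused" lifecycle event mid-spawn is expected: nova launches the libvirt guest paused, waits for the network-vif-plugged event it registered at 02:15:38, and only then resumes the domain. The numeric states in the sync message are nova's power_state constants; a reference sketch (values as defined in nova.compute.power_state):

    # nova.compute.power_state constants referenced in the log line above.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04
    CRASHED, SUSPENDED = 0x06, 0x07

    db_power_state, vm_power_state = 0, 3  # as logged during the sync
    assert db_power_state == NOSTATE  # DB row not yet updated while spawning
    assert vm_power_state == PAUSED   # guest created paused before first boot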
Dec 05 02:15:40 compute-0 sudo[454427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:40 compute-0 sudo[454427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:40 compute-0 sudo[454427]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:40 compute-0 sudo[454452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:15:40 compute-0 sudo[454452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 02:15:40 compute-0 sudo[454452]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:40 compute-0 sudo[454477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:40 compute-0 sudo[454477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:40 compute-0 sudo[454477]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:40 compute-0 sudo[454502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:15:40 compute-0 sudo[454502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 05 02:15:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.082 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.082 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.106 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 05 02:15:41 compute-0 sudo[454502]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:41 compute-0 sudo[454575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:41 compute-0 sudo[454575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:41 compute-0 sudo[454575]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:41 compute-0 sudo[454600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:15:41 compute-0 sudo[454600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:41 compute-0 sudo[454600]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:41 compute-0 ceph-mon[192914]: pgmap v2000: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 05 02:15:41 compute-0 sudo[454625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:41 compute-0 sudo[454625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:41 compute-0 sudo[454625]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:41 compute-0 sudo[454650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 05 02:15:41 compute-0 sudo[454650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.901 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:15:42 compute-0 sudo[454650]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.074 349552 DEBUG nova.compute.manager [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.075 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.077 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.077 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.078 349552 DEBUG nova.compute.manager [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Processing event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.081 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.093 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900942.0932474, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.094 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Resumed (Lifecycle Event)
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.098 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.108 349552 INFO nova.virt.libvirt.driver [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance spawned successfully.
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.109 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8bcc5eb9-6470-42e7-baa4-788b22560c36 does not exist
Dec 05 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 92bfb68b-5822-40c8-a95b-1ddf59298ad4 does not exist
Dec 05 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a6c0d59b-da0b-44a4-a8b4-bcd65c63359a does not exist
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.122 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.152 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.166 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.167 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.168 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.169 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.170 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.170 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.176 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.237 349552 INFO nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 10.44 seconds to spawn the instance on the hypervisor.
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.238 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:15:42 compute-0 sudo[454691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:42 compute-0 sudo[454691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:42 compute-0 sudo[454691]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.310 349552 INFO nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 11.55 seconds to build instance.
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.326 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 02:15:42 compute-0 sudo[454716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:15:42 compute-0 sudo[454716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:42 compute-0 sudo[454716]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:42 compute-0 sudo[454741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:42 compute-0 sudo[454741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:42 compute-0 sudo[454741]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:42 compute-0 sudo[454766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:15:42 compute-0 sudo[454766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.048215272 +0000 UTC m=+0.085777770 container create 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:15:43 compute-0 ceph-mon[192914]: pgmap v2001: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.012028225 +0000 UTC m=+0.049590783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:43 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:15:43 compute-0 systemd[1]: Started libpod-conmon-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope.
Dec 05 02:15:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.218 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.222586619 +0000 UTC m=+0.260149117 container init 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.232335573 +0000 UTC m=+0.269898051 container start 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:15:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.237498938 +0000 UTC m=+0.275061436 container attach 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:15:43 compute-0 eloquent_kepler[454846]: 167 167
Dec 05 02:15:43 compute-0 systemd[1]: libpod-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope: Deactivated successfully.
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.246 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.247 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.248 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:15:43 compute-0 podman[454851]: 2025-12-05 02:15:43.299747356 +0000 UTC m=+0.043759840 container died 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-be338eee8daa2b5083ab83399cdd3265363c7146d2d058a29abffdcc2bf4b371-merged.mount: Deactivated successfully.
Dec 05 02:15:43 compute-0 podman[454851]: 2025-12-05 02:15:43.363527368 +0000 UTC m=+0.107539822 container remove 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:15:43 compute-0 systemd[1]: libpod-conmon-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope: Deactivated successfully.
Dec 05 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.638021557 +0000 UTC m=+0.073001811 container create 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:15:43 compute-0 systemd[1]: Started libpod-conmon-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope.
Dec 05 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.608551119 +0000 UTC m=+0.043531403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.752624816 +0000 UTC m=+0.187605080 container init 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.782489914 +0000 UTC m=+0.217470158 container start 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.788776811 +0000 UTC m=+0.223757065 container attach 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.094 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.096 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.307 349552 DEBUG nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.309 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.310 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.311 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.312 349552 DEBUG nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.313 349552 WARNING nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received unexpected event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with vm_state active and task_state None.
Dec 05 02:15:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 02:15:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:15:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476277176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.585 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.721 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.722 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.729 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.729 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:15:44 compute-0 reverent_roentgen[454888]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:15:44 compute-0 reverent_roentgen[454888]: --> relative data size: 1.0
Dec 05 02:15:44 compute-0 reverent_roentgen[454888]: --> All data devices are unavailable
Dec 05 02:15:44 compute-0 systemd[1]: libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Deactivated successfully.
Dec 05 02:15:44 compute-0 systemd[1]: libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Consumed 1.010s CPU time.
Dec 05 02:15:44 compute-0 conmon[454888]: conmon 6d6208ecb80219af8b4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope/container/memory.events
Dec 05 02:15:44 compute-0 podman[454872]: 2025-12-05 02:15:44.898441017 +0000 UTC m=+1.333421251 container died 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283-merged.mount: Deactivated successfully.
Dec 05 02:15:44 compute-0 podman[454872]: 2025-12-05 02:15:44.969479292 +0000 UTC m=+1.404459526 container remove 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:15:44 compute-0 systemd[1]: libpod-conmon-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Deactivated successfully.
Dec 05 02:15:45 compute-0 sudo[454766]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:45 compute-0 sudo[454952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:45 compute-0 sudo[454952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:45 compute-0 sudo[454952]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.202 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.92191696166992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:45 compute-0 sudo[454977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:15:45 compute-0 sudo[454977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:45 compute-0 sudo[454977]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:45 compute-0 sudo[455002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:45 compute-0 sudo[455002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:45 compute-0 sudo[455002]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:45 compute-0 ceph-mon[192914]: pgmap v2002: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 05 02:15:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3476277176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:45 compute-0 sudo[455027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:15:45 compute-0 sudo[455027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.568 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.568 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.569 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.569 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.721 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:15:45 compute-0 podman[455101]: 2025-12-05 02:15:45.924287259 +0000 UTC m=+0.047365121 container create d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:15:45 compute-0 systemd[1]: Started libpod-conmon-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope.
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:45.906230742 +0000 UTC m=+0.029308624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.022314612 +0000 UTC m=+0.145392474 container init d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.033224607 +0000 UTC m=+0.156302469 container start d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.037743284 +0000 UTC m=+0.160821176 container attach d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 05 02:15:46 compute-0 vigilant_nightingale[455125]: 167 167
Dec 05 02:15:46 compute-0 systemd[1]: libpod-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope: Deactivated successfully.
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.044523635 +0000 UTC m=+0.167601497 container died d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb64170fec3de1af5375b6865b134354075b8401a43b21ae9ec7ce5a6149a983-merged.mount: Deactivated successfully.
Dec 05 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.093878441 +0000 UTC m=+0.216956303 container remove d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:15:46 compute-0 systemd[1]: libpod-conmon-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope: Deactivated successfully.
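The create -> init -> start -> attach -> died -> remove sequence above, bracketed by paired libpod-conmon-*.scope and libpod-*.scope units, is the footprint of a short-lived podman run with --rm. The "167 167" printed by vigilant_nightingale looks like a uid/gid probe of the ceph user inside the image; a hypothetical equivalent (assumes podman is installed and the image is already pulled):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # stat the ceph-owned directory inside the container to learn its uid/gid
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())  # "167 167" on this image, matching the line above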
Dec 05 02:15:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:15:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469863366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.270 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.290 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.309 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
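The inventory dict above translates into schedulable capacity as (total - reserved) * allocation_ratio per resource class; worked with these values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2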
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.338 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.339 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 05 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.363867154 +0000 UTC m=+0.101886283 container create e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 02:15:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2469863366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.334630363 +0000 UTC m=+0.072649512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:46 compute-0 systemd[1]: Started libpod-conmon-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope.
Dec 05 02:15:46 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.509612107 +0000 UTC m=+0.247631246 container init e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.534060614 +0000 UTC m=+0.272079713 container start e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.542013697 +0000 UTC m=+0.280032806 container attach e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:15:47 compute-0 nova_compute[349548]: 2025-12-05 02:15:47.334 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]: {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     "0": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "devices": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "/dev/loop3"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             ],
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_name": "ceph_lv0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_size": "21470642176",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "name": "ceph_lv0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "tags": {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_name": "ceph",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.crush_device_class": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.encrypted": "0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_id": "0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.vdo": "0"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             },
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "vg_name": "ceph_vg0"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         }
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     ],
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     "1": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "devices": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "/dev/loop4"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             ],
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_name": "ceph_lv1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_size": "21470642176",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "name": "ceph_lv1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "tags": {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_name": "ceph",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.crush_device_class": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.encrypted": "0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_id": "1",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.vdo": "0"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             },
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "vg_name": "ceph_vg1"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         }
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     ],
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     "2": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "devices": [
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "/dev/loop5"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             ],
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_name": "ceph_lv2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_size": "21470642176",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "name": "ceph_lv2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "tags": {
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.cluster_name": "ceph",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.crush_device_class": "",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.encrypted": "0",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osd_id": "2",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:                 "ceph.vdo": "0"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             },
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "type": "block",
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:             "vg_name": "ceph_vg2"
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:         }
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]:     ]
Dec 05 02:15:47 compute-0 upbeat_feynman[455164]: }
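The JSON emitted by upbeat_feynman is the ceph-volume lvm list --format json payload: one key per OSD id, each holding a list of logical volumes whose tags carry the cluster fsid and OSD fsid. A minimal parsing sketch, assuming the payload was captured as text from the container's stdout:

    import json

    def osd_map(payload: str) -> dict:
        """Flatten lvm-list JSON into {osd_id: (lv_path, osd_fsid)}."""
        result = {}
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:  # exactly one LV per OSD in this log
                tags = lv.get("tags", {})
                result[int(osd_id)] = (lv["lv_path"], tags.get("ceph.osd_fsid"))
        return result

    # For the output above this yields:
    # {0: ('/dev/ceph_vg0/ceph_lv0', '8c4de221-...'),
    #  1: ('/dev/ceph_vg1/ceph_lv1', '944e6457-...'),
    #  2: ('/dev/ceph_vg2/ceph_lv2', 'adfceb0a-...')}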
Dec 05 02:15:47 compute-0 podman[455151]: 2025-12-05 02:15:47.375583109 +0000 UTC m=+1.113602228 container died e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:15:47 compute-0 systemd[1]: libpod-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope: Deactivated successfully.
Dec 05 02:15:47 compute-0 nova_compute[349548]: 2025-12-05 02:15:47.377 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:47 compute-0 ceph-mon[192914]: pgmap v2003: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 05 02:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda-merged.mount: Deactivated successfully.
Dec 05 02:15:47 compute-0 podman[455151]: 2025-12-05 02:15:47.4795934 +0000 UTC m=+1.217612519 container remove e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:15:47 compute-0 systemd[1]: libpod-conmon-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope: Deactivated successfully.
Dec 05 02:15:47 compute-0 sudo[455027]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:47 compute-0 sudo[455186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:47 compute-0 sudo[455186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:47 compute-0 sudo[455186]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:47 compute-0 sudo[455211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:15:47 compute-0 sudo[455211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:47 compute-0 sudo[455211]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:47 compute-0 sudo[455236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:47 compute-0 sudo[455236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:47 compute-0 sudo[455236]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:47 compute-0 sudo[455261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:15:47 compute-0 sudo[455261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.094 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.228 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.472448805 +0000 UTC m=+0.096650945 container create 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.425076485 +0000 UTC m=+0.049278685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:48 compute-0 systemd[1]: Started libpod-conmon-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope.
Dec 05 02:15:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.607168549 +0000 UTC m=+0.231370779 container init 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.623602001 +0000 UTC m=+0.247804181 container start 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.630265298 +0000 UTC m=+0.254467458 container attach 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:15:48 compute-0 boring_spence[455337]: 167 167
Dec 05 02:15:48 compute-0 systemd[1]: libpod-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope: Deactivated successfully.
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.638560731 +0000 UTC m=+0.262762901 container died 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc266e145545c29e2941bdcd46e3b4bf9502f5d9d98b8c0cb8ac129a95774fcd-merged.mount: Deactivated successfully.
Dec 05 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.710230154 +0000 UTC m=+0.334432304 container remove 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:15:48 compute-0 systemd[1]: libpod-conmon-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope: Deactivated successfully.
Dec 05 02:15:48 compute-0 podman[455398]: 2025-12-05 02:15:48.911535098 +0000 UTC m=+0.055325915 container create bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:15:48 compute-0 podman[455354]: 2025-12-05 02:15:48.91377613 +0000 UTC m=+0.134057156 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:15:48 compute-0 podman[455357]: 2025-12-05 02:15:48.936632602 +0000 UTC m=+0.146103194 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 05 02:15:48 compute-0 podman[455356]: 2025-12-05 02:15:48.93832068 +0000 UTC m=+0.146657850 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Dec 05 02:15:48 compute-0 podman[455362]: 2025-12-05 02:15:48.958850477 +0000 UTC m=+0.151588319 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:15:48 compute-0 systemd[1]: Started libpod-conmon-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope.
Dec 05 02:15:48 compute-0 podman[455398]: 2025-12-05 02:15:48.890855047 +0000 UTC m=+0.034645884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:15:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.014716696 +0000 UTC m=+0.158507563 container init bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.028272616 +0000 UTC m=+0.172063433 container start bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.032460814 +0000 UTC m=+0.176251671 container attach bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 02:15:49 compute-0 ceph-mon[192914]: pgmap v2004: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]: {
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_id": 0,
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "type": "bluestore"
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     },
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_id": 1,
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "type": "bluestore"
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     },
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_id": 2,
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:         "type": "bluestore"
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]:     }
Dec 05 02:15:50 compute-0 gallant_bhaskara[455460]: }
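gallant_bhaskara's output is the matching ceph-volume raw list view of the same three bluestore OSDs, keyed by osd_uuid instead of OSD id. A sketch that cross-checks it against the lvm listing earlier, assuming both payloads were captured as text:

    import json

    def consistent(raw_payload: str, lvm_payload: str) -> bool:
        """True when raw-list uuids match the ceph.osd_fsid tags from lvm-list."""
        raw_uuids = set(json.loads(raw_payload))  # top-level keys are osd_uuids
        lvm_uuids = {
            lv["tags"]["ceph.osd_fsid"]
            for lvs in json.loads(lvm_payload).values()
            for lv in lvs
        }
        return raw_uuids == lvm_uuids  # True for the two listings in this log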
Dec 05 02:15:50 compute-0 systemd[1]: libpod-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Deactivated successfully.
Dec 05 02:15:50 compute-0 podman[455398]: 2025-12-05 02:15:50.135648737 +0000 UTC m=+1.279439584 container died bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:15:50 compute-0 systemd[1]: libpod-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Consumed 1.094s CPU time.
Dec 05 02:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f-merged.mount: Deactivated successfully.
Dec 05 02:15:50 compute-0 podman[455398]: 2025-12-05 02:15:50.238631689 +0000 UTC m=+1.382422526 container remove bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:15:50 compute-0 systemd[1]: libpod-conmon-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Deactivated successfully.
Dec 05 02:15:50 compute-0 sudo[455261]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:15:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:15:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cdc8dde9-c94f-47ed-978b-d7bc417fcb43 does not exist
Dec 05 02:15:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b59a0732-450c-404a-bc66-bb44c38e27dd does not exist
Dec 05 02:15:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 05 02:15:50 compute-0 sudo[455508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:15:50 compute-0 sudo[455508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:50 compute-0 sudo[455508]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:50 compute-0 sudo[455533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:15:50 compute-0 sudo[455533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:15:50 compute-0 sudo[455533]: pam_unix(sudo:session): session closed for user root
Dec 05 02:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:15:51 compute-0 ceph-mon[192914]: pgmap v2005: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 05 02:15:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Dec 05 02:15:52 compute-0 nova_compute[349548]: 2025-12-05 02:15:52.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.232 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:53 compute-0 ceph-mon[192914]: pgmap v2006: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Dec 05 02:15:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:15:55 compute-0 ceph-mon[192914]: pgmap v2007: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
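
Note: the pgmap lines repeated by ceph-mgr and ceph-mon throughout this window share one fixed shape. A minimal parser for that shape (Python, stdlib only; the format is assumed from the samples in this log, not from Ceph documentation):

import re

# Fields as they appear in the pgmap lines above; the trailing
# throughput segment ("1.9 MiB/s rd, 63 op/s") is optional.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v2007: 321 pgs: 321 active+clean; 203 MiB data, "
        "363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s")
m = PGMAP.search(line)
print(m.group("ver"), m.group("states"), m.group("avail"))
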
Dec 05 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:15:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:15:57 compute-0 nova_compute[349548]: 2025-12-05 02:15:57.381 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:57 compute-0 ceph-mon[192914]: pgmap v2008: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.467501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957467584, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1991, "num_deletes": 251, "total_data_size": 3320324, "memory_usage": 3381232, "flush_reason": "Manual Compaction"}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957495250, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3255352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39692, "largest_seqno": 41682, "table_properties": {"data_size": 3246231, "index_size": 5743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18351, "raw_average_key_size": 20, "raw_value_size": 3228145, "raw_average_value_size": 3539, "num_data_blocks": 255, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900739, "oldest_key_time": 1764900739, "file_creation_time": 1764900957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 27799 microseconds, and 14197 cpu microseconds.
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.495311) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3255352 bytes OK
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.495340) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499577) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499600) EVENT_LOG_v1 {"time_micros": 1764900957499593, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3311944, prev total WAL file size 3311944, number of live WAL files 2.
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.501249) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3179KB)], [95(6091KB)]
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957501299, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9493528, "oldest_snapshot_seqno": -1}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5768 keys, 7791521 bytes, temperature: kUnknown
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957544773, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7791521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7754952, "index_size": 21035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149336, "raw_average_key_size": 25, "raw_value_size": 7652574, "raw_average_value_size": 1326, "num_data_blocks": 837, "num_entries": 5768, "num_filter_entries": 5768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.545010) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7791521 bytes
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.546848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.1 rd, 179.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 5.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 6282, records dropped: 514 output_compression: NoCompression
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.546863) EVENT_LOG_v1 {"time_micros": 1764900957546856, "job": 56, "event": "compaction_finished", "compaction_time_micros": 43532, "compaction_time_cpu_micros": 17880, "output_level": 6, "num_output_files": 1, "total_output_size": 7791521, "num_input_records": 6282, "num_output_records": 5768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957547477, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957548408, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.501104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
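
Note: JOB 56's summary line reports read-write-amplify(5.3) and write-amplify(2.4); both follow directly from the byte counts in the surrounding EVENT_LOG_v1 entries, as this check shows (Python):

# Inputs: flushed L0 file #97 plus existing L6 file #95; output: file #98.
l0_in = 3255352            # table #97 file_size
l6_in = 9493528 - l0_in    # compaction input_data_size minus the L0 file
out = 7791521              # table #98 file_size

print(round(out / l0_in, 1))                     # 2.4 = write-amplify
print(round((l0_in + l6_in + out) / l0_in, 1))   # 5.3 = read-write-amplify
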
Dec 05 02:15:58 compute-0 nova_compute[349548]: 2025-12-05 02:15:58.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:15:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec 05 02:15:58 compute-0 sshd-session[455558]: Connection closed by authenticating user root 123.253.22.45 port 58778 [preauth]
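
Note: this sshd-session entry records a root login attempt from an external address dropped before authentication completed. A hedged extractor for such "[preauth]" disconnects (Python; the message shape is assumed from this single sample):

import re

SSH_PREAUTH = re.compile(
    r"Connection closed by authenticating user (?P<user>\S+) "
    r"(?P<ip>[\d.]+) port (?P<port>\d+) \[preauth\]"
)
m = SSH_PREAUTH.search(
    "Connection closed by authenticating user root 123.253.22.45 "
    "port 58778 [preauth]")
print(m.group("user"), m.group("ip"))  # root 123.253.22.45
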
Dec 05 02:15:59 compute-0 ceph-mon[192914]: pgmap v2009: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec 05 02:15:59 compute-0 podman[158197]: time="2025-12-05T02:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
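
Note: these two podman entries are access-log lines from the libpod REST API being polled over the unix socket (the podman_exporter config below sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same endpoint can be queried with the Python stdlib alone; a sketch, assuming the rootful socket path shown in that config:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTP over podman's unix socket, no third-party client needed.
    def __init__(self, socket_path="/run/podman/podman.sock"):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection()
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
print(len(json.loads(conn.getresponse().read())), "containers")
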
Dec 05 02:16:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 op/s
Dec 05 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
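
Note: this block of openstack_network_exporter errors recurs every 30 s (see 02:16:31 below). The exporter drives appctl-style calls through each daemon's control socket; on a compute node ovn-northd does not run, so its socket is legitimately absent, and the dpif-netdev queries fail because no userspace datapath exists here. A quick existence check (Python; the rundir paths are the conventional defaults, assumed rather than taken from this log):

import glob

for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "no control sockets")
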
Dec 05 02:16:01 compute-0 ceph-mon[192914]: pgmap v2010: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 op/s
Dec 05 02:16:02 compute-0 nova_compute[349548]: 2025-12-05 02:16:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:16:02 compute-0 nova_compute[349548]: 2025-12-05 02:16:02.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:03 compute-0 nova_compute[349548]: 2025-12-05 02:16:03.239 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:03 compute-0 ceph-mon[192914]: pgmap v2011: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:16:03 compute-0 podman[455560]: 2025-12-05 02:16:03.694423965 +0000 UTC m=+0.098977611 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 02:16:03 compute-0 podman[455561]: 2025-12-05 02:16:03.710513097 +0000 UTC m=+0.112111220 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
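
Note: the health_status entries here (and repeated below) are podman container events emitted when each container's health-check timer fires. They can be followed live; a sketch (Python, shelling out to podman; the JSON key for the result has been "HealthStatus" in podman 4.x, but treat the field name as an assumption across versions):

import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status",
     "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Name"), ev.get("HealthStatus") or ev.get("health_status"))
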
Dec 05 02:16:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:16:05 compute-0 ceph-mon[192914]: pgmap v2012: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:16:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 05 02:16:07 compute-0 nova_compute[349548]: 2025-12-05 02:16:07.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:07 compute-0 ceph-mon[192914]: pgmap v2013: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 05 02:16:08 compute-0 nova_compute[349548]: 2025-12-05 02:16:08.242 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 02:16:08 compute-0 podman[455600]: 2025-12-05 02:16:08.684848663 +0000 UTC m=+0.102595672 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:16:08 compute-0 podman[455601]: 2025-12-05 02:16:08.715034891 +0000 UTC m=+0.120426423 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Dec 05 02:16:09 compute-0 ovn_controller[89286]: 2025-12-05T02:16:09Z|00182|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 05 02:16:09 compute-0 ceph-mon[192914]: pgmap v2014: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec 05 02:16:09 compute-0 podman[455637]: 2025-12-05 02:16:09.671553336 +0000 UTC m=+0.092152850 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:16:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec 05 02:16:11 compute-0 ceph-mon[192914]: pgmap v2015: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec 05 02:16:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:12 compute-0 nova_compute[349548]: 2025-12-05 02:16:12.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:13 compute-0 nova_compute[349548]: 2025-12-05 02:16:13.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:13 compute-0 ceph-mon[192914]: pgmap v2016: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:15 compute-0 ceph-mon[192914]: pgmap v2017: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:16:16
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images']
Dec 05 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
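
Note: this balancer pass reads as: mode upmap, at most 5% of PGs may be misplaced at once, up to 10 upmap changes considered per round; with all 321 PGs active+clean the optimizer proposed nothing ("prepared 0/10 changes"). The same state is visible from the CLI; a sketch assuming a working ceph client and admin keyring on this host:

import json
import subprocess

status = json.loads(subprocess.check_output(
    ["ceph", "balancer", "status", "-f", "json"]))
print(status.get("mode"), status.get("active"))
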
Dec 05 02:16:17 compute-0 nova_compute[349548]: 2025-12-05 02:16:17.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:17 compute-0 ceph-mon[192914]: pgmap v2018: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:16:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:18 compute-0 nova_compute[349548]: 2025-12-05 02:16:18.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.1 MiB/s wr, 80 op/s
Dec 05 02:16:18 compute-0 ovn_controller[89286]: 2025-12-05T02:16:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:80:52 10.100.2.8
Dec 05 02:16:18 compute-0 ovn_controller[89286]: 2025-12-05T02:16:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:80:52 10.100.2.8
Dec 05 02:16:18 compute-0 ceph-mon[192914]: pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.1 MiB/s wr, 80 op/s
Dec 05 02:16:19 compute-0 podman[455658]: 2025-12-05 02:16:19.687653124 +0000 UTC m=+0.090628117 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:16:19 compute-0 podman[455657]: 2025-12-05 02:16:19.697379577 +0000 UTC m=+0.099762873 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 05 02:16:19 compute-0 podman[455659]: 2025-12-05 02:16:19.731587158 +0000 UTC m=+0.133945723 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:16:19 compute-0 podman[455660]: 2025-12-05 02:16:19.735255991 +0000 UTC m=+0.129319923 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec 05 02:16:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 219 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Dec 05 02:16:21 compute-0 ceph-mon[192914]: pgmap v2020: 321 pgs: 321 active+clean; 219 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Dec 05 02:16:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 02:16:22 compute-0 nova_compute[349548]: 2025-12-05 02:16:22.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:23 compute-0 nova_compute[349548]: 2025-12-05 02:16:23.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:23 compute-0 ceph-mon[192914]: pgmap v2021: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 05 02:16:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:25 compute-0 ceph-mon[192914]: pgmap v2022: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015141583439148272 of space, bias 1.0, pg target 0.4542475031744482 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
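
Note: the pg_autoscaler's "pg target" values above are reproducible from the logged inputs: pg_target = capacity_ratio x bias x target_total_pgs, then quantized toward a power of two. target_total_pgs works out to 300 here, consistent with the default mon_target_pg_per_osd=100 and 3 OSDs; that factor is inferred from the ratios, not stated in the log. A quick check (Python):

# capacity_ratio and bias are copied from the log lines above.
target_total = 0.4542475031744482 / 0.0015141583439148272  # -> 300.0

for pool, ratio, bias in [
    ("vms", 0.0015141583439148272, 1.0),                 # -> 0.45424750...
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),  # -> 0.00061047...
]:
    print(pool, ratio * bias * target_total)
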
Dec 05 02:16:27 compute-0 nova_compute[349548]: 2025-12-05 02:16:27.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:27 compute-0 ceph-mon[192914]: pgmap v2023: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:28 compute-0 nova_compute[349548]: 2025-12-05 02:16:28.255 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:29 compute-0 ceph-mon[192914]: pgmap v2024: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec 05 02:16:29 compute-0 podman[158197]: time="2025-12-05T02:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Dec 05 02:16:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 05 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:16:31 compute-0 ceph-mon[192914]: pgmap v2025: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 05 02:16:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 821 KiB/s wr, 29 op/s
Dec 05 02:16:32 compute-0 nova_compute[349548]: 2025-12-05 02:16:32.395 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:33 compute-0 nova_compute[349548]: 2025-12-05 02:16:33.258 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:33 compute-0 ceph-mon[192914]: pgmap v2026: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 821 KiB/s wr, 29 op/s
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.122 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 292fd084-0808-4a80-adc1-6ab1f28e188a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.125 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.126 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.127 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.170 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.172 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
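
Note: the per-UUID "Acquiring lock / acquired / released" triples above are oslo.concurrency's lockutils at work: _sync_power_states takes one named lock per instance UUID so concurrent sync paths serialize per instance. A minimal sketch of the same pattern (Python; the function body is illustrative, not nova's actual code):

from oslo_concurrency import lockutils

uuid = "292fd084-0808-4a80-adc1-6ab1f28e188a"  # instance UUID from the log

@lockutils.synchronized(uuid)
def query_driver_power_state_and_sync():
    pass  # nova compares the driver's power state with the DB record here

query_driver_power_state_and_sync()
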
Dec 05 02:16:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:34 compute-0 podman[455741]: 2025-12-05 02:16:34.714673218 +0000 UTC m=+0.123639334 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 02:16:34 compute-0 podman[455742]: 2025-12-05 02:16:34.755768902 +0000 UTC m=+0.153106141 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
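The two health_status entries come from podman's healthcheck timer running the mounted /openstack/healthcheck script inside each container. The same check can be triggered by hand; a hypothetical wrapper (container name taken from the log; exit code 0 means healthy):

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command and reflects the result in its exit status.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")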
Dec 05 02:16:35 compute-0 ceph-mon[192914]: pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:37 compute-0 nova_compute[349548]: 2025-12-05 02:16:37.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:37 compute-0 ceph-mon[192914]: pgmap v2028: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
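ceph-mgr produces each pgmap epoch and the monitor echoes it, which is why every version (v2027, v2028, ...) appears twice in this capture. If the usage figures need to be scraped out of a log like this one, a quick illustrative parser (the message format is assumed stable across these lines):

    import re

    line = ("pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, "
            "388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s")
    m = re.search(r"(\d+) pgs: (\d+) active\+clean; (.+?) data, "
                  r"(.+?) used, (.+?) / (.+?) avail", line)
    pgs, clean, data, used, avail, total = m.groups()
    print(pgs, clean, data, used, avail, total)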
Dec 05 02:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:38 compute-0 nova_compute[349548]: 2025-12-05 02:16:38.261 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
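The warning above is purely about scheduling: the [pollsters] source has more pollsters than the single worker thread configured to run them, so tasks queue and the polling cycle stretches out. The effect is easy to reproduce with a plain ThreadPoolExecutor (numbers here are illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
        list(executor.map(poll, pollsters))
    print(f"cycle took {time.monotonic() - start:.1f}s for 8 tasks on 1 thread")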
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
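Each "Registering pollster" line corresponds to one stevedore Extension: ceilometer discovers its compute pollsters through Python entry points and hands every extension to the shared ThreadPoolExecutor. A hedged sketch of that loading step (the ceilometer.poll.compute namespace is the conventional one and is assumed here):

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumed entry-point namespace
        invoke_on_load=True,                  # instantiate each pollster
    )
    for ext in mgr:
        # e.g. "disk.root.size" -> RootSizePollster instance
        print(ext.name, type(ext.obj).__name__)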
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.341 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 05 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.343 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 05 02:16:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.378 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1830 Content-Type: application/json Date: Fri, 05 Dec 2025 02:16:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cd29bee7-01d3-4292-8507-29ac68d1958b x-openstack-request-id: req-cd29bee7-01d3-4292-8507-29ac68d1958b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.379 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7", "name": "te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo", "status": "ACTIVE", "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "user_id": "99591ed8361e41579fee1d14f16bf0f7", "metadata": {"metering.server_group": "92ca195d-98d1-443c-9947-dcb7ca7b926a"}, "hostId": "1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18", "image": {"id": "773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:15:29Z", "updated": "2025-12-05T02:15:42Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:69:80:52"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T02:15:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.379 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 used request id req-cd29bee7-01d3-4292-8507-29ac68d1958b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.381 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
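For the instance it had no cached metadata for, discovery falls back to the Nova API: the REQ/RESP pair above is python-novaclient logging its GET against nova-internal.openstack.svc:8774. The equivalent call, sketched with keystoneauth (the Keystone URL and credentials below are placeholders, not taken from the log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
        username="ceilometer", password="...",                       # placeholder
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7")
    print(server.name, getattr(server, "OS-EXT-STS:vm_state"))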
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:16:39.382996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:16:39.386479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.410 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.411 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.433 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.434 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
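Each instance yields two capacity samples because two block devices are inspected per domain: 1073741824 bytes is exactly the 1 GiB root disk of the m1.nano flavor (disk=1 in the instance data above), and the 509952-byte device is most likely the config drive that the API response reports attached ("config_drive": "True"). In short:

    # Values copied from the samples above; the device naming is inferred.
    assert 1073741824 == 1 * 1024**3        # 1 GiB root disk (m1.nano, disk=1)
    print(509952 / 1024, "KiB")             # 498.0 KiB, likely the config drive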
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>]
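This is the one genuine error in the cycle: the libvirt inspector cannot supply per-instance network byte rates, so the pollster raises PollsterPermanentError and the manager blacklists that resource instead of retrying forever. A simplified sketch of that contract (class name from the log; the control flow shown is an assumption, not ceilometer's exact code):

    class PollsterPermanentError(Exception):
        """Raised when a resource can never be polled by this pollster."""
        def __init__(self, resources):
            self.fail_res_list = resources

    def run_pollster(poll_fn, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return poll_fn(todo)
        except PollsterPermanentError as exc:
            blacklist.extend(exc.fail_res_list)  # skip these from now on
            return []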
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:16:39.436788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:16:39.439621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:16:39.442347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.506 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.507 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceph-mon[192914]: pgmap v2029: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.595 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.596 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:16:39.598458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.600 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.601 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
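The read-latency "volumes" are not per-request latencies but cumulative counters, believed to be the nanosecond rd_total_times figure libvirt keeps per device, so the numbers only make sense as deltas between polls:

    # Interpreting the first sample above under that assumption:
    ns = 3090417276
    print(ns / 1e9, "s of accumulated read wait since domain start")  # ~3.09 s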
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.604 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:16:39.603585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.605 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.606 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:16:39.609088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.610 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.611 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.611 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:16:39.614487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72839168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.616 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72785920 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.616 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.618 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:16:39.619844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.664 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 podman[455783]: 2025-12-05 02:16:39.707841114 +0000 UTC m=+0.121766311 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.708 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
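Both instances report power.state volume 1, agreeing with "OS-EXT-STS:power_state": 1 in the Nova response earlier; the value follows nova's power-state constants:

    # nova.compute.power_state mapping (the gaps are historical).
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATES[1])  # RUNNING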
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10935968399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:16:39.711193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.713 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.714 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10282138591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.714 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
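Every meter in this polling task leaves the same five-step trace: a discovery pass over local_instances, a coordination check that is skipped because no coordination group is configured, a heartbeat update, one _stats_to_sample record per instance and device, and a closing "Finished polling" line. A sketch of that per-pollster cycle; run_pollster and its callables are illustrative stand-ins, not ceilometer's real API:

    from datetime import datetime, timezone

    def run_pollster(name, discover, get_samples, update_heartbeat, emit):
        resources = discover("local_instances")        # "Executing discovery process"
        # Coordination check: no hashring for this source, so this agent
        # polls every discovered resource itself.
        update_heartbeat(name, datetime.now(timezone.utc))
        for resource in resources:
            for volume in get_samples(resource):       # one DEBUG line per sample
                emit(resource, name, volume)
        print(f"Finished polling pollster {name} in the context of pollsters")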
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.717 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:16:39.716228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.718 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.718 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:16:39.720624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.725 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.729 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 / tapafc3cf6c-cb inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.729 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 podman[455784]: 2025-12-05 02:16:39.729859352 +0000 UTC m=+0.125272029 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
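The "No delta meter predecessor" line above marks the first poll of vNIC tapafc3cf6c-cb: delta meters report the difference between two successive cumulative readings, so the first reading of a device has nothing to subtract from. A sketch of that bookkeeping (hypothetical helper, not the inspector's code):

    previous = {}  # (instance_uuid, device) -> last cumulative reading

    def delta_sample(key, cumulative):
        # The first poll of a device has no predecessor, as logged above.
        last = previous.get(key)
        previous[key] = cumulative
        if last is None:
            return None                    # emit nothing this round
        return max(cumulative - last, 0)   # guard against counter resets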
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:16:39.730412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.731 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.732 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:16:39.733296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.736 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:16:39.735694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:16:39.738342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:16:39.740406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.741 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.741 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:16:39.743153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>]
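That ERROR is the permanent-failure path: the DEBUG line two entries up shows LibvirtInspector provides no instantaneous *.rate data, so the pollster raises PollsterPermanentError carrying the affected resources and the manager stops polling them on this source for the lifetime of the agent. A simplified sketch of the blacklist pattern, not ceilometer's exact code:

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = set()

    def poll(get_samples, resources):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.update(err.resources)  # "Prevent pollster ... anymore!"
            return []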
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:16:39.744502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:16:39.746229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.747 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.747 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:16:39.748958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.749 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:16:39.751501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.752 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 303170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 54360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
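The cpu meter is cumulative guest CPU time in nanoseconds, so the 303170000000 above is roughly 303.17 s of CPU time; a utilisation figure needs two successive samples. A sketch of that rate-of-change arithmetic (the function name and the figures are illustrative):

    def cpu_util_percent(ns_prev, ns_now, wall_seconds, vcpus=1):
        # Fraction of available CPU time consumed between two polls.
        return (ns_now - ns_prev) / (wall_seconds * 1e9 * vcpus) * 100.0

    # 3 s of CPU over a 300 s polling interval on one vCPU -> 1.0 (%)
    print(cpu_util_percent(303_170_000_000 - 3_000_000_000, 303_170_000_000, 300))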
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:16:39.754266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:16:39.756106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:16:39.757865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
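The task winds down with one "Finished processing pollster" line per meter. A quick way to tally them from a journal dump, fed e.g. by journalctl -t ceilometer_agent_compute (the regex and pipeline are ours, not a ceilometer tool):

    import re
    import sys

    finished = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    meters = [m.group(1) for line in sys.stdin if (m := finished.search(line))]
    print(f"{len(meters)} meters completed: {', '.join(sorted(meters))}")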
Dec 05 02:16:39 compute-0 podman[455818]: 2025-12-05 02:16:39.815382214 +0000 UTC m=+0.075078430 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9)
Dec 05 02:16:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.104 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.106 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.107 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:16:41 compute-0 ceph-mon[192914]: pgmap v2030: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.825 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.826 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.827 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.828 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:16:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Dec 05 02:16:42 compute-0 nova_compute[349548]: 2025-12-05 02:16:42.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.088 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.117 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.119 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.119 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.120 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.265 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:43 compute-0 ceph-mon[192914]: pgmap v2031: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Dec 05 02:16:44 compute-0 nova_compute[349548]: 2025-12-05 02:16:44.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:45 compute-0 nova_compute[349548]: 2025-12-05 02:16:45.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:16:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:16:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:16:45 compute-0 ceph-mon[192914]: pgmap v2032: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:16:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.110 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.111 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.112 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.113 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:16:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984854046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.697 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.818 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.820 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.828 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.829 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.486 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.487 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.897396087646484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.488 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.489 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.592 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:16:47 compute-0 ceph-mon[192914]: pgmap v2033: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:47 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2984854046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.651 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:16:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398920761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.204 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.219 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.242 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.246 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.248 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.268 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:48 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1398920761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:16:49 compute-0 ceph-mon[192914]: pgmap v2034: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:50 compute-0 sudo[455883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:50 compute-0 sudo[455883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:50 compute-0 sudo[455883]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:50 compute-0 podman[455886]: 2025-12-05 02:16:50.717589709 +0000 UTC m=+0.112900822 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 05 02:16:50 compute-0 podman[455891]: 2025-12-05 02:16:50.723223177 +0000 UTC m=+0.126254317 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:16:50 compute-0 sudo[455956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:16:50 compute-0 sudo[455956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:50 compute-0 podman[455901]: 2025-12-05 02:16:50.754932608 +0000 UTC m=+0.137208615 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec 05 02:16:50 compute-0 sudo[455956]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:50 compute-0 podman[455893]: 2025-12-05 02:16:50.760468583 +0000 UTC m=+0.145518868 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:16:50 compute-0 sudo[456013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:50 compute-0 sudo[456013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:50 compute-0 sudo[456013]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:50 compute-0 sudo[456039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:16:50 compute-0 sudo[456039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:51 compute-0 nova_compute[349548]: 2025-12-05 02:16:51.250 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:51 compute-0 nova_compute[349548]: 2025-12-05 02:16:51.253 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:16:51 compute-0 sudo[456039]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:51 compute-0 ceph-mon[192914]: pgmap v2035: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec 05 02:16:51 compute-0 sudo[456095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:51 compute-0 sudo[456095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:51 compute-0 sudo[456095]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:51 compute-0 sudo[456120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:16:51 compute-0 sudo[456120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:51 compute-0 sudo[456120]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:52 compute-0 sudo[456145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:52 compute-0 sudo[456145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:52 compute-0 sudo[456145]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:52 compute-0 sudo[456170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- inventory --format=json-pretty --filter-for-batch
Dec 05 02:16:52 compute-0 sudo[456170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 02:16:52 compute-0 nova_compute[349548]: 2025-12-05 02:16:52.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.617229932 +0000 UTC m=+0.071627433 container create 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.582297161 +0000 UTC m=+0.036694712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:16:52 compute-0 systemd[1]: Started libpod-conmon-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope.
Dec 05 02:16:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.749051444 +0000 UTC m=+0.203448985 container init 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.760087274 +0000 UTC m=+0.214484765 container start 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.764292263 +0000 UTC m=+0.218689764 container attach 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 05 02:16:52 compute-0 beautiful_shamir[456251]: 167 167
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.772585925 +0000 UTC m=+0.226983406 container died 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 02:16:52 compute-0 systemd[1]: libpod-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope: Deactivated successfully.
Dec 05 02:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-457ba28c763674dd0952f6ab925630f61463c735e993551ac2c3f9f3056d8ab3-merged.mount: Deactivated successfully.
Dec 05 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.82687961 +0000 UTC m=+0.281277101 container remove 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 02:16:52 compute-0 systemd[1]: libpod-conmon-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope: Deactivated successfully.
Dec 05 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.052788615 +0000 UTC m=+0.079643728 container create 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.01950324 +0000 UTC m=+0.046358393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:16:53 compute-0 systemd[1]: Started libpod-conmon-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope.
Dec 05 02:16:53 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.235813806 +0000 UTC m=+0.262668969 container init 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.250850858 +0000 UTC m=+0.277705931 container start 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.257201746 +0000 UTC m=+0.284056869 container attach 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:16:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:53 compute-0 nova_compute[349548]: 2025-12-05 02:16:53.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:53 compute-0 ceph-mon[192914]: pgmap v2036: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec 05 02:16:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]: [
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:     {
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "available": false,
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "ceph_device": false,
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "lsm_data": {},
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "lvs": [],
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "path": "/dev/sr0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "rejected_reasons": [
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "Insufficient space (<5GB)",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "Has a FileSystem"
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         ],
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         "sys_api": {
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "actuators": null,
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "device_nodes": "sr0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "devname": "sr0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "human_readable_size": "482.00 KB",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "id_bus": "ata",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "model": "QEMU DVD-ROM",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "nr_requests": "2",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "parent": "/dev/sr0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "partitions": {},
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "path": "/dev/sr0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "removable": "1",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "rev": "2.5+",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "ro": "0",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "rotational": "1",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "sas_address": "",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "sas_device_handle": "",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "scheduler_mode": "mq-deadline",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "sectors": 0,
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "sectorsize": "2048",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "size": 493568.0,
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "support_discard": "2048",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "type": "disk",
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:             "vendor": "QEMU"
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:         }
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]:     }
Dec 05 02:16:55 compute-0 wonderful_proskuriakova[456290]: ]
Dec 05 02:16:55 compute-0 systemd[1]: libpod-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Deactivated successfully.
Dec 05 02:16:55 compute-0 podman[456274]: 2025-12-05 02:16:55.615956133 +0000 UTC m=+2.642811226 container died 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:16:55 compute-0 systemd[1]: libpod-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Consumed 2.405s CPU time.
Dec 05 02:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a-merged.mount: Deactivated successfully.
Dec 05 02:16:55 compute-0 ceph-mon[192914]: pgmap v2037: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:16:55 compute-0 podman[456274]: 2025-12-05 02:16:55.696433563 +0000 UTC m=+2.723288646 container remove 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:16:55 compute-0 sudo[456170]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:55 compute-0 systemd[1]: libpod-conmon-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Deactivated successfully.
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4165ef21-ac94-4975-9a10-db81e571b918 does not exist
Dec 05 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 01f6321f-a38c-4402-9dc0-53028a9b712f does not exist
Dec 05 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f60816ee-95cb-4c73-b1a8-23870fcab235 does not exist
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:16:55 compute-0 sudo[458449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:55 compute-0 sudo[458449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:55 compute-0 sudo[458449]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:56 compute-0 sudo[458474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:16:56 compute-0 sudo[458474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:56 compute-0 sudo[458474]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:56 compute-0 sudo[458499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:56 compute-0 sudo[458499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:56 compute-0 sudo[458499]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:16:56 compute-0 sudo[458524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:16:56 compute-0 sudo[458524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:16:56 compute-0 podman[458588]: 2025-12-05 02:16:56.903593298 +0000 UTC m=+0.072201569 container create 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:16:56 compute-0 systemd[1]: Started libpod-conmon-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope.
Dec 05 02:16:56 compute-0 podman[458588]: 2025-12-05 02:16:56.873848442 +0000 UTC m=+0.042456723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:16:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.031621713 +0000 UTC m=+0.200229994 container init 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.051735548 +0000 UTC m=+0.220343799 container start 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.057637524 +0000 UTC m=+0.226245775 container attach 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:16:57 compute-0 hungry_turing[458604]: 167 167
Dec 05 02:16:57 compute-0 systemd[1]: libpod-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope: Deactivated successfully.
Dec 05 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.065012441 +0000 UTC m=+0.233620712 container died 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 02:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8bd229b227c2f5dbb3fbd49eee17dc109d7897911320d68c1c8697f6a0d367-merged.mount: Deactivated successfully.
Dec 05 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.132034513 +0000 UTC m=+0.300642744 container remove 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 02:16:57 compute-0 systemd[1]: libpod-conmon-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope: Deactivated successfully.
Dec 05 02:16:57 compute-0 nova_compute[349548]: 2025-12-05 02:16:57.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.425939178 +0000 UTC m=+0.111647977 container create 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.372644911 +0000 UTC m=+0.058353730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:16:57 compute-0 systemd[1]: Started libpod-conmon-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope.
Dec 05 02:16:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.626712496 +0000 UTC m=+0.312421315 container init 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.644830185 +0000 UTC m=+0.330538984 container start 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.651555884 +0000 UTC m=+0.337264683 container attach 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:16:57 compute-0 ceph-mon[192914]: pgmap v2038: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:16:58 compute-0 nova_compute[349548]: 2025-12-05 02:16:58.275 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:16:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:16:58 compute-0 sharp_montalcini[458644]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:16:58 compute-0 sharp_montalcini[458644]: --> relative data size: 1.0
Dec 05 02:16:58 compute-0 sharp_montalcini[458644]: --> All data devices are unavailable
Dec 05 02:16:59 compute-0 systemd[1]: libpod-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Deactivated successfully.
Dec 05 02:16:59 compute-0 podman[458628]: 2025-12-05 02:16:59.016432608 +0000 UTC m=+1.702141397 container died 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:16:59 compute-0 systemd[1]: libpod-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Consumed 1.301s CPU time.
Dec 05 02:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387-merged.mount: Deactivated successfully.
Dec 05 02:16:59 compute-0 podman[458628]: 2025-12-05 02:16:59.114506582 +0000 UTC m=+1.800215381 container remove 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:16:59 compute-0 systemd[1]: libpod-conmon-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Deactivated successfully.
Dec 05 02:16:59 compute-0 sudo[458524]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:59 compute-0 sudo[458685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:59 compute-0 sudo[458685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:59 compute-0 sudo[458685]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:59 compute-0 sudo[458710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:16:59 compute-0 sudo[458710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:59 compute-0 sudo[458710]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:59 compute-0 sudo[458735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:16:59 compute-0 sudo[458735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:59 compute-0 sudo[458735]: pam_unix(sudo:session): session closed for user root
Dec 05 02:16:59 compute-0 sudo[458760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:16:59 compute-0 sudo[458760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:16:59 compute-0 podman[158197]: time="2025-12-05T02:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:16:59 compute-0 ceph-mon[192914]: pgmap v2039: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8660 "" "Go-http-client/1.1"
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.229767765 +0000 UTC m=+0.086632954 container create 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.188004162 +0000 UTC m=+0.044869431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:17:00 compute-0 systemd[1]: Started libpod-conmon-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope.
Dec 05 02:17:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.374661875 +0000 UTC m=+0.231527144 container init 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.387527806 +0000 UTC m=+0.244393025 container start 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:17:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.395228992 +0000 UTC m=+0.252094261 container attach 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:17:00 compute-0 condescending_hugle[458837]: 167 167
Dec 05 02:17:00 compute-0 systemd[1]: libpod-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope: Deactivated successfully.
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.402733943 +0000 UTC m=+0.259599142 container died 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-666ba6de41b1383330473895541dcfd6248d0ef70fc972fd84f983cb73ad2be4-merged.mount: Deactivated successfully.
Dec 05 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.469968351 +0000 UTC m=+0.326833540 container remove 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 02:17:00 compute-0 systemd[1]: libpod-conmon-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope: Deactivated successfully.
Dec 05 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.754944875 +0000 UTC m=+0.091392908 container create cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.719245113 +0000 UTC m=+0.055693186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:17:00 compute-0 systemd[1]: Started libpod-conmon-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope.
Dec 05 02:17:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.917695096 +0000 UTC m=+0.254143209 container init cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.959982184 +0000 UTC m=+0.296430247 container start cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.966565029 +0000 UTC m=+0.303013142 container attach cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:17:01 compute-0 quirky_black[458875]: {
Dec 05 02:17:01 compute-0 quirky_black[458875]:     "0": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:         {
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "devices": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "/dev/loop3"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             ],
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_name": "ceph_lv0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_size": "21470642176",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "name": "ceph_lv0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "tags": {
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_name": "ceph",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.crush_device_class": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.encrypted": "0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_id": "0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.vdo": "0"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             },
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "vg_name": "ceph_vg0"
Dec 05 02:17:01 compute-0 quirky_black[458875]:         }
Dec 05 02:17:01 compute-0 quirky_black[458875]:     ],
Dec 05 02:17:01 compute-0 quirky_black[458875]:     "1": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:         {
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "devices": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "/dev/loop4"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             ],
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_name": "ceph_lv1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_size": "21470642176",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "name": "ceph_lv1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "tags": {
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_name": "ceph",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.crush_device_class": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.encrypted": "0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_id": "1",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.vdo": "0"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             },
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "vg_name": "ceph_vg1"
Dec 05 02:17:01 compute-0 quirky_black[458875]:         }
Dec 05 02:17:01 compute-0 quirky_black[458875]:     ],
Dec 05 02:17:01 compute-0 quirky_black[458875]:     "2": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:         {
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "devices": [
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "/dev/loop5"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             ],
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_name": "ceph_lv2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_size": "21470642176",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "name": "ceph_lv2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "tags": {
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.cluster_name": "ceph",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.crush_device_class": "",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.encrypted": "0",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osd_id": "2",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:                 "ceph.vdo": "0"
Dec 05 02:17:01 compute-0 quirky_black[458875]:             },
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "type": "block",
Dec 05 02:17:01 compute-0 quirky_black[458875]:             "vg_name": "ceph_vg2"
Dec 05 02:17:01 compute-0 quirky_black[458875]:         }
Dec 05 02:17:01 compute-0 quirky_black[458875]:     ]
Dec 05 02:17:01 compute-0 quirky_black[458875]: }
Dec 05 02:17:01 compute-0 ceph-mon[192914]: pgmap v2040: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:17:01 compute-0 systemd[1]: libpod-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope: Deactivated successfully.
Dec 05 02:17:01 compute-0 podman[458859]: 2025-12-05 02:17:01.847961993 +0000 UTC m=+1.184410026 container died cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 02:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512-merged.mount: Deactivated successfully.
Dec 05 02:17:01 compute-0 podman[458859]: 2025-12-05 02:17:01.929182304 +0000 UTC m=+1.265630347 container remove cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:17:01 compute-0 systemd[1]: libpod-conmon-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope: Deactivated successfully.
Dec 05 02:17:01 compute-0 sudo[458760]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:02 compute-0 sudo[458894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:17:02 compute-0 sudo[458894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:02 compute-0 sudo[458894]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:02 compute-0 sudo[458919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:17:02 compute-0 sudo[458919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:02 compute-0 sudo[458919]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:02 compute-0 sudo[458944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:17:02 compute-0 sudo[458944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:02 compute-0 sudo[458944]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:17:02 compute-0 nova_compute[349548]: 2025-12-05 02:17:02.409 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:02 compute-0 sudo[458969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:17:02 compute-0 sudo[458969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:02 compute-0 podman[459031]: 2025-12-05 02:17:02.967622439 +0000 UTC m=+0.086610273 container create 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:02.931425573 +0000 UTC m=+0.050413477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:17:03 compute-0 systemd[1]: Started libpod-conmon-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope.
Dec 05 02:17:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.104196135 +0000 UTC m=+0.223184009 container init 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.118798295 +0000 UTC m=+0.237786159 container start 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.125694619 +0000 UTC m=+0.244682463 container attach 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:17:03 compute-0 stoic_buck[459047]: 167 167
Dec 05 02:17:03 compute-0 systemd[1]: libpod-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope: Deactivated successfully.
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.132370646 +0000 UTC m=+0.251358470 container died 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 02:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fe5e4ea339b04d6b2479de3025b567e8cee4c88bd908135385c03b6264eb38-merged.mount: Deactivated successfully.
Dec 05 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.209477452 +0000 UTC m=+0.328465286 container remove 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:17:03 compute-0 systemd[1]: libpod-conmon-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope: Deactivated successfully.
Dec 05 02:17:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:03 compute-0 nova_compute[349548]: 2025-12-05 02:17:03.278 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.470120122 +0000 UTC m=+0.072558348 container create ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.434808691 +0000 UTC m=+0.037246947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:17:03 compute-0 systemd[1]: Started libpod-conmon-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope.
Dec 05 02:17:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
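Annotation: the four kernel messages above are XFS warning, once per bind-mounted path in the new container, that the underlying filesystem carries 32-bit inode timestamps, which run out at 0x7fffffff seconds after the Unix epoch (the classic year-2038 limit). A quick check of what that constant decodes to:

    # Decode the kernel's 0x7fffffff limit: the largest signed 32-bit
    # time_t, i.e. the "year 2038" boundary the xfs messages refer to.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00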
Dec 05 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.622394039 +0000 UTC m=+0.224832265 container init ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 05 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.637280407 +0000 UTC m=+0.239718653 container start ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.644370176 +0000 UTC m=+0.246808422 container attach ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:17:03 compute-0 ceph-mon[192914]: pgmap v2041: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:17:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:04 compute-0 infallible_ride[459085]: {
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_id": 0,
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "type": "bluestore"
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     },
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_id": 1,
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "type": "bluestore"
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     },
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_id": 2,
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:17:04 compute-0 infallible_ride[459085]:         "type": "bluestore"
Dec 05 02:17:04 compute-0 infallible_ride[459085]:     }
Dec 05 02:17:04 compute-0 infallible_ride[459085]: }
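Annotation: the JSON printed by the infallible_ride container above is a device inventory keyed by OSD UUID, mapping each of the three bluestore OSDs on this host to its LVM device. The shape matches ceph-volume's JSON list output (which exact subcommand cephadm invoked here is an assumption). A minimal sketch of pulling the osd_id to device mapping out of such a blob:

    import json

    # One entry copied from the log output above; the real blob has three.
    blob = """
    {
        "8c4de221-4fda-4bb1-b794-fc4329742186": {
            "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
            "type": "bluestore"
        }
    }
    """

    for osd in json.loads(blob).values():
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
    # -> osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)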
Dec 05 02:17:04 compute-0 systemd[1]: libpod-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Deactivated successfully.
Dec 05 02:17:04 compute-0 systemd[1]: libpod-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Consumed 1.158s CPU time.
Dec 05 02:17:04 compute-0 podman[459121]: 2025-12-05 02:17:04.895667189 +0000 UTC m=+0.070327676 container died ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:17:04 compute-0 podman[459122]: 2025-12-05 02:17:04.920736253 +0000 UTC m=+0.092051206 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca-merged.mount: Deactivated successfully.
Dec 05 02:17:04 compute-0 podman[459120]: 2025-12-05 02:17:04.950025276 +0000 UTC m=+0.118542660 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 02:17:04 compute-0 podman[459121]: 2025-12-05 02:17:04.975910653 +0000 UTC m=+0.150571090 container remove ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:17:04 compute-0 systemd[1]: libpod-conmon-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Deactivated successfully.
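Annotation: the create, init, start, attach, died, remove trail above, closed out by systemd deactivating the libpod-conmon scope, is the full lifecycle of a short-lived cephadm helper container: podman creates it, attaches to capture the JSON inventory on stdout, and removes it as soon as it exits. A sketch of watching the same event stream live, assuming the podman CLI on the host (JSON field names can vary slightly across podman versions):

    # Stream podman container lifecycle events, one JSON object per line.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # e.g. "create ffe6b1ae6d0f quay.io/ceph/ceph@sha256:..."
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Image"))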
Dec 05 02:17:05 compute-0 sudo[458969]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:17:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:17:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:17:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:17:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d6964181-190d-40d3-aa34-b4790c58d58f does not exist
Dec 05 02:17:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ca745ec6-c8c6-4cf2-8985-5e583ab89944 does not exist
Dec 05 02:17:05 compute-0 sudo[459174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:17:05 compute-0 sudo[459174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:05 compute-0 sudo[459174]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:05 compute-0 sudo[459199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:17:05 compute-0 sudo[459199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:17:05 compute-0 sudo[459199]: pam_unix(sudo:session): session closed for user root
Dec 05 02:17:05 compute-0 ceph-mon[192914]: pgmap v2042: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:17:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:07 compute-0 nova_compute[349548]: 2025-12-05 02:17:07.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:07 compute-0 ceph-mon[192914]: pgmap v2043: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:08 compute-0 nova_compute[349548]: 2025-12-05 02:17:08.281 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:09 compute-0 ceph-mon[192914]: pgmap v2044: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1 op/s
Dec 05 02:17:10 compute-0 podman[459224]: 2025-12-05 02:17:10.676391625 +0000 UTC m=+0.092412786 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0)
Dec 05 02:17:10 compute-0 podman[459225]: 2025-12-05 02:17:10.698107995 +0000 UTC m=+0.111812931 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9)
Dec 05 02:17:10 compute-0 podman[459226]: 2025-12-05 02:17:10.706326606 +0000 UTC m=+0.110595757 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:17:11 compute-0 ceph-mon[192914]: pgmap v2045: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1 op/s
Dec 05 02:17:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:17:12 compute-0 nova_compute[349548]: 2025-12-05 02:17:12.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:13 compute-0 nova_compute[349548]: 2025-12-05 02:17:13.285 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:13 compute-0 ceph-mon[192914]: pgmap v2046: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:17:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:17:15 compute-0 ceph-mon[192914]: pgmap v2047: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:17:16
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.log']
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
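Annotation: "prepared 0/10 changes" is the upmap balancer reporting that, across the eleven pools it considered, it found no PG placements worth overriding this round (the 10 appears to be its per-iteration cap on candidate changes). The same state can be read back from the mgr; a sketch assuming the ceph CLI and an admin keyring (the exact JSON keys can vary by release):

    # Ask the mgr balancer module for its current state.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True
    )
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # e.g. True upmap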
Dec 05 02:17:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 7.8 KiB/s wr, 4 op/s
Dec 05 02:17:17 compute-0 nova_compute[349548]: 2025-12-05 02:17:17.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:17 compute-0 ceph-mon[192914]: pgmap v2048: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 7.8 KiB/s wr, 4 op/s
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:18 compute-0 nova_compute[349548]: 2025-12-05 02:17:18.288 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:17:19 compute-0 ceph-mon[192914]: pgmap v2049: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:17:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:17:21 compute-0 ceph-mon[192914]: pgmap v2050: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:17:21 compute-0 podman[459283]: 2025-12-05 02:17:21.705519075 +0000 UTC m=+0.109100065 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:17:21 compute-0 podman[459282]: 2025-12-05 02:17:21.729296323 +0000 UTC m=+0.136257818 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:17:21 compute-0 podman[459285]: 2025-12-05 02:17:21.73276719 +0000 UTC m=+0.116910634 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:17:21 compute-0 podman[459284]: 2025-12-05 02:17:21.766739665 +0000 UTC m=+0.157660619 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
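Annotation: each health_status=healthy line in this batch is podman's healthcheck timer firing for one EDPM container and running the command from the 'healthcheck' entry in its config_data (the /openstack/healthcheck script bind-mounted into the container). To trigger and read the same check by hand, a sketch assuming the podman CLI (the inspect format path below is the one recent podman releases use; older ones spelled it .State.Healthcheck):

    # Run a container's configured healthcheck once and read the result.
    import subprocess

    name = "ovn_metadata_agent"  # any healthchecked container from the log
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        text=True,
    ).strip()
    print(rc, status)  # e.g. 0 healthy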
Dec 05 02:17:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 11 KiB/s wr, 3 op/s
Dec 05 02:17:22 compute-0 nova_compute[349548]: 2025-12-05 02:17:22.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:23 compute-0 nova_compute[349548]: 2025-12-05 02:17:23.292 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:23 compute-0 ceph-mon[192914]: pgmap v2051: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 11 KiB/s wr, 3 op/s
Dec 05 02:17:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 0 op/s
Dec 05 02:17:25 compute-0 ceph-mon[192914]: pgmap v2052: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 0 op/s
Dec 05 02:17:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015166383815198282 of space, bias 1.0, pg target 0.45499151445594843 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
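Annotation: the pg_autoscaler figures above are internally consistent. With three OSDs and the default mon_target_pg_per_osd of 100 (both assumptions, but they fit the logged numbers), the cluster's PG budget is about 300, and each pool's "pg target" is simply capacity_ratio times bias times that budget; the final quantization to a power-of-two pg_num follows separate rules not derivable from these lines alone. A worked check against three of the pools:

    # Reproduce the autoscaler's "pg target" values from the log.
    # Assumed budget: mon_target_pg_per_osd (default 100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    pools = {                      # capacity_ratio, bias (from the log)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0015166383815198282, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr               -> 0.0021557249951162337  (as logged)
    # vms                -> 0.4549915144559484...  (as logged)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (as logged)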
Dec 05 02:17:27 compute-0 nova_compute[349548]: 2025-12-05 02:17:27.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:27 compute-0 ceph-mon[192914]: pgmap v2053: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Dec 05 02:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.288689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048288780, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 954, "num_deletes": 256, "total_data_size": 1374180, "memory_usage": 1400784, "flush_reason": "Manual Compaction"}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec 05 02:17:28 compute-0 nova_compute[349548]: 2025-12-05 02:17:28.295 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048305446, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1350701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41683, "largest_seqno": 42636, "table_properties": {"data_size": 1345919, "index_size": 2370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10005, "raw_average_key_size": 19, "raw_value_size": 1336462, "raw_average_value_size": 2555, "num_data_blocks": 106, "num_entries": 523, "num_filter_entries": 523, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900958, "oldest_key_time": 1764900958, "file_creation_time": 1764901048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 16799 microseconds, and 9564 cpu microseconds.
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.305497) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1350701 bytes OK
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.305521) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308372) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308394) EVENT_LOG_v1 {"time_micros": 1764901048308387, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1369598, prev total WAL file size 1369598, number of live WAL files 2.
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.309680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1319KB)], [98(7608KB)]
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048309765, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9142222, "oldest_snapshot_seqno": -1}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5767 keys, 9039648 bytes, temperature: kUnknown
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048375266, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9039648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9001125, "index_size": 22989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 150197, "raw_average_key_size": 26, "raw_value_size": 8896815, "raw_average_value_size": 1542, "num_data_blocks": 919, "num_entries": 5767, "num_filter_entries": 5767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.375660) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9039648 bytes
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.378222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.2 rd, 137.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 6291, records dropped: 524 output_compression: NoCompression
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.378254) EVENT_LOG_v1 {"time_micros": 1764901048378239, "job": 58, "event": "compaction_finished", "compaction_time_micros": 65662, "compaction_time_cpu_micros": 37590, "output_level": 6, "num_output_files": 1, "total_output_size": 9039648, "num_input_records": 6291, "num_output_records": 5767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048378850, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048381742, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.309461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
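Annotation: the compaction summary above can be re-derived from the EVENT_LOG byte counts: job 58 read the 1350701-byte L0 table #100 plus the 7791521-byte L6 table #98 (input_data_size 9142222) and wrote the 9039648-byte table #101 in 65662 microseconds. Write-amplification is output over the L0 input, read-write-amplification adds the reads back in, and the MB/sec figures are just bytes per microsecond:

    # Re-derive rocksdb's summary figures for JOB 58 from the log.
    l0_in = 1_350_701        # table #100, the freshly flushed L0 file
    total_in = 9_142_222     # input_data_size: tables #100 + #98
    out = 9_039_648          # table #101 written to L6
    micros = 65_662          # compaction_time_micros

    print(round(out / l0_in, 1))               # 6.7   write-amplify
    print(round((total_in + out) / l0_in, 1))  # 13.5  read-write-amplify
    print(round(total_in / micros, 1))         # 139.2 rd MB/sec
    print(round(out / micros, 1))              # 137.7 wr MB/sec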
Dec 05 02:17:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec 05 02:17:29 compute-0 ceph-mon[192914]: pgmap v2054: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec 05 02:17:29 compute-0 podman[158197]: time="2025-12-05T02:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
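Annotation: the two @ - - GET lines are the podman API service (PID 158197) logging REST calls from prometheus-podman-exporter, which reaches the libpod API through /run/podman/podman.sock (the CONTAINER_HOST value in the exporter's config_data above). A stdlib-only sketch of making the same containers/json call; the socket path and /v4.9.3 prefix are taken from the log:

    # Call the libpod REST API over the podman unix socket, stdlib only.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host is ignored for AF_UNIX
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Id"][:12], ctr["Names"], ctr["State"])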
Dec 05 02:17:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 0 op/s
Dec 05 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
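Annotation: the exporter errors above are openstack-network-exporter probing daemons that are absent or unreachable here: ovn-northd runs on controller nodes, not on a compute node that only hosts ovn-controller; the ovsdb-server control socket is evidently not where the exporter looks for it; and the two dpif-netdev calls fail because this host has no userspace (PMD) datapath. A sketch of the socket probe itself; the run directories below are assumptions for a typical install:

    # Look for appctl control sockets (<daemon>.<pid>.ctl) the way the
    # exporter does; empty results reproduce the errors above.
    import glob

    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")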
Dec 05 02:17:31 compute-0 ceph-mon[192914]: pgmap v2055: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 0 op/s
Dec 05 02:17:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec 05 02:17:32 compute-0 nova_compute[349548]: 2025-12-05 02:17:32.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:33 compute-0 nova_compute[349548]: 2025-12-05 02:17:33.298 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:33 compute-0 ceph-mon[192914]: pgmap v2056: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec 05 02:17:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:35 compute-0 ceph-mon[192914]: pgmap v2057: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:35 compute-0 podman[459365]: 2025-12-05 02:17:35.683538387 +0000 UTC m=+0.097355716 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:17:35 compute-0 podman[459366]: 2025-12-05 02:17:35.689815883 +0000 UTC m=+0.097946262 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:17:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:37 compute-0 nova_compute[349548]: 2025-12-05 02:17:37.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:37 compute-0 ceph-mon[192914]: pgmap v2058: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:38 compute-0 nova_compute[349548]: 2025-12-05 02:17:38.300 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:39 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 05 02:17:39 compute-0 ceph-mon[192914]: pgmap v2059: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 05 02:17:41 compute-0 podman[459409]: 2025-12-05 02:17:41.026863687 +0000 UTC m=+0.103397835 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:17:41 compute-0 podman[459408]: 2025-12-05 02:17:41.036357114 +0000 UTC m=+0.113372545 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:17:41 compute-0 podman[459410]: 2025-12-05 02:17:41.060517792 +0000 UTC m=+0.124081476 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec 05 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.070 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:17:41 compute-0 ceph-mon[192914]: pgmap v2060: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.838 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.839 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.839 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:17:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:42 compute-0 nova_compute[349548]: 2025-12-05 02:17:42.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.239 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.265 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.266 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.303 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:43 compute-0 ceph-mon[192914]: pgmap v2061: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:17:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:17:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:17:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:17:45 compute-0 ceph-mon[192914]: pgmap v2062: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:17:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:17:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:17:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Dec 05 02:17:47 compute-0 nova_compute[349548]: 2025-12-05 02:17:47.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:47 compute-0 ceph-mon[192914]: pgmap v2063: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.096 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.099 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.100 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.305 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:17:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/283683035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.637 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.795 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.801 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.810 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.371 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.373 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.373 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.374 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.477 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.479 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.480 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.481 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.558 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:17:49 compute-0 ceph-mon[192914]: pgmap v2064: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/283683035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:17:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473721791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.065 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.077 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.241 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.244 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.244 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:17:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/473721791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.246 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.292 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:17:51 compute-0 ceph-mon[192914]: pgmap v2065: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:52 compute-0 nova_compute[349548]: 2025-12-05 02:17:52.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:52 compute-0 podman[459508]: 2025-12-05 02:17:52.72546685 +0000 UTC m=+0.121065751 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:17:52 compute-0 podman[459507]: 2025-12-05 02:17:52.744620058 +0000 UTC m=+0.147332389 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 05 02:17:52 compute-0 podman[459515]: 2025-12-05 02:17:52.766498023 +0000 UTC m=+0.137844683 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 05 02:17:52 compute-0 podman[459509]: 2025-12-05 02:17:52.802390691 +0000 UTC m=+0.188757343 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:53 compute-0 nova_compute[349548]: 2025-12-05 02:17:53.310 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:53 compute-0 ceph-mon[192914]: pgmap v2066: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:55 compute-0 ceph-mon[192914]: pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.216 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:17:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:57 compute-0 nova_compute[349548]: 2025-12-05 02:17:57.448 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:57 compute-0 ceph-mon[192914]: pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:17:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:17:58 compute-0 nova_compute[349548]: 2025-12-05 02:17:58.313 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:17:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Dec 05 02:17:59 compute-0 ceph-mon[192914]: pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Dec 05 02:17:59 compute-0 podman[158197]: time="2025-12-05T02:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8663 "" "Go-http-client/1.1"
Dec 05 02:18:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:18:01 compute-0 ceph-mon[192914]: pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:02 compute-0 nova_compute[349548]: 2025-12-05 02:18:02.451 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:03 compute-0 nova_compute[349548]: 2025-12-05 02:18:03.317 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:03 compute-0 ceph-mon[192914]: pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:05 compute-0 sudo[459591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:05 compute-0 sudo[459591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:05 compute-0 sudo[459591]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:05 compute-0 sudo[459616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:18:05 compute-0 sudo[459616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:05 compute-0 sudo[459616]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:05 compute-0 sudo[459641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:05 compute-0 sudo[459641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:05 compute-0 sudo[459641]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:05 compute-0 sudo[459666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 02:18:05 compute-0 sudo[459666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:05 compute-0 ceph-mon[192914]: pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:06 compute-0 sudo[459666]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:18:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:18:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:06 compute-0 podman[459705]: 2025-12-05 02:18:06.127532915 +0000 UTC m=+0.145823557 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:18:06 compute-0 podman[459704]: 2025-12-05 02:18:06.127965877 +0000 UTC m=+0.157478094 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:18:06 compute-0 sudo[459747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:06 compute-0 sudo[459747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:06 compute-0 sudo[459747]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:06 compute-0 sudo[459772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:18:06 compute-0 sudo[459772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:06 compute-0 sudo[459772]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:06 compute-0 sudo[459797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:06 compute-0 sudo[459797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:06 compute-0 sudo[459797]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:06 compute-0 sudo[459822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:18:06 compute-0 sudo[459822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:07 compute-0 ceph-mon[192914]: pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:07 compute-0 sudo[459822]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7966fe72-9659-42a0-9d24-adeee677e91c does not exist
Dec 05 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 74ef1ba2-9207-4cfe-8566-e095ae41bdcc does not exist
Dec 05 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 43d59e8b-a924-4f69-95f2-4a8b6b678bf7 does not exist
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:18:07 compute-0 nova_compute[349548]: 2025-12-05 02:18:07.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:07 compute-0 sudo[459877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:07 compute-0 sudo[459877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:07 compute-0 sudo[459877]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:07 compute-0 sudo[459902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:18:07 compute-0 sudo[459902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:07 compute-0 sudo[459902]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:07 compute-0 sudo[459927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:07 compute-0 sudo[459927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:07 compute-0 sudo[459927]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:07 compute-0 sudo[459952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:18:07 compute-0 sudo[459952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:18:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:08 compute-0 nova_compute[349548]: 2025-12-05 02:18:08.319 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.450693943 +0000 UTC m=+0.078479975 container create e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.419690932 +0000 UTC m=+0.047476944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:08 compute-0 systemd[1]: Started libpod-conmon-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope.
Dec 05 02:18:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.595240253 +0000 UTC m=+0.223026305 container init e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.61258179 +0000 UTC m=+0.240367792 container start e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.618249859 +0000 UTC m=+0.246035891 container attach e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:18:08 compute-0 romantic_hoover[460033]: 167 167
Dec 05 02:18:08 compute-0 systemd[1]: libpod-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope: Deactivated successfully.
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.629376691 +0000 UTC m=+0.257162723 container died e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 02:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-35df3b719f303882b5de77a9d1fea60fdc19a4c6592c68b51f504ee4050c7d27-merged.mount: Deactivated successfully.
Dec 05 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.707435544 +0000 UTC m=+0.335221536 container remove e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 02:18:08 compute-0 systemd[1]: libpod-conmon-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope: Deactivated successfully.
Dec 05 02:18:08 compute-0 podman[460056]: 2025-12-05 02:18:08.970025549 +0000 UTC m=+0.072531228 container create 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:08.949114182 +0000 UTC m=+0.051619891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:09 compute-0 systemd[1]: Started libpod-conmon-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope.
Dec 05 02:18:09 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:09 compute-0 ceph-mon[192914]: pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.163372969 +0000 UTC m=+0.265878738 container init 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.181725805 +0000 UTC m=+0.284231514 container start 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.188506405 +0000 UTC m=+0.291012114 container attach 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 05 02:18:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:10 compute-0 great_grothendieck[460073]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:18:10 compute-0 great_grothendieck[460073]: --> relative data size: 1.0
Dec 05 02:18:10 compute-0 great_grothendieck[460073]: --> All data devices are unavailable
Dec 05 02:18:10 compute-0 systemd[1]: libpod-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Deactivated successfully.
Dec 05 02:18:10 compute-0 podman[460056]: 2025-12-05 02:18:10.577822704 +0000 UTC m=+1.680328443 container died 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:18:10 compute-0 systemd[1]: libpod-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Consumed 1.315s CPU time.
Dec 05 02:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c-merged.mount: Deactivated successfully.
Dec 05 02:18:10 compute-0 podman[460056]: 2025-12-05 02:18:10.683667467 +0000 UTC m=+1.786173156 container remove 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:18:10 compute-0 systemd[1]: libpod-conmon-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Deactivated successfully.
Dec 05 02:18:10 compute-0 sudo[459952]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:10 compute-0 sudo[460114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:10 compute-0 sudo[460114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:10 compute-0 sudo[460114]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:10 compute-0 sudo[460139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:18:10 compute-0 sudo[460139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:10 compute-0 sudo[460139]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:11 compute-0 sudo[460164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:11 compute-0 sudo[460164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:11 compute-0 sudo[460164]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:11 compute-0 sudo[460207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:18:11 compute-0 sudo[460207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:11 compute-0 podman[460190]: 2025-12-05 02:18:11.291433167 +0000 UTC m=+0.121647208 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 02:18:11 compute-0 podman[460188]: 2025-12-05 02:18:11.298572667 +0000 UTC m=+0.131286648 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:18:11 compute-0 podman[460189]: 2025-12-05 02:18:11.298928017 +0000 UTC m=+0.120664160 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:18:11 compute-0 ceph-mon[192914]: pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:11 compute-0 podman[460310]: 2025-12-05 02:18:11.902671724 +0000 UTC m=+0.113322674 container create 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 02:18:11 compute-0 podman[460310]: 2025-12-05 02:18:11.847577007 +0000 UTC m=+0.058228037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:11 compute-0 systemd[1]: Started libpod-conmon-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope.
Dec 05 02:18:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.067813932 +0000 UTC m=+0.278464902 container init 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.085566631 +0000 UTC m=+0.296217571 container start 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.09159159 +0000 UTC m=+0.302242530 container attach 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:18:12 compute-0 quizzical_babbage[460326]: 167 167
Dec 05 02:18:12 compute-0 systemd[1]: libpod-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope: Deactivated successfully.
Dec 05 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.101709034 +0000 UTC m=+0.312359984 container died 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-933ddd8ac423354d79f983853331efbe1bf6b94eabbf619cb1c124d85bc275cb-merged.mount: Deactivated successfully.
Dec 05 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.174961532 +0000 UTC m=+0.385612442 container remove 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:18:12 compute-0 systemd[1]: libpod-conmon-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope: Deactivated successfully.
Dec 05 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.409318774 +0000 UTC m=+0.072740364 container create 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:18:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:12 compute-0 nova_compute[349548]: 2025-12-05 02:18:12.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.378732145 +0000 UTC m=+0.042153715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:12 compute-0 systemd[1]: Started libpod-conmon-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope.
Dec 05 02:18:12 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.579756371 +0000 UTC m=+0.243177941 container init 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.607580212 +0000 UTC m=+0.271001792 container start 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.615382991 +0000 UTC m=+0.278804581 container attach 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:18:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:13 compute-0 nova_compute[349548]: 2025-12-05 02:18:13.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]: {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     "0": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "devices": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "/dev/loop3"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             ],
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_name": "ceph_lv0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_size": "21470642176",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "name": "ceph_lv0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "tags": {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_name": "ceph",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.crush_device_class": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.encrypted": "0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_id": "0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.vdo": "0"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             },
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "vg_name": "ceph_vg0"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         }
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     ],
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     "1": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "devices": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "/dev/loop4"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             ],
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_name": "ceph_lv1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_size": "21470642176",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "name": "ceph_lv1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "tags": {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_name": "ceph",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.crush_device_class": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.encrypted": "0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_id": "1",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.vdo": "0"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             },
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "vg_name": "ceph_vg1"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         }
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     ],
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     "2": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "devices": [
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "/dev/loop5"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             ],
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_name": "ceph_lv2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_size": "21470642176",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "name": "ceph_lv2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "tags": {
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.cluster_name": "ceph",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.crush_device_class": "",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.encrypted": "0",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osd_id": "2",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:                 "ceph.vdo": "0"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             },
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "type": "block",
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:             "vg_name": "ceph_vg2"
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:         }
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]:     ]
Dec 05 02:18:13 compute-0 pedantic_stonebraker[460364]: }
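[editor's note] The block above is the JSON emitted by `ceph-volume lvm list --format json` (the command dispatched via sudo at 02:18:11), printed line-by-line through the pedantic_stonebraker container. A minimal sketch for consuming that payload, assuming only the structure visible above (top-level keys are OSD ids; each maps to a list of logical volumes whose ceph.* metadata lives under "tags"); summarize_lvm_list is a hypothetical helper name, not part of ceph-volume:

import json
import sys

def summarize_lvm_list(payload: str) -> list:
    # Top-level keys are OSD ids ("0", "1", "2"); each value is a list
    # of LVs carrying ceph.* tags, as in the log output above.
    data = json.loads(payload)
    rows = []
    for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            rows.append({
                "osd_id": osd_id,
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "lv_path": lv.get("lv_path"),
                "devices": lv.get("devices", []),
                "encrypted": tags.get("ceph.encrypted") == "1",
            })
    return rows

if __name__ == "__main__":
    # e.g. pipe the JSON block captured in the log into this script
    for row in summarize_lvm_list(sys.stdin.read()):
        print(row)

Run against the payload above, this yields one row per OSD (ids 0-2), each backed by a single loop device (/dev/loop3-5) through its ceph_vgN/ceph_lvN logical volume.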
Dec 05 02:18:13 compute-0 ceph-mon[192914]: pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:18:13 compute-0 systemd[1]: libpod-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope: Deactivated successfully.
Dec 05 02:18:13 compute-0 podman[460373]: 2025-12-05 02:18:13.623263188 +0000 UTC m=+0.054867913 container died 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a-merged.mount: Deactivated successfully.
Dec 05 02:18:13 compute-0 podman[460373]: 2025-12-05 02:18:13.727855385 +0000 UTC m=+0.159460070 container remove 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:18:13 compute-0 systemd[1]: libpod-conmon-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope: Deactivated successfully.
Dec 05 02:18:13 compute-0 sudo[460207]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:13 compute-0 sudo[460388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:13 compute-0 sudo[460388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:13 compute-0 sudo[460388]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:14 compute-0 sudo[460413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:18:14 compute-0 sudo[460413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:14 compute-0 sudo[460413]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:14 compute-0 sudo[460438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:14 compute-0 sudo[460438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:14 compute-0 sudo[460438]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:14 compute-0 sudo[460463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:18:14 compute-0 sudo[460463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:14 compute-0 podman[460526]: 2025-12-05 02:18:14.859255201 +0000 UTC m=+0.097860429 container create f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:18:14 compute-0 podman[460526]: 2025-12-05 02:18:14.815527453 +0000 UTC m=+0.054132751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:14 compute-0 systemd[1]: Started libpod-conmon-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope.
Dec 05 02:18:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.000434487 +0000 UTC m=+0.239039765 container init f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.01123685 +0000 UTC m=+0.249842078 container start f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.016321163 +0000 UTC m=+0.254926471 container attach f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:18:15 compute-0 quirky_feynman[460542]: 167 167
Dec 05 02:18:15 compute-0 systemd[1]: libpod-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope: Deactivated successfully.
Dec 05 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.021812977 +0000 UTC m=+0.260418215 container died f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 05 02:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-914ef960946ebc4ce862bfd48ece97a703e044cd6a709d9204e444fc70e04949-merged.mount: Deactivated successfully.
Dec 05 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.094433457 +0000 UTC m=+0.333038715 container remove f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:18:15 compute-0 systemd[1]: libpod-conmon-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope: Deactivated successfully.
Dec 05 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.407826129 +0000 UTC m=+0.083054924 container create a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.379473772 +0000 UTC m=+0.054702557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:18:15 compute-0 systemd[1]: Started libpod-conmon-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope.
Dec 05 02:18:15 compute-0 ceph-mon[192914]: pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.587809144 +0000 UTC m=+0.263037969 container init a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.602678201 +0000 UTC m=+0.277906966 container start a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.608427623 +0000 UTC m=+0.283656418 container attach a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:18:16
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', 'backups']
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:18:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]: {
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_id": 0,
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "type": "bluestore"
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     },
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_id": 1,
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "type": "bluestore"
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     },
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_id": 2,
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:         "type": "bluestore"
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]:     }
Dec 05 02:18:16 compute-0 flamboyant_khorana[460580]: }
Dec 05 02:18:16 compute-0 systemd[1]: libpod-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Deactivated successfully.
Dec 05 02:18:16 compute-0 systemd[1]: libpod-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Consumed 1.278s CPU time.
Dec 05 02:18:16 compute-0 podman[460564]: 2025-12-05 02:18:16.888342989 +0000 UTC m=+1.563571754 container died a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e-merged.mount: Deactivated successfully.
Dec 05 02:18:17 compute-0 podman[460564]: 2025-12-05 02:18:17.203857711 +0000 UTC m=+1.879086506 container remove a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:18:17 compute-0 sudo[460463]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:18:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:18:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f17468f8-2e18-492d-8c82-30d5bfe951d7 does not exist
Dec 05 02:18:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d7fb63d6-c47d-42ed-96f6-7ae9b389c644 does not exist
Dec 05 02:18:17 compute-0 systemd[1]: libpod-conmon-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Deactivated successfully.
Dec 05 02:18:17 compute-0 sudo[460624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:18:17 compute-0 sudo[460624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:17 compute-0 sudo[460624]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:17 compute-0 nova_compute[349548]: 2025-12-05 02:18:17.460 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:17 compute-0 ceph-mon[192914]: pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:18:17 compute-0 sudo[460649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:18:17 compute-0 sudo[460649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:18:17 compute-0 sudo[460649]: pam_unix(sudo:session): session closed for user root
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:18:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:18 compute-0 nova_compute[349548]: 2025-12-05 02:18:18.328 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:19 compute-0 ceph-mon[192914]: pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:21 compute-0 ceph-mon[192914]: pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:22 compute-0 nova_compute[349548]: 2025-12-05 02:18:22.462 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:23 compute-0 nova_compute[349548]: 2025-12-05 02:18:23.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:23 compute-0 ceph-mon[192914]: pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:23 compute-0 podman[460676]: 2025-12-05 02:18:23.718621052 +0000 UTC m=+0.109090335 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:18:23 compute-0 podman[460675]: 2025-12-05 02:18:23.747798151 +0000 UTC m=+0.135479966 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:18:23 compute-0 podman[460678]: 2025-12-05 02:18:23.757397691 +0000 UTC m=+0.130603389 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter)
Dec 05 02:18:23 compute-0 podman[460677]: 2025-12-05 02:18:23.794204875 +0000 UTC m=+0.184090962 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:18:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:25 compute-0 ceph-mon[192914]: pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:18:27 compute-0 nova_compute[349548]: 2025-12-05 02:18:27.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:27 compute-0 ceph-mon[192914]: pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:28 compute-0 nova_compute[349548]: 2025-12-05 02:18:28.332 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:29 compute-0 ceph-mon[192914]: pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:29 compute-0 podman[158197]: time="2025-12-05T02:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec 05 02:18:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:18:31 compute-0 ceph-mon[192914]: pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:32 compute-0 nova_compute[349548]: 2025-12-05 02:18:32.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:33 compute-0 nova_compute[349548]: 2025-12-05 02:18:33.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:33 compute-0 ceph-mon[192914]: pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:35 compute-0 ceph-mon[192914]: pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:36 compute-0 podman[460762]: 2025-12-05 02:18:36.714416327 +0000 UTC m=+0.114058144 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:18:36 compute-0 podman[460761]: 2025-12-05 02:18:36.732429583 +0000 UTC m=+0.136348360 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 05 02:18:37 compute-0 nova_compute[349548]: 2025-12-05 02:18:37.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:37 compute-0 ceph-mon[192914]: pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.325 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.326 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:18:38 compute-0 nova_compute[349548]: 2025-12-05 02:18:38.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
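Each pollster run between a "Polling" and a "Finished polling" line follows the same pattern: coordination check, heartbeat, sample collection, with the heartbeat persisted asynchronously by a second worker (note pid 12 vs 14 in the log). A hedged sketch of that loop; the names are hypothetical stand-ins for _internal_pollster_run, not ceilometer's real signatures:

    from datetime import datetime, timezone

    heartbeats = {}  # read by a status thread that logs 'Updated heartbeat ...'

    def run_pollster(name, get_samples, resources, hashring_groups=()):
        # 1. Coordination: only pollsters whose source names a hashring group
        #    need tooz coordination; every pollster here logs group name [None].
        if name in hashring_groups:
            pass  # would consult the coordinator; out of scope for this sketch
        # 2. Heartbeat, picked up asynchronously by the status updater.
        heartbeats[name] = datetime.now(timezone.utc)
        # 3. Collect samples for every discovered resource.
        return list(get_samples(resources))

    samples = run_pollster(
        'disk.root.size',
        lambda rs: ({'resource_id': r, 'volume': 1} for r in rs),
        ['292fd084-0808-4a80-adc1-6ab1f28e188a'])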
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:18:38.343853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:18:38.347270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.367 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.368 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.391 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
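Each instance yields two disk.device.capacity samples: 1073741824 B is exactly the 1 GiB root disk declared by the m1.nano flavor (disk: 1), and 509952 B is a second, much smaller block device (plausibly a config drive; the log does not name the device). Checking the arithmetic:

    root_bytes, small_bytes = 1073741824, 509952
    print(root_bytes / 2**30)   # 1.0 GiB -- matches flavor m1.nano, disk=1
    print(small_bytes / 1024)   # 498.0 KiB -- the second device per instance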
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:18:38.393339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:18:38.395163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.440 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
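disk.device.read.latency is a cumulative counter in nanoseconds (libvirt accumulates total time spent on read operations per device), so the large values above translate to only a few seconds of total read time:

    # Both devices of instance 292fd084..., values as sampled above.
    for dev_ns in (3200956192, 237184283):
        print(dev_ns / 1e9, "s cumulative read time")
    # 3.200956192 s and 0.237184283 s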
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:18:38.496999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:18:38.499280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:18:38.501499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:18:38.503771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:18:38.505659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.531 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
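power.state volume: 1 is libvirt's virDomainState value for a running domain. The full enum, per the libvirt documentation:

    VIR_DOMAIN_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(VIR_DOMAIN_STATE[1])  # 'running' -- consistent with OS-EXT-STS:vm_state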
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10383107676 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:18:38.559378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:18:38.561636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:18:38.563725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.568 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.573 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:18:38.574493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:18:38.576017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:18:38.578343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:18:38.579745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:18:38.581362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
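The .delta variant is derived from the cumulative counter: instance 292fd084... reports network.outgoing.bytes 2250 and a delta of 630, which implies a cached baseline of 1620 from the previous cycle (and instance e76adc0a..., at 1620 cumulative with delta 0, has not transmitted since then). A hedged sketch of that bookkeeping; the cache layout is hypothetical:

    cache = {}  # resource id -> last cumulative reading

    def delta(resource_id, cumulative):
        prev = cache.get(resource_id)
        cache[resource_id] = cumulative
        return None if prev is None else max(cumulative - prev, 0)

    delta('292fd084', 1620)         # earlier cycle establishes the baseline
    print(delta('292fd084', 2250))  # 630, matching the logged delta sample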
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
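memory.usage is reported in MiB while libvirt's memoryStats() counters are in KiB, hence the fractional values; against the flavor's 128 MiB of RAM both guests sit near one-third utilization:

    kib = 43496                     # 42.4765625 MiB * 1024, as sampled above
    print(kib / 1024)               # 42.4765625 MiB for instance 292fd084...
    print(kib / 1024 / 128 * 100)   # ~33.2 % of the m1.nano flavor's RAM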
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
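network.incoming.bytes is a cumulative per-vNIC counter taken from libvirt. A sketch using interfaceStats(), which returns the 8-tuple (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop); the tap device name used here is the one nova logs for this instance further down:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('292fd084-0808-4a80-adc1-6ab1f28e188a')
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats('tap706f9405-40')
    print('network.incoming.bytes', rx_bytes)   # 1520 in the poll above
    print('network.outgoing.packets', tx_pkts)
    conn.close()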
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:18:38.583096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:18:38.584570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
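The .delta meters are derived rather than read: each is the difference between the current and the previous poll of the same cumulative counter, so e76adc0a's 630 B delta against its 1976 B network.incoming.bytes reading implies the previous poll saw about 1346 B. A toy cache-and-subtract sketch (the cache layout is illustrative):

    _prev = {}  # (instance_uuid, meter) -> last cumulative reading

    def delta(instance_uuid, meter, current):
        key = (instance_uuid, meter)
        previous = _prev.get(key)
        _prev[key] = current
        if previous is None or current < previous:  # first poll, or counter reset
            return 0
        return current - previous

    delta('e76adc0a', 'network.incoming.bytes', 1346)  # -> 0 (first sample)
    delta('e76adc0a', 'network.incoming.bytes', 1976)  # -> 630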
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 333710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:18:38.586194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:18:38.587529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 172180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
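The cpu meter is cumulative guest CPU time in nanoseconds, so 333710000000 ns is roughly 333.7 s consumed since the domain started; a utilization figure only falls out of two polls. A sketch of the usual derivation (the previous reading and the 30 s window are invented for illustration):

    def cpu_util_percent(prev_ns, curr_ns, interval_s, n_vcpus):
        # share of the CPU time available to the guest that it actually used
        return 100.0 * (curr_ns - prev_ns) / (interval_s * n_vcpus * 1e9)

    cpu_util_percent(332210000000, 333710000000, 30, 1)  # -> 5.0 (%)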
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:18:38.589157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:18:38.590972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:18:38.592716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
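Which meters each polling task carries, and on what cadence, is driven by ceilometer's polling.yaml. A hedged example of the file format (the source name and the 30 s interval are illustrative, not read from this host):

    sources:
      - name: compute_pollsters
        interval: 30
        meters:
          - cpu
          - memory.usage
          - network.incoming.bytes
          - network.incoming.bytes.delta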
Dec 05 02:18:39 compute-0 ceph-mon[192914]: pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:41 compute-0 podman[460804]: 2025-12-05 02:18:41.70142099 +0000 UTC m=+0.101311007 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:18:41 compute-0 podman[460802]: 2025-12-05 02:18:41.713783387 +0000 UTC m=+0.123229262 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec 05 02:18:41 compute-0 ceph-mon[192914]: pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:41 compute-0 podman[460803]: 2025-12-05 02:18:41.71850834 +0000 UTC m=+0.130505897 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
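Each health_status=healthy line above is podman executing the container's configured healthcheck (the /openstack/healthcheck script mounted at /openstack). The same check can be driven by hand; note the inspect field path may vary slightly across podman versions:

    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run ceilometer_agent_compute && echo healthy

    # read the stored health state
    podman inspect --format '{{.State.Health.Status}}' ceilometer_agent_compute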
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:18:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.475 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.518 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.518 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.519 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.520 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:18:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:43 compute-0 nova_compute[349548]: 2025-12-05 02:18:43.341 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:43 compute-0 ceph-mon[192914]: pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:18:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:18:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:18:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.522 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
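The network_info payload above is plain JSON once sliced out of the log line, so the fields that matter when debugging (port ID, MAC, bridge, fixed IPs) are easy to pull. A sketch, where log_payload stands for the bracketed list logged above:

    import json

    network_info = json.loads(log_payload)
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], vif['details']['bridge_name'], ips)
    # -> 706f9405-4061-481e-a252-9b14f4534a4e fa:16:3e:cf:10:bc br-int ['10.100.0.151']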
Dec 05 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.549 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.550 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.551 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:45 compute-0 ceph-mon[192914]: pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:18:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
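That skip is the default behaviour: reclaim_instance_interval defaults to 0, which disables deferred delete, so _reclaim_queued_deletes has nothing to reclaim. Enabling it is a nova.conf setting on the compute node:

    [DEFAULT]
    # seconds a soft-deleted instance is held before the periodic task purges it;
    # <= 0 (the default) makes _reclaim_queued_deletes skip, as logged above
    reclaim_instance_interval = 3600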
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:18:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:47 compute-0 ceph-mon[192914]: pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.339 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.367 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:18:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230553076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.830 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
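The RBD-backed libvirt driver sizes its storage by shelling out to ceph df, exactly as the two processutils lines show. The probe is easy to reproduce by hand with the same client identity; the JSON carries cluster totals under a top-level "stats" key:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])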
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.931 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.932 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.938 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.939 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.461 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.545 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.546 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.546 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.547 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
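The final view is simply the reserved amounts plus the two running guests: used_ram 768 MB = 512 MB host-reserved + 2 × 128 MB instances, used_disk 2 GB = 2 × 1 GB root disks, and used_vcpus 2 = 2 × 1 vCPU, matching the placement allocations logged a few lines earlier.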
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.561 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.585 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.586 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
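Placement turns that inventory into schedulable capacity as (total - reserved) × allocation_ratio, so this provider advertises (8 - 0) × 4.0 = 32 VCPU, (7680 - 512) × 1.0 = 7168 MEMORY_MB, and (59 - 1) × 0.9 = 52.2 DISK_GB.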
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.599 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.618 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.681 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:18:49 compute-0 ceph-mon[192914]: pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3230553076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:18:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:18:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375728888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.220 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.237 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.283 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.288 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.289 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:18:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3375728888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:18:51 compute-0 ceph-mon[192914]: pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:52 compute-0 nova_compute[349548]: 2025-12-05 02:18:52.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:18:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.371 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:53 compute-0 ceph-mon[192914]: pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:54 compute-0 podman[460902]: 2025-12-05 02:18:54.70627764 +0000 UTC m=+0.105551765 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:18:54 compute-0 podman[460904]: 2025-12-05 02:18:54.712399912 +0000 UTC m=+0.095235826 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Dec 05 02:18:54 compute-0 podman[460901]: 2025-12-05 02:18:54.741651314 +0000 UTC m=+0.146258289 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:18:54 compute-0 podman[460903]: 2025-12-05 02:18:54.773781476 +0000 UTC m=+0.167563657 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:18:55 compute-0 ceph-mon[192914]: pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.216 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.217 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:18:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:57 compute-0 nova_compute[349548]: 2025-12-05 02:18:57.484 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:57 compute-0 ceph-mon[192914]: pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:18:58 compute-0 nova_compute[349548]: 2025-12-05 02:18:58.373 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:18:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:18:59 compute-0 podman[158197]: time="2025-12-05T02:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Dec 05 02:18:59 compute-0 ceph-mon[192914]: pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:19:01 compute-0 ceph-mon[192914]: pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:02 compute-0 nova_compute[349548]: 2025-12-05 02:19:02.487 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:03 compute-0 nova_compute[349548]: 2025-12-05 02:19:03.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:03 compute-0 ceph-mon[192914]: pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:05 compute-0 ceph-mon[192914]: pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:07 compute-0 nova_compute[349548]: 2025-12-05 02:19:07.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:07 compute-0 podman[460982]: 2025-12-05 02:19:07.746312088 +0000 UTC m=+0.138430309 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:19:07 compute-0 podman[460981]: 2025-12-05 02:19:07.766810864 +0000 UTC m=+0.168259727 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 02:19:07 compute-0 ceph-mon[192914]: pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:08 compute-0 nova_compute[349548]: 2025-12-05 02:19:08.380 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:09 compute-0 ceph-mon[192914]: pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:11 compute-0 ceph-mon[192914]: pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:12 compute-0 nova_compute[349548]: 2025-12-05 02:19:12.493 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:12 compute-0 podman[461020]: 2025-12-05 02:19:12.72547232 +0000 UTC m=+0.118914550 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 02:19:12 compute-0 podman[461019]: 2025-12-05 02:19:12.72973682 +0000 UTC m=+0.130246339 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:19:12 compute-0 podman[461021]: 2025-12-05 02:19:12.765038032 +0000 UTC m=+0.147671729 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:19:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:13 compute-0 nova_compute[349548]: 2025-12-05 02:19:13.384 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:13 compute-0 ceph-mon[192914]: pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:15 compute-0 ceph-mon[192914]: pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:19:16
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta']
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:19:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:17 compute-0 nova_compute[349548]: 2025-12-05 02:19:17.497 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:17 compute-0 sudo[461071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:17 compute-0 sudo[461071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:17 compute-0 sudo[461071]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:17 compute-0 sudo[461096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:19:17 compute-0 sudo[461096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:17 compute-0 sudo[461096]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:17 compute-0 ceph-mon[192914]: pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:18 compute-0 sudo[461121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:18 compute-0 sudo[461121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:18 compute-0 sudo[461121]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:19:18 compute-0 sudo[461146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:19:18 compute-0 sudo[461146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:18 compute-0 nova_compute[349548]: 2025-12-05 02:19:18.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:18 compute-0 sudo[461146]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 024cbb5f-47fa-4486-8e2c-c319f394afb2 does not exist
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8387e1ff-adde-4910-a410-080e0539f38f does not exist
Dec 05 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev eaf16309-2a74-4d45-b72b-3aefc30ded0e does not exist
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:19:19 compute-0 sudo[461200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:19 compute-0 sudo[461200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:19 compute-0 sudo[461200]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:19 compute-0 sudo[461225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:19:19 compute-0 sudo[461225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:19 compute-0 sudo[461225]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:19 compute-0 sudo[461250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:19 compute-0 sudo[461250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:19 compute-0 sudo[461250]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:19 compute-0 sudo[461275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:19:19 compute-0 sudo[461275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:20 compute-0 ceph-mon[192914]: pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.24174487 +0000 UTC m=+0.122353987 container create 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.198849495 +0000 UTC m=+0.079458692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:20 compute-0 systemd[1]: Started libpod-conmon-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope.
Dec 05 02:19:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.410543721 +0000 UTC m=+0.291152898 container init 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.423620838 +0000 UTC m=+0.304229965 container start 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:19:20 compute-0 epic_borg[461354]: 167 167
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.436113239 +0000 UTC m=+0.316722376 container attach 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:19:20 compute-0 systemd[1]: libpod-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope: Deactivated successfully.
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.438610489 +0000 UTC m=+0.319219636 container died 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1159187bb82294864e58c3b15b9bdd96e55ce2e5c842fa7bf7a027dacdd9cce9-merged.mount: Deactivated successfully.
Dec 05 02:19:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.509009736 +0000 UTC m=+0.389618853 container remove 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:19:20 compute-0 systemd[1]: libpod-conmon-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope: Deactivated successfully.
Dec 05 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.78655268 +0000 UTC m=+0.096099169 container create 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.748170483 +0000 UTC m=+0.057717022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:20 compute-0 systemd[1]: Started libpod-conmon-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope.
Dec 05 02:19:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.961415522 +0000 UTC m=+0.270962051 container init 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.984468379 +0000 UTC m=+0.294014838 container start 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.989421648 +0000 UTC m=+0.298968177 container attach 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:19:22 compute-0 ceph-mon[192914]: pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:22 compute-0 intelligent_easley[461397]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:19:22 compute-0 intelligent_easley[461397]: --> relative data size: 1.0
Dec 05 02:19:22 compute-0 intelligent_easley[461397]: --> All data devices are unavailable
Dec 05 02:19:22 compute-0 systemd[1]: libpod-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Deactivated successfully.
Dec 05 02:19:22 compute-0 systemd[1]: libpod-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Consumed 1.239s CPU time.
Dec 05 02:19:22 compute-0 podman[461381]: 2025-12-05 02:19:22.291363684 +0000 UTC m=+1.600910133 container died 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e-merged.mount: Deactivated successfully.
Dec 05 02:19:22 compute-0 podman[461381]: 2025-12-05 02:19:22.362715918 +0000 UTC m=+1.672262397 container remove 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 02:19:22 compute-0 systemd[1]: libpod-conmon-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Deactivated successfully.
Dec 05 02:19:22 compute-0 sudo[461275]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:22 compute-0 nova_compute[349548]: 2025-12-05 02:19:22.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:22 compute-0 sudo[461436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:22 compute-0 sudo[461436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:22 compute-0 sudo[461436]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:22 compute-0 sudo[461461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:19:22 compute-0 sudo[461461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:22 compute-0 sudo[461461]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:22 compute-0 sudo[461486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:22 compute-0 sudo[461486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:22 compute-0 sudo[461486]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:22 compute-0 sudo[461511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:19:22 compute-0 sudo[461511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:23 compute-0 nova_compute[349548]: 2025-12-05 02:19:23.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.531091923 +0000 UTC m=+0.081664524 container create d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.509353683 +0000 UTC m=+0.059926314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:23 compute-0 systemd[1]: Started libpod-conmon-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope.
Dec 05 02:19:23 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.688471993 +0000 UTC m=+0.239044605 container init d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.704668178 +0000 UTC m=+0.255240779 container start d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.709822933 +0000 UTC m=+0.260395584 container attach d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:19:23 compute-0 clever_hamilton[461590]: 167 167
Dec 05 02:19:23 compute-0 systemd[1]: libpod-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope: Deactivated successfully.
Dec 05 02:19:23 compute-0 conmon[461590]: conmon d042dd5fb061a7621b74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope/container/memory.events
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.718472476 +0000 UTC m=+0.269045077 container died d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7740b3bd1f2708578514e3c9e0ebb068b624f9095e379a33db8697c1138ef150-merged.mount: Deactivated successfully.
Dec 05 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.78198702 +0000 UTC m=+0.332559621 container remove d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 02:19:23 compute-0 systemd[1]: libpod-conmon-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope: Deactivated successfully.
Dec 05 02:19:24 compute-0 ceph-mon[192914]: pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.052487647 +0000 UTC m=+0.079342829 container create 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.015700124 +0000 UTC m=+0.042555386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:24 compute-0 systemd[1]: Started libpod-conmon-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope.
Dec 05 02:19:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.247074172 +0000 UTC m=+0.273929384 container init 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 05 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.263217096 +0000 UTC m=+0.290072308 container start 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.287275441 +0000 UTC m=+0.314130653 container attach 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:19:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]: {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     "0": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "devices": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "/dev/loop3"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             ],
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_name": "ceph_lv0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_size": "21470642176",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "name": "ceph_lv0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "tags": {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_name": "ceph",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.crush_device_class": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.encrypted": "0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_id": "0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.vdo": "0"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             },
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "vg_name": "ceph_vg0"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         }
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     ],
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     "1": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "devices": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "/dev/loop4"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             ],
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_name": "ceph_lv1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_size": "21470642176",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "name": "ceph_lv1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "tags": {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_name": "ceph",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.crush_device_class": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.encrypted": "0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_id": "1",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.vdo": "0"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             },
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "vg_name": "ceph_vg1"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         }
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     ],
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     "2": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "devices": [
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "/dev/loop5"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             ],
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_name": "ceph_lv2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_size": "21470642176",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "name": "ceph_lv2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "tags": {
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.cluster_name": "ceph",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.crush_device_class": "",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.encrypted": "0",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osd_id": "2",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:                 "ceph.vdo": "0"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             },
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "type": "block",
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:             "vg_name": "ceph_vg2"
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:         }
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]:     ]
Dec 05 02:19:25 compute-0 flamboyant_moore[461628]: }
Dec 05 02:19:25 compute-0 systemd[1]: libpod-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope: Deactivated successfully.
Dec 05 02:19:25 compute-0 podman[461612]: 2025-12-05 02:19:25.185711924 +0000 UTC m=+1.212567116 container died 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18-merged.mount: Deactivated successfully.
Dec 05 02:19:25 compute-0 podman[461612]: 2025-12-05 02:19:25.284510809 +0000 UTC m=+1.311365991 container remove 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:19:25 compute-0 systemd[1]: libpod-conmon-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope: Deactivated successfully.
Dec 05 02:19:25 compute-0 sudo[461511]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:25 compute-0 podman[461648]: 2025-12-05 02:19:25.354814903 +0000 UTC m=+0.110434142 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, config_id=edpm)
Dec 05 02:19:25 compute-0 podman[461646]: 2025-12-05 02:19:25.359973418 +0000 UTC m=+0.130357222 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:19:25 compute-0 podman[461639]: 2025-12-05 02:19:25.376356148 +0000 UTC m=+0.130075754 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:19:25 compute-0 podman[461647]: 2025-12-05 02:19:25.383801697 +0000 UTC m=+0.145946200 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec 05 02:19:25 compute-0 sudo[461715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:25 compute-0 sudo[461715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:25 compute-0 sudo[461715]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:25 compute-0 sudo[461756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:19:25 compute-0 sudo[461756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:25 compute-0 sudo[461756]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:25 compute-0 sudo[461781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:25 compute-0 sudo[461781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:25 compute-0 sudo[461781]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:25 compute-0 sudo[461806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:19:25 compute-0 sudo[461806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:26 compute-0 ceph-mon[192914]: pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.304478345 +0000 UTC m=+0.086707526 container create 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.273842785 +0000 UTC m=+0.056071976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:26 compute-0 systemd[1]: Started libpod-conmon-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope.
Dec 05 02:19:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.456994279 +0000 UTC m=+0.239223530 container init 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.476274571 +0000 UTC m=+0.258503722 container start 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.481442876 +0000 UTC m=+0.263672067 container attach 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec 05 02:19:26 compute-0 festive_banach[461885]: 167 167
Dec 05 02:19:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:26 compute-0 systemd[1]: libpod-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope: Deactivated successfully.
Dec 05 02:19:26 compute-0 conmon[461885]: conmon 7c38b0fe964def9e3c96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope/container/memory.events
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.493160425 +0000 UTC m=+0.275389656 container died 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1206816c622bf730949c2ef77af6ac171ffdbac9722e34b78b1a5ecedd73f227-merged.mount: Deactivated successfully.
Dec 05 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.57667875 +0000 UTC m=+0.358907921 container remove 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:19:26 compute-0 systemd[1]: libpod-conmon-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope: Deactivated successfully.
Dec 05 02:19:26 compute-0 podman[461907]: 2025-12-05 02:19:26.815019434 +0000 UTC m=+0.055218131 container create fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:19:26 compute-0 systemd[1]: Started libpod-conmon-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope.
Dec 05 02:19:26 compute-0 podman[461907]: 2025-12-05 02:19:26.79171477 +0000 UTC m=+0.031913497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:19:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.030376453 +0000 UTC m=+0.270575170 container init fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.048246665 +0000 UTC m=+0.288445352 container start fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.053437511 +0000 UTC m=+0.293636208 container attach fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:19:27 compute-0 nova_compute[349548]: 2025-12-05 02:19:27.501 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:28 compute-0 ceph-mon[192914]: pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:28 compute-0 bold_lalande[461923]: {
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_id": 0,
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "type": "bluestore"
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     },
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_id": 1,
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "type": "bluestore"
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     },
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_id": 2,
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:19:28 compute-0 bold_lalande[461923]:         "type": "bluestore"
Dec 05 02:19:28 compute-0 bold_lalande[461923]:     }
Dec 05 02:19:28 compute-0 bold_lalande[461923]: }
Dec 05 02:19:28 compute-0 systemd[1]: libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Deactivated successfully.
Dec 05 02:19:28 compute-0 systemd[1]: libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Consumed 1.125s CPU time.
Dec 05 02:19:28 compute-0 conmon[461923]: conmon fc52739f843ab7650caa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope/container/memory.events
Dec 05 02:19:28 compute-0 podman[461907]: 2025-12-05 02:19:28.175482793 +0000 UTC m=+1.415681480 container died fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476-merged.mount: Deactivated successfully.
Dec 05 02:19:28 compute-0 podman[461907]: 2025-12-05 02:19:28.24799299 +0000 UTC m=+1.488191697 container remove fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:19:28 compute-0 systemd[1]: libpod-conmon-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Deactivated successfully.
Dec 05 02:19:28 compute-0 sudo[461806]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:19:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:19:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d383363a-c9e9-4365-9f2a-cce2440de5b1 does not exist
Dec 05 02:19:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dc2edbbd-150d-4fb1-946b-d10bba2b1ced does not exist
Dec 05 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:28 compute-0 nova_compute[349548]: 2025-12-05 02:19:28.393 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:28 compute-0 sudo[461970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:19:28 compute-0 sudo[461970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:28 compute-0 sudo[461970]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:28 compute-0 sudo[461995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:19:28 compute-0 sudo[461995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:19:28 compute-0 sudo[461995]: pam_unix(sudo:session): session closed for user root
Dec 05 02:19:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:19:29 compute-0 ceph-mon[192914]: pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:29 compute-0 podman[158197]: time="2025-12-05T02:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
Dec 05 02:19:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:19:31 compute-0 ceph-mon[192914]: pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:32 compute-0 nova_compute[349548]: 2025-12-05 02:19:32.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:33 compute-0 nova_compute[349548]: 2025-12-05 02:19:33.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:33 compute-0 ceph-mon[192914]: pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:35 compute-0 ceph-mon[192914]: pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:37 compute-0 nova_compute[349548]: 2025-12-05 02:19:37.511 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:37 compute-0 ceph-mon[192914]: pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:38 compute-0 nova_compute[349548]: 2025-12-05 02:19:38.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:38 compute-0 podman[462021]: 2025-12-05 02:19:38.688405256 +0000 UTC m=+0.098153817 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:19:38 compute-0 podman[462020]: 2025-12-05 02:19:38.7166804 +0000 UTC m=+0.126814032 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 05 02:19:39 compute-0 ceph-mon[192914]: pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:41 compute-0 ceph-mon[192914]: pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.314 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.315 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.316 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:19:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:43 compute-0 nova_compute[349548]: 2025-12-05 02:19:43.406 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:43 compute-0 ceph-mon[192914]: pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:43 compute-0 podman[462060]: 2025-12-05 02:19:43.689066133 +0000 UTC m=+0.098902569 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:19:43 compute-0 podman[462062]: 2025-12-05 02:19:43.710602117 +0000 UTC m=+0.097599232 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:19:43 compute-0 podman[462061]: 2025-12-05 02:19:43.714390164 +0000 UTC m=+0.118119609 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, release=1214.1726694543, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.369 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.393 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.394 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:19:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:19:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:19:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:19:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:19:45 compute-0 ceph-mon[192914]: pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:19:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:19:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:47 compute-0 ceph-mon[192914]: pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.684218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187684288, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 251, "total_data_size": 2126585, "memory_usage": 2161584, "flush_reason": "Manual Compaction"}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187701688, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2084484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42637, "largest_seqno": 43983, "table_properties": {"data_size": 2078037, "index_size": 3650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13268, "raw_average_key_size": 19, "raw_value_size": 2065254, "raw_average_value_size": 3091, "num_data_blocks": 164, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901049, "oldest_key_time": 1764901049, "file_creation_time": 1764901187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17573 microseconds, and 9519 cpu microseconds.
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.701797) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2084484 bytes OK
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.701820) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705156) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705179) EVENT_LOG_v1 {"time_micros": 1764901187705172, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705200) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2120577, prev total WAL file size 2120577, number of live WAL files 2.
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.706735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2035KB)], [101(8827KB)]
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187706828, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11124132, "oldest_snapshot_seqno": -1}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5921 keys, 9408859 bytes, temperature: kUnknown
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187777170, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9408859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9368914, "index_size": 24027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154038, "raw_average_key_size": 26, "raw_value_size": 9261461, "raw_average_value_size": 1564, "num_data_blocks": 958, "num_entries": 5921, "num_filter_entries": 5921, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.777422) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9408859 bytes
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.780266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 133.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.6 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 6435, records dropped: 514 output_compression: NoCompression
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.780295) EVENT_LOG_v1 {"time_micros": 1764901187780282, "job": 60, "event": "compaction_finished", "compaction_time_micros": 70413, "compaction_time_cpu_micros": 42020, "output_level": 6, "num_output_files": 1, "total_output_size": 9408859, "num_input_records": 6435, "num_output_records": 5921, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187781104, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187784003, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.706451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.107 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.110 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.111 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:19:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:19:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496427799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.596 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.700 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:19:49 compute-0 ceph-mon[192914]: pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1496427799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.700 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.715 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.716 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.374 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.376 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.376 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.377 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.465 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.466 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.466 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.467 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:19:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.535 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:19:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:19:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049850232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.087 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.099 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.123 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.127 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.127 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:19:51 compute-0 ceph-mon[192914]: pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3049850232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:19:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:52 compute-0 nova_compute[349548]: 2025-12-05 02:19:52.521 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.149 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.150 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:19:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:53 compute-0 ceph-mon[192914]: pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:55 compute-0 podman[462157]: 2025-12-05 02:19:55.708745863 +0000 UTC m=+0.109180077 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 05 02:19:55 compute-0 ceph-mon[192914]: pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:55 compute-0 podman[462158]: 2025-12-05 02:19:55.747506692 +0000 UTC m=+0.135325522 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:19:55 compute-0 podman[462160]: 2025-12-05 02:19:55.755804865 +0000 UTC m=+0.133689226 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 05 02:19:55 compute-0 podman[462159]: 2025-12-05 02:19:55.781000623 +0000 UTC m=+0.165516410 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.219 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
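
The Acquiring/acquired/released triplet above is the standard oslo.concurrency trace around a synchronized section; the waited/held durations come from the lockutils wrapper itself, not from neutron code. A minimal sketch of the producing pattern, assuming the oslo_concurrency package (the real body lives in neutron.agent.linux.external_process.ProcessMonitor):

    from oslo_concurrency import lockutils

    # The decorator takes the same in-process lock name that appears in
    # the log; oslo logs "Acquiring", "acquired ... waited Ns" and
    # "released ... held Ns" around each call.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # runs with the lock held
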
Dec 05 02:19:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:57 compute-0 nova_compute[349548]: 2025-12-05 02:19:57.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:57 compute-0 ceph-mon[192914]: pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
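
ceph-mgr (via log_channel) and ceph-mon repeat this pgmap digest every couple of seconds for the rest of the section. The fields are the pgmap epoch, PG count with a state histogram, logical data stored, raw space used, and available/total raw capacity. A small parser built from these lines:

    import re

    # e.g. "pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data,
    #       388 MiB used, 60 GiB / 60 GiB avail"
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    m = PGMAP.search("pgmap v2128: 321 pgs: 321 active+clean; "
                     "236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail")
    assert m and m["states"] == "321 active+clean"
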
Dec 05 02:19:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:19:58 compute-0 nova_compute[349548]: 2025-12-05 02:19:58.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:19:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:59 compute-0 podman[158197]: time="2025-12-05T02:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:19:59 compute-0 ceph-mon[192914]: pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
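
The two GET lines are a Go client (per the Go-http-client User-Agent) polling the libpod REST API over the podman socket: containers/json for the container list, containers/stats for a one-shot stats sample. A stdlib-only sketch of the same request; /run/podman/podman.sock is the rootful default socket, the same one podman_exporter mounts below:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a UNIX socket, enough for the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")

The 200 responses above (43813 bytes) are the JSON array this decodes.
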
Dec 05 02:20:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
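
The exporter errors above are control-socket lookups: ovs/ovn daemons create <name>.<pid>.ctl sockets in their run directories, and the exporter drives them the way ovs-appctl/ovn-appctl would. ovn-northd is a control-plane daemon and does not run on a compute node, so those lookups can never succeed here; the ovsdb-server and dpif-netdev failures suggest the exporter's mounted run directories do not contain the sockets it expects (the dpif-netdev/* calls in particular only answer for a userspace/DPDK datapath). A quick check of what is actually reachable, using the same run directories the exporter mounts:

    import glob

    # ovs-vswitchd/ovsdb-server sockets live under /run/openvswitch,
    # ovn-controller's under /run/ovn; nothing for ovn-northd is
    # expected on this host.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")
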
Dec 05 02:20:01 compute-0 ceph-mon[192914]: pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:02 compute-0 nova_compute[349548]: 2025-12-05 02:20:02.528 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:03 compute-0 nova_compute[349548]: 2025-12-05 02:20:03.419 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:03 compute-0 ceph-mon[192914]: pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:05 compute-0 ceph-mon[192914]: pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:07 compute-0 nova_compute[349548]: 2025-12-05 02:20:07.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:07 compute-0 ceph-mon[192914]: pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:08 compute-0 nova_compute[349548]: 2025-12-05 02:20:08.423 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:09 compute-0 podman[462240]: 2025-12-05 02:20:09.692212168 +0000 UTC m=+0.093428385 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:20:09 compute-0 podman[462241]: 2025-12-05 02:20:09.733785156 +0000 UTC m=+0.129475198 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:20:09 compute-0 ceph-mon[192914]: pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:11 compute-0 ceph-mon[192914]: pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:12 compute-0 nova_compute[349548]: 2025-12-05 02:20:12.535 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:13 compute-0 nova_compute[349548]: 2025-12-05 02:20:13.425 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:13 compute-0 ceph-mon[192914]: pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:14 compute-0 podman[462280]: 2025-12-05 02:20:14.736824139 +0000 UTC m=+0.137693949 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:20:14 compute-0 podman[462282]: 2025-12-05 02:20:14.74009124 +0000 UTC m=+0.134864729 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:20:14 compute-0 podman[462281]: 2025-12-05 02:20:14.754925837 +0000 UTC m=+0.152355730 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container)
Dec 05 02:20:15 compute-0 ceph-mon[192914]: pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:20:16
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups']
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
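
The balancer block above is one scheduled pass: in upmap mode it walks the eleven pools and prepares at most 10 changes per plan, so "prepared 0/10 changes" means there is nothing to move, consistent with all 321 PGs being active+clean. A one-liner to see the same state interactively, assuming the ceph CLI and an admin keyring:

    import subprocess

    # Prints the balancer's mode, activity and last optimize result.
    print(subprocess.run(["ceph", "balancer", "status"],
                         check=True, capture_output=True, text=True).stdout)
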
Dec 05 02:20:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:17 compute-0 nova_compute[349548]: 2025-12-05 02:20:17.540 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:17 compute-0 ceph-mon[192914]: pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:20:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:18 compute-0 nova_compute[349548]: 2025-12-05 02:20:18.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:19 compute-0 ceph-mon[192914]: pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:21 compute-0 ceph-mon[192914]: pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:22 compute-0 nova_compute[349548]: 2025-12-05 02:20:22.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:23 compute-0 nova_compute[349548]: 2025-12-05 02:20:23.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:23 compute-0 ceph-mon[192914]: pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:25 compute-0 ceph-mon[192914]: pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:26 compute-0 podman[462337]: 2025-12-05 02:20:26.716167026 +0000 UTC m=+0.117533112 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:20:26 compute-0 podman[462339]: 2025-12-05 02:20:26.721601628 +0000 UTC m=+0.102596112 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41)
Dec 05 02:20:26 compute-0 podman[462336]: 2025-12-05 02:20:26.742051853 +0000 UTC m=+0.146866746 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 05 02:20:26 compute-0 podman[462338]: 2025-12-05 02:20:26.781383567 +0000 UTC m=+0.172072134 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
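
Each pg_autoscaler pair above is simple arithmetic over the 64411926528-byte (60 GiB) raw capacity printed in the effective_target_ratio lines: pg target = capacity_ratio x bias x (OSD count x mon_target_pg_per_osd). With the three OSDs behind this cluster (the three ceph_lv* devices further down) and the default 100 PGs per OSD, that is ratio x 300; e.g. 'vms': 0.0015181009677997005 x 300 = 0.45543..., exactly the logged target. The target is then rounded to a power of two and floored at the pool's pg_num_min, and a change is applied only when the ideal differs from the current pg_num by more than the 3x threshold, which is why every pool stays where it is. A sketch reproducing the logged numbers; the rounding rule and the per-pool minimums (32 by default, 16 for the cephfs metadata pool, 1 for .mgr) are inferences from these lines and the reef autoscaler defaults:

    import math

    def quantize(raw, pg_num_min):
        """Nearest power of two, floored at the pool's pg_num_min."""
        if raw < 1:
            return max(pg_num_min, 1)
        lo = 1 << int(math.floor(math.log2(raw)))
        hi = lo * 2
        return max(pg_num_min, lo if raw - lo <= hi - raw else hi)

    RATE = 3 * 100  # 3 OSDs x mon_target_pg_per_osd (default 100)
    print(0.0015181009677997005 * 1.0 * RATE)             # ~0.4554 ('vms')
    print(quantize(0.0015181009677997005 * RATE, 32))     # 32, as logged
    print(quantize(7.185749983720779e-06 * RATE, 1))      # 1   ('.mgr')
    print(quantize(5.087256625643029e-07 * 4.0 * RATE, 16))  # 16 (cephfs meta)
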
Dec 05 02:20:27 compute-0 nova_compute[349548]: 2025-12-05 02:20:27.546 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:27 compute-0 ceph-mon[192914]: pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:28 compute-0 nova_compute[349548]: 2025-12-05 02:20:28.434 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:28 compute-0 sudo[462418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:28 compute-0 sudo[462418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:28 compute-0 sudo[462418]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:28 compute-0 sudo[462443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:20:28 compute-0 sudo[462443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:28 compute-0 sudo[462443]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:29 compute-0 sudo[462468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:29 compute-0 sudo[462468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:29 compute-0 sudo[462468]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:29 compute-0 sudo[462493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:20:29 compute-0 sudo[462493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
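
The sudo bursts starting at 02:20:28 are the cephadm orchestrator's SSH loop as ceph-admin: /bin/true as a connectivity probe, which python3 to pick an interpreter, then the copied cephadm binary with a subcommand (gather-facts collects host inventory). The same facts call can be made directly, assuming cephadm is on PATH:

    import json
    import subprocess

    # gather-facts prints one JSON object of host facts.
    facts = json.loads(subprocess.run(
        ["cephadm", "gather-facts"],
        check=True, capture_output=True, text=True).stdout)
    print(facts.get("hostname"), facts.get("memory_total_kb"))
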
Dec 05 02:20:29 compute-0 podman[158197]: time="2025-12-05T02:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 05 02:20:29 compute-0 ceph-mon[192914]: pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:29 compute-0 sudo[462493]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d78192db-4b9a-4b0d-b611-8379a2a3ec9a does not exist
Dec 05 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9162fe8d-8ae2-49ea-8334-c9b251a7a41a does not exist
Dec 05 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ef73b983-9d66-4211-8018-5d968d1f552d does not exist
Dec 05 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
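
The handle_command/audit pairs above are the mgr (its cephadm module, entity mgr.compute-0.afshmv) driving the mon with JSON-encoded commands before it attempts OSD deployment. The same interface is scriptable through librados; a minimal sketch assuming python3-rados and a readable admin keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same command the mgr dispatches above; returns (retcode, out, errs).
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(out.decode())
    cluster.shutdown()
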
Dec 05 02:20:30 compute-0 sudo[462548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:30 compute-0 sudo[462548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:30 compute-0 sudo[462548]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:30 compute-0 sudo[462573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:20:30 compute-0 sudo[462573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:30 compute-0 sudo[462573]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:30 compute-0 sudo[462598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:30 compute-0 sudo[462598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:30 compute-0 sudo[462598]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:30 compute-0 sudo[462623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:20:30 compute-0 sudo[462623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.085823391 +0000 UTC m=+0.076265513 container create e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.052588517 +0000 UTC m=+0.043030689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:31 compute-0 systemd[1]: Started libpod-conmon-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope.
Dec 05 02:20:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.294032258 +0000 UTC m=+0.284474410 container init e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.315747038 +0000 UTC m=+0.306189140 container start e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.32079473 +0000 UTC m=+0.311236832 container attach e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:20:31 compute-0 quirky_lehmann[462701]: 167 167
Dec 05 02:20:31 compute-0 systemd[1]: libpod-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope: Deactivated successfully.
Dec 05 02:20:31 compute-0 conmon[462701]: conmon e6e0646d7b34835984c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope/container/memory.events
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.332966692 +0000 UTC m=+0.323408784 container died e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5a0b7623a66856cf8003bb224e77346c8a1e3e12e2032103147363361245318-merged.mount: Deactivated successfully.
Dec 05 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.395700224 +0000 UTC m=+0.386142316 container remove e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:20:31 compute-0 systemd[1]: libpod-conmon-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope: Deactivated successfully.
Dec 05 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.69822007 +0000 UTC m=+0.087341704 container create c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.662324132 +0000 UTC m=+0.051445816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:31 compute-0 systemd[1]: Started libpod-conmon-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope.
Dec 05 02:20:31 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.886227661 +0000 UTC m=+0.275349285 container init c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.914610628 +0000 UTC m=+0.303732232 container start c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.919189286 +0000 UTC m=+0.308310890 container attach c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:20:32 compute-0 ceph-mon[192914]: pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:32 compute-0 nova_compute[349548]: 2025-12-05 02:20:32.549 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:33 compute-0 dreamy_hertz[462739]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:20:33 compute-0 dreamy_hertz[462739]: --> relative data size: 1.0
Dec 05 02:20:33 compute-0 dreamy_hertz[462739]: --> All data devices are unavailable
Dec 05 02:20:33 compute-0 systemd[1]: libpod-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Deactivated successfully.
Dec 05 02:20:33 compute-0 systemd[1]: libpod-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Consumed 1.146s CPU time.
Dec 05 02:20:33 compute-0 podman[462723]: 2025-12-05 02:20:33.116006669 +0000 UTC m=+1.505128323 container died c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0-merged.mount: Deactivated successfully.
Dec 05 02:20:33 compute-0 podman[462723]: 2025-12-05 02:20:33.207447807 +0000 UTC m=+1.596569421 container remove c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Dec 05 02:20:33 compute-0 systemd[1]: libpod-conmon-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Deactivated successfully.
Dec 05 02:20:33 compute-0 sudo[462623]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.354748) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233354794, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 250, "total_data_size": 663003, "memory_usage": 674784, "flush_reason": "Manual Compaction"}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233361396, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 433384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43984, "largest_seqno": 44576, "table_properties": {"data_size": 430566, "index_size": 790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7550, "raw_average_key_size": 20, "raw_value_size": 424717, "raw_average_value_size": 1150, "num_data_blocks": 36, "num_entries": 369, "num_filter_entries": 369, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901188, "oldest_key_time": 1764901188, "file_creation_time": 1764901233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 7030 microseconds, and 2443 cpu microseconds.
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.361773) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 433384 bytes OK
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.361795) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365126) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365152) EVENT_LOG_v1 {"time_micros": 1764901233365144, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 659754, prev total WAL file size 659754, number of live WAL files 2.
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.366380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(423KB)], [104(9188KB)]
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233366436, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9842243, "oldest_snapshot_seqno": -1}
Dec 05 02:20:33 compute-0 sudo[462778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:33 compute-0 sudo[462778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:33 compute-0 sudo[462778]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5797 keys, 6776292 bytes, temperature: kUnknown
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233422937, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6776292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6741445, "index_size": 19249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 151643, "raw_average_key_size": 26, "raw_value_size": 6640378, "raw_average_value_size": 1145, "num_data_blocks": 760, "num_entries": 5797, "num_filter_entries": 5797, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.423617) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6776292 bytes
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.426105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.0 rd, 119.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.0 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(38.3) write-amplify(15.6) OK, records in: 6290, records dropped: 493 output_compression: NoCompression
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.426128) EVENT_LOG_v1 {"time_micros": 1764901233426118, "job": 62, "event": "compaction_finished", "compaction_time_micros": 56571, "compaction_time_cpu_micros": 31189, "output_level": 6, "num_output_files": 1, "total_output_size": 6776292, "num_input_records": 6290, "num_output_records": 5797, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233426690, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233428737, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.366106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:20:33 compute-0 nova_compute[349548]: 2025-12-05 02:20:33.436 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:33 compute-0 sudo[462803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:20:33 compute-0 sudo[462803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:33 compute-0 sudo[462803]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:33 compute-0 sudo[462828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:33 compute-0 sudo[462828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:33 compute-0 sudo[462828]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:33 compute-0 sudo[462853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:20:33 compute-0 sudo[462853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:34 compute-0 ceph-mon[192914]: pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.425384354 +0000 UTC m=+0.096297265 container create 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.391599405 +0000 UTC m=+0.062512366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:34 compute-0 systemd[1]: Started libpod-conmon-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope.
Dec 05 02:20:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.558523924 +0000 UTC m=+0.229436825 container init 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.568812663 +0000 UTC m=+0.239725544 container start 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:20:34 compute-0 condescending_dhawan[462931]: 167 167
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.573431322 +0000 UTC m=+0.244344203 container attach 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:20:34 compute-0 systemd[1]: libpod-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope: Deactivated successfully.
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.576122928 +0000 UTC m=+0.247035809 container died 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8c01d3b3a6f0cfd6fb4bf8d21981b15a4ffa90cb66f2855d791992cab23ea5-merged.mount: Deactivated successfully.
Dec 05 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.628678784 +0000 UTC m=+0.299591665 container remove 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 02:20:34 compute-0 systemd[1]: libpod-conmon-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope: Deactivated successfully.
Dec 05 02:20:34 compute-0 podman[462954]: 2025-12-05 02:20:34.872169183 +0000 UTC m=+0.086644795 container create b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:20:34 compute-0 podman[462954]: 2025-12-05 02:20:34.834364501 +0000 UTC m=+0.048840163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:34 compute-0 systemd[1]: Started libpod-conmon-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope.
Dec 05 02:20:34 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.012412702 +0000 UTC m=+0.226888364 container init b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.036189509 +0000 UTC m=+0.250665121 container start b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 05 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.042418154 +0000 UTC m=+0.256893826 container attach b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]: {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     "0": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "devices": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "/dev/loop3"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             ],
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_name": "ceph_lv0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_size": "21470642176",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "name": "ceph_lv0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "tags": {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_name": "ceph",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.crush_device_class": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.encrypted": "0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_id": "0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.vdo": "0"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             },
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "vg_name": "ceph_vg0"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         }
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     ],
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     "1": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "devices": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "/dev/loop4"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             ],
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_name": "ceph_lv1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_size": "21470642176",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "name": "ceph_lv1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "tags": {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_name": "ceph",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.crush_device_class": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.encrypted": "0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_id": "1",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.vdo": "0"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             },
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "vg_name": "ceph_vg1"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         }
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     ],
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     "2": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "devices": [
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "/dev/loop5"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             ],
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_name": "ceph_lv2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_size": "21470642176",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "name": "ceph_lv2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "tags": {
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.cluster_name": "ceph",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.crush_device_class": "",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.encrypted": "0",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osd_id": "2",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:                 "ceph.vdo": "0"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             },
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "type": "block",
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:             "vg_name": "ceph_vg2"
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:         }
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]:     ]
Dec 05 02:20:35 compute-0 fervent_chandrasekhar[462970]: }
Dec 05 02:20:35 compute-0 systemd[1]: libpod-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope: Deactivated successfully.
Dec 05 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.797134981 +0000 UTC m=+1.011610563 container died b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d-merged.mount: Deactivated successfully.
Dec 05 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.885220875 +0000 UTC m=+1.099696447 container remove b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 05 02:20:35 compute-0 systemd[1]: libpod-conmon-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope: Deactivated successfully.
Dec 05 02:20:35 compute-0 sudo[462853]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:36 compute-0 ceph-mon[192914]: pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:36 compute-0 sudo[462992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:36 compute-0 sudo[462992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:36 compute-0 sudo[462992]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:36 compute-0 sudo[463017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:20:36 compute-0 sudo[463017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:36 compute-0 sudo[463017]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:36 compute-0 sudo[463042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:36 compute-0 sudo[463042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:36 compute-0 sudo[463042]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:36 compute-0 sudo[463067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:20:36 compute-0 sudo[463067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.096386161 +0000 UTC m=+0.098544459 container create 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 05 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.059033082 +0000 UTC m=+0.061191440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:37 compute-0 systemd[1]: Started libpod-conmon-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope.
Dec 05 02:20:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.233744869 +0000 UTC m=+0.235903167 container init 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.250255152 +0000 UTC m=+0.252413440 container start 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.257682651 +0000 UTC m=+0.259840999 container attach 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:20:37 compute-0 nervous_liskov[463143]: 167 167
Dec 05 02:20:37 compute-0 systemd[1]: libpod-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope: Deactivated successfully.
Dec 05 02:20:37 compute-0 conmon[463143]: conmon 2d895d60c514f0d5ea57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope/container/memory.events
Dec 05 02:20:37 compute-0 podman[463148]: 2025-12-05 02:20:37.348597864 +0000 UTC m=+0.059432360 container died 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1417fbb7d2e13be256f38812a6c06daf9ec7396f38f98d146e726615b3e01d29-merged.mount: Deactivated successfully.
Dec 05 02:20:37 compute-0 podman[463148]: 2025-12-05 02:20:37.42356944 +0000 UTC m=+0.134403866 container remove 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 02:20:37 compute-0 systemd[1]: libpod-conmon-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope: Deactivated successfully.
Dec 05 02:20:37 compute-0 nova_compute[349548]: 2025-12-05 02:20:37.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.661188804 +0000 UTC m=+0.062544708 container create 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.641271124 +0000 UTC m=+0.042627048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:20:37 compute-0 systemd[1]: Started libpod-conmon-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope.
Dec 05 02:20:37 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.86151876 +0000 UTC m=+0.262874754 container init 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.878673192 +0000 UTC m=+0.280029096 container start 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.88321902 +0000 UTC m=+0.284575004 container attach 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec 05 02:20:38 compute-0 ceph-mon[192914]: pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
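The two `instance data:` records above are plain Python dicts produced by libvirt discovery. A trimmed sketch, assuming the record layout shown in the log, of the fields the size meters plausibly key on (deriving the sizes from the flavor is an assumption, not something the log states):

    instance = {
        'id': '292fd084-0808-4a80-adc1-6ab1f28e188a',
        'flavor': {'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1,
                   'ephemeral': 0, 'swap': 0},
        'OS-EXT-STS:vm_state': 'running',
        'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca',
    }

    # flavor sizes are in GB; presumably the source of disk.root.size and
    # disk.ephemeral.size samples for this guest
    print(instance['id'], instance['flavor']['disk'], instance['flavor']['ephemeral'])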
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
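Each pollster episode in this log has the same shape: discovery, a coordination check (a no-op here, since no coordination group is configured), a heartbeat update, then the poll itself between the Polling/Finished markers. A condensed sketch of that control flow, with invented names (`run_pollster`, `discover`, `poll`, `heartbeats`) standing in for the manager internals:

    import datetime

    heartbeats = {}  # stand-in for the manager's heartbeat table

    def run_pollster(name, coordination_group, discover, poll):
        resources = discover()              # "Executing discovery process ..."
        if coordination_group is not None:  # "Checking if we need coordination ..."
            raise NotImplementedError("hashring partitioning would go here")
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        print(f"Polling pollster {name}")
        samples = [poll(r) for r in resources]
        print(f"Finished polling pollster {name}")
        return samples

    run_pollster("disk.root.size", None, lambda: ["instance-0000000b"], lambda r: 1)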
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:20:38.359251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:20:38.361849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.383 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.383 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
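The two disk.device.capacity samples per instance are easy to sanity-check: 1073741824 bytes is exactly 1 GiB, matching the m1.nano flavor's 1 GB disk, while 509952 bytes is 498 KiB, plausibly a small secondary device such as a config drive (an assumption; the log does not name the devices):

    GIB = 1024 ** 3
    for vol in (1073741824, 509952):
        print(f"{vol} bytes = {vol / GIB:.6f} GiB = {vol / 1024:.0f} KiB")
    # 1073741824 -> exactly 1 GiB; 509952 -> 498 KiB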
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
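Unlike the meters above, network.incoming.bytes.rate is skipped outright because its discovery step returned nothing this cycle. A sketch of that guard, with invented names (`maybe_poll`):

    def maybe_poll(name, resources, poll):
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return [poll(r) for r in resources]

    maybe_poll("network.incoming.bytes.rate", [], None)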
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:20:38.405589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:20:38.407532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 nova_compute[349548]: 2025-12-05 02:20:38.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.479 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.479 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:20:38.481858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.483 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.483 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.484 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
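The disk.device.read.latency volumes are large because they appear to be cumulative counters in nanoseconds (an assumption consistent with libvirt's *_total_times block stats). Converting the four samples above:

    NS_PER_S = 1_000_000_000
    for vol in (3200956192, 237184283, 2761905668, 175446078):
        print(f"{vol} ns = {vol / NS_PER_S:.3f} s")
    # e.g. 3200956192 ns is about 3.2 s of cumulative read wait on the first device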
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:20:38.485852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.487 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.487 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
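Pairing disk.device.read.bytes with disk.device.read.requests gives the average read size per device. Device names are not present in the log, so the dictionary keys below are placeholders:

    reads = {
        "292fd084 dev0": (30882304, 1101),
        "292fd084 dev1": (299326, 120),
        "e76adc0a dev0": (30075904, 1075),
        "e76adc0a dev1": (246078, 107),
    }
    for dev, (nbytes, nreqs) in reads.items():
        print(f"{dev}: {nbytes / nreqs / 1024:.1f} KiB per read request")
    # roughly 27 KiB per request on the first device of each guest, ~2.3 KiB on the second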
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:20:38.489373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.490 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.490 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.491 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.494 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.494 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.496 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.498 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:20:38.493763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:20:38.499464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.555 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
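Both guests report power.state volume 1. Assuming the sample carries the raw libvirt virDomainState code (which nova also exposes as the instance power state), 1 decodes as running, matching the 'OS-EXT-STS:vm_state': 'running' seen in the discovery records:

    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # both guests report power.state volume 1 -> running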
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10383107676 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:20:38.556398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:20:38.557858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:20:38.559318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
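Note the two thread ids in these records: thread 14 emits "Pollster heartbeat update" as it polls, while thread 12 logs "Updated heartbeat for ...", sometimes a few entries behind, which is why three updates land together here. A small sketch of one way such a producer/consumer split could work (queue-based, invented names; not ceilometer's actual mechanism):

    import datetime
    import queue
    import threading

    hb_queue = queue.Queue()
    status = {}

    def heartbeat_consumer():
        # drains heartbeat events posted by the polling thread
        while True:
            item = hb_queue.get()
            if item is None:  # sentinel: stop the consumer
                break
            name, ts = item
            status[name] = ts
            print(f"Updated heartbeat for {name} ({ts})")

    worker = threading.Thread(target=heartbeat_consumer)
    worker.start()
    for name in ("disk.device.write.latency", "disk.device.write.requests",
                 "network.incoming.packets"):
        hb_queue.put((name, datetime.datetime.now(datetime.timezone.utc).isoformat()))
    hb_queue.put(None)
    worker.join()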
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.573 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.576 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:20:38.577813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.578 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.579 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.582 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.582 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
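For both guests, disk.device.allocation, disk.device.usage, and disk.device.capacity report identical per-device values, consistent with non-sparse storage where the full virtual size is allocated up front (an inference, not something the log states):

    per_device = (1073741824, 509952)  # the repeated pair of sample volumes above
    samples = {
        "disk.device.capacity": per_device,
        "disk.device.usage": per_device,
        "disk.device.allocation": per_device,
    }
    assert len(set(samples.values())) == 1  # all three meters agree for these guests
    print("allocation == usage == capacity for both devices")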
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
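network.outgoing.bytes is a cumulative interface counter, while network.outgoing.bytes.delta reports the change since the previous poll — which is why the delta samples above are 0 for counters that have not moved between polls. A sketch of that derivation, with the cache layout assumed for illustration (the .rate pollster just below is related but is skipped outright when its discovery yields no new resources this cycle):

    _previous = {}  # (instance_uuid, meter) -> last cumulative reading

    def delta_sample(instance_uuid, meter, cumulative):
        key = (instance_uuid, meter)
        last = _previous.get(key)
        _previous[key] = cumulative
        if last is None or cumulative < last:
            # First observation, or the counter reset (e.g. instance reboot).
            return 0
        return cumulative - last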
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
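The fractional memory.usage values above fall out of a unit conversion: libvirt reports per-domain memory statistics in KiB, and ceilometer publishes memory.usage in MB, so a reading of 43496 KiB becomes 42.4765625 (the KiB figures here are inferred back from the logged values, not shown in the log). As a check:

    def kib_to_mb(kib: int) -> float:
        # memory.usage is published in MB; libvirt memory stats are in KiB.
        return kib / 1024.0

    assert kib_to_mb(43496) == 42.4765625    # instance 292fd084-...
    assert kib_to_mb(44516) == 43.47265625   # instance e76adc0a-...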
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 335710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 291630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
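The cpu meter is cumulative guest CPU time in nanoseconds, so the 335710000000 above is roughly 335.71 s of CPU time consumed since the instance started. Utilisation is a derived quantity; a sketch of the usual transformation from two successive samples (function name and parameters are illustrative):

    def cpu_util_percent(prev_ns, now_ns, wall_seconds, vcpus=1):
        # Fraction of available CPU time consumed between two polls.
        used_s = (now_ns - prev_ns) / 1e9
        return 100.0 * used_s / (wall_seconds * vcpus)

    # 3e9 ns consumed over a 300 s interval on 1 vCPU -> 1.0 (%)
    assert cpu_util_percent(335710000000, 335713000000, 300) == 1.0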
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
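The run of "Finished processing pollster [...]" lines above closes out one polling interval across all configured meters. A small parser for tallying which pollsters completed in a given cycle — the regex mirrors the wording above and is an assumption about its stability across releases:

    import re

    DONE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    def completed_pollsters(journal_lines):
        return [m.group(1) for line in journal_lines
                if (m := DONE.search(line)) is not None]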
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:20:38.581198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:20:38.584167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:20:38.585585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:20:38.586957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:20:38.588714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:20:38.590198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:20:38.591671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:20:38.593315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:20:38.594847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:20:38.596575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:20:38.598052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
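Thread 12 then persists the per-pollster heartbeat timestamps via _update_status; timestamps like these are what agent liveness reporting can be built on. A sketch of a staleness check over such a map — threshold and record shape are assumptions for illustration:

    import datetime

    def stale_pollsters(heartbeats, max_age_s=600):
        # heartbeats: {"cpu": datetime(...), "memory.usage": datetime(...), ...}
        now = datetime.datetime.utcnow()
        return sorted(name for name, ts in heartbeats.items()
                      if (now - ts).total_seconds() > max_age_s)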
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]: {
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_id": 0,
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "type": "bluestore"
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     },
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_id": 1,
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "type": "bluestore"
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     },
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_id": 2,
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:         "type": "bluestore"
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]:     }
Dec 05 02:20:38 compute-0 stupefied_hofstadter[463186]: }
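The JSON block above is printed by a short-lived cephadm-launched ceph container ("stupefied_hofstadter" is a podman auto-generated name; its scope teardown follows immediately below). It maps each OSD UUID to its backing LVM device, all three OSDs belonging to the same cluster fsid. Parsing it is direct:

    import json

    def osd_devices(blob: str) -> dict:
        # {0: "/dev/mapper/ceph_vg0-ceph_lv0", 1: ..., 2: ...}
        return {osd["osd_id"]: osd["device"]
                for osd in json.loads(blob).values()}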
Dec 05 02:20:39 compute-0 systemd[1]: libpod-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Deactivated successfully.
Dec 05 02:20:39 compute-0 podman[463170]: 2025-12-05 02:20:39.009958045 +0000 UTC m=+1.411313959 container died 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:20:39 compute-0 systemd[1]: libpod-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Consumed 1.119s CPU time.
Dec 05 02:20:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9-merged.mount: Deactivated successfully.
Dec 05 02:20:39 compute-0 podman[463170]: 2025-12-05 02:20:39.10128851 +0000 UTC m=+1.502644464 container remove 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:20:39 compute-0 systemd[1]: libpod-conmon-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Deactivated successfully.
Dec 05 02:20:39 compute-0 sudo[463067]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:20:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:20:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b60b39e4-1c15-4620-8d30-9fd9ba303b62 does not exist
Dec 05 02:20:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c5987b6-66a2-4217-b4fe-a3274340f6c2 does not exist
Dec 05 02:20:39 compute-0 sudo[463232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:20:39 compute-0 sudo[463232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:39 compute-0 sudo[463232]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:39 compute-0 sudo[463257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:20:39 compute-0 sudo[463257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:20:39 compute-0 sudo[463257]: pam_unix(sudo:session): session closed for user root
Dec 05 02:20:40 compute-0 ceph-mon[192914]: pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
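_run_pending_deletes is one of nova-compute's oslo.service periodic tasks; the three lines above are a complete no-op run (0 instances to clean). The pattern behind it, sketched with an illustrative spacing value (the real method lives in nova.compute.manager):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=300)  # interval is illustrative
        def _run_pending_deletes(self, context):
            # Find instances whose deletion left artifacts behind, clean
            # them up, and log "There are N instances to clean".
            pass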
Dec 05 02:20:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:40 compute-0 podman[463282]: 2025-12-05 02:20:40.721278548 +0000 UTC m=+0.125621189 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:20:40 compute-0 podman[463283]: 2025-12-05 02:20:40.727371959 +0000 UTC m=+0.137769230 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
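The two container events above are podman's periodic healthchecks firing for ovn_metadata_agent and podman_exporter: each runs the 'test' command from the container's healthcheck config_data and records health_status. The same probe can be run by hand; a subprocess sketch:

    import subprocess

    def is_healthy(container: str) -> bool:
        # Exit code 0 means the container's configured healthcheck passed.
        return subprocess.run(["podman", "healthcheck", "run", container],
                              capture_output=True).returncode == 0

    # is_healthy("ovn_metadata_agent"); is_healthy("podman_exporter")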
Dec 05 02:20:42 compute-0 ceph-mon[192914]: pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
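The recurring pgmap lines are the cluster's health digest: all 321 placement groups are active+clean, with 388 MiB used of the 60 GiB raw capacity (presumably the three LVM-backed OSDs listed earlier). A watchdog over journal lines of this shape — the regex assumes the exact wording above:

    import re

    PGMAP = re.compile(r"pgmap v\d+: (\d+) pgs: (\d+) active\+clean")

    def all_pgs_clean(line: str) -> bool:
        m = PGMAP.search(line)
        return m is not None and m.group(1) == m.group(2)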
Dec 05 02:20:42 compute-0 nova_compute[349548]: 2025-12-05 02:20:42.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:20:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:44 compute-0 ceph-mon[192914]: pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.369 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.370 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:20:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:20:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:20:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:20:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
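The audited "df" and "osd pool get-quota" commands come from client.openstack on 192.168.122.10, consistent with the Cinder RBD driver polling capacity for the "volumes" pool. A hedged sketch of issuing the same monitor command with python-rados — the client name and pool are taken from the log, the conffile path is an assumption:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",  # path assumed
                          name="client.openstack")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "df", "format": "json"})
        ret, out, err = cluster.mon_command(cmd, b"")
        pools = json.loads(out)["pools"]
    finally:
        cluster.shutdown()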
Dec 05 02:20:45 compute-0 podman[463326]: 2025-12-05 02:20:45.729790206 +0000 UTC m=+0.123993983 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 05 02:20:45 compute-0 podman[463324]: 2025-12-05 02:20:45.730005182 +0000 UTC m=+0.131672689 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 05 02:20:45 compute-0 podman[463325]: 2025-12-05 02:20:45.730223598 +0000 UTC m=+0.130692271 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec 05 02:20:46 compute-0 ceph-mon[192914]: pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:20:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:20:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.586 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.609 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.610 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:20:47 compute-0 nova_compute[349548]: 2025-12-05 02:20:47.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:47 compute-0 nova_compute[349548]: 2025-12-05 02:20:47.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:48 compute-0 ceph-mon[192914]: pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:48 compute-0 nova_compute[349548]: 2025-12-05 02:20:48.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:49 compute-0 nova_compute[349548]: 2025-12-05 02:20:49.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:49 compute-0 nova_compute[349548]: 2025-12-05 02:20:49.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:20:49 compute-0 ceph-mon[192914]: pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.106 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.107 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.109 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.109 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:20:51 compute-0 ceph-mon[192914]: pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:20:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805188336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.675 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.764 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.765 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.773 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.279 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.280 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.281 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.281 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.443 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.443 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.444 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.444 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:20:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.562 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1805188336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.656 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:20:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:20:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724534420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.175 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.191 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.212 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.218 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:20:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.449 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:53 compute-0 ceph-mon[192914]: pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:53 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1724534420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:20:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:55 compute-0 ceph-mon[192914]: pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:56 compute-0 nova_compute[349548]: 2025-12-05 02:20:56.220 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.219 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.220 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.221 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:20:56 compute-0 nova_compute[349548]: 2025-12-05 02:20:56.221 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:20:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:57 compute-0 nova_compute[349548]: 2025-12-05 02:20:57.567 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:57 compute-0 ceph-mon[192914]: pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:57 compute-0 podman[463424]: 2025-12-05 02:20:57.724024221 +0000 UTC m=+0.112444389 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:20:57 compute-0 podman[463431]: 2025-12-05 02:20:57.724239047 +0000 UTC m=+0.103081076 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 05 02:20:57 compute-0 podman[463423]: 2025-12-05 02:20:57.739794764 +0000 UTC m=+0.145283271 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 05 02:20:57 compute-0 podman[463425]: 2025-12-05 02:20:57.775212659 +0000 UTC m=+0.161517548 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:20:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:20:58 compute-0 nova_compute[349548]: 2025-12-05 02:20:58.454 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:20:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:59 compute-0 ceph-mon[192914]: pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:20:59 compute-0 podman[158197]: time="2025-12-05T02:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8675 "" "Go-http-client/1.1"
Dec 05 02:20:59 compute-0 sshd-session[463504]: Connection closed by authenticating user root 123.253.22.45 port 38954 [preauth]
Dec 05 02:21:00 compute-0 nova_compute[349548]: 2025-12-05 02:21:00.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:00 compute-0 nova_compute[349548]: 2025-12-05 02:21:00.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:21:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:21:01 compute-0 ceph-mon[192914]: pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:02 compute-0 nova_compute[349548]: 2025-12-05 02:21:02.568 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:03 compute-0 nova_compute[349548]: 2025-12-05 02:21:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:03 compute-0 nova_compute[349548]: 2025-12-05 02:21:03.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:03 compute-0 ceph-mon[192914]: pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:05 compute-0 ceph-mon[192914]: pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:07 compute-0 nova_compute[349548]: 2025-12-05 02:21:07.570 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:07 compute-0 ceph-mon[192914]: pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:08 compute-0 nova_compute[349548]: 2025-12-05 02:21:08.460 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:09 compute-0 ceph-mon[192914]: pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:11 compute-0 podman[463509]: 2025-12-05 02:21:11.719764571 +0000 UTC m=+0.119495388 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:21:11 compute-0 podman[463508]: 2025-12-05 02:21:11.736802919 +0000 UTC m=+0.139719824 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 02:21:11 compute-0 ceph-mon[192914]: pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:12 compute-0 nova_compute[349548]: 2025-12-05 02:21:12.574 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:12 compute-0 sshd-session[463506]: Connection reset by authenticating user root 45.135.232.92 port 30454 [preauth]
Dec 05 02:21:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:13 compute-0 nova_compute[349548]: 2025-12-05 02:21:13.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:13 compute-0 ceph-mon[192914]: pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:14 compute-0 sshd-session[463548]: Invalid user admin from 45.135.232.92 port 30462
Dec 05 02:21:15 compute-0 sshd-session[463548]: Connection reset by invalid user admin 45.135.232.92 port 30462 [preauth]
Dec 05 02:21:15 compute-0 ceph-mon[192914]: pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:21:16
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta']
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:21:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:16 compute-0 podman[463554]: 2025-12-05 02:21:16.733724341 +0000 UTC m=+0.119025694 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:21:16 compute-0 podman[463553]: 2025-12-05 02:21:16.748703862 +0000 UTC m=+0.146178817 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Dec 05 02:21:16 compute-0 podman[463552]: 2025-12-05 02:21:16.76322531 +0000 UTC m=+0.165103298 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:21:17 compute-0 sshd-session[463550]: Connection reset by authenticating user root 45.135.232.92 port 39768 [preauth]
Dec 05 02:21:17 compute-0 nova_compute[349548]: 2025-12-05 02:21:17.579 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:17 compute-0 ceph-mon[192914]: pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:21:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:18 compute-0 nova_compute[349548]: 2025-12-05 02:21:18.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:19 compute-0 ceph-mon[192914]: pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:19 compute-0 sshd-session[463606]: Invalid user cisco from 45.135.232.92 port 39798
Dec 05 02:21:20 compute-0 sshd-session[463606]: Connection reset by invalid user cisco 45.135.232.92 port 39798 [preauth]
Dec 05 02:21:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 2 op/s
Dec 05 02:21:21 compute-0 ceph-mon[192914]: pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 2 op/s
Dec 05 02:21:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:22 compute-0 nova_compute[349548]: 2025-12-05 02:21:22.581 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:22 compute-0 sshd-session[463609]: Invalid user admin from 45.135.232.92 port 39834
Dec 05 02:21:23 compute-0 sshd-session[463609]: Connection reset by invalid user admin 45.135.232.92 port 39834 [preauth]
Dec 05 02:21:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:23 compute-0 nova_compute[349548]: 2025-12-05 02:21:23.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:23 compute-0 ceph-mon[192914]: pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:25 compute-0 ceph-mon[192914]: pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:21:27 compute-0 nova_compute[349548]: 2025-12-05 02:21:27.585 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:27 compute-0 ceph-mon[192914]: pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec 05 02:21:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:28 compute-0 nova_compute[349548]: 2025-12-05 02:21:28.475 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:21:28 compute-0 podman[463611]: 2025-12-05 02:21:28.712875753 +0000 UTC m=+0.105388431 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 05 02:21:28 compute-0 podman[463612]: 2025-12-05 02:21:28.747840885 +0000 UTC m=+0.134854909 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:21:28 compute-0 podman[463613]: 2025-12-05 02:21:28.776821269 +0000 UTC m=+0.160570031 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 02:21:28 compute-0 podman[463614]: 2025-12-05 02:21:28.778417443 +0000 UTC m=+0.138124370 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:21:29 compute-0 podman[158197]: time="2025-12-05T02:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec 05 02:21:29 compute-0 ceph-mon[192914]: pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:21:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:21:31 compute-0 ceph-mon[192914]: pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 05 02:21:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 8.6 KiB/s wr, 2 op/s
Dec 05 02:21:32 compute-0 nova_compute[349548]: 2025-12-05 02:21:32.588 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:33 compute-0 nova_compute[349548]: 2025-12-05 02:21:33.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:33 compute-0 ceph-mon[192914]: pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 8.6 KiB/s wr, 2 op/s
Dec 05 02:21:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:35 compute-0 ceph-mon[192914]: pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:37 compute-0 nova_compute[349548]: 2025-12-05 02:21:37.592 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:37 compute-0 ceph-mon[192914]: pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:38 compute-0 nova_compute[349548]: 2025-12-05 02:21:38.482 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:39 compute-0 sudo[463698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:39 compute-0 sudo[463698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:39 compute-0 sudo[463698]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:39 compute-0 sudo[463723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:21:39 compute-0 sudo[463723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:39 compute-0 sudo[463723]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:39 compute-0 sudo[463748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:39 compute-0 sudo[463748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:39 compute-0 sudo[463748]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:39 compute-0 ceph-mon[192914]: pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec 05 02:21:40 compute-0 sudo[463773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 02:21:40 compute-0 sudo[463773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 02:21:40 compute-0 podman[463867]: 2025-12-05 02:21:40.876866063 +0000 UTC m=+0.133394708 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:21:41 compute-0 podman[463867]: 2025-12-05 02:21:41.024150909 +0000 UTC m=+0.280679504 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:21:41 compute-0 podman[463966]: 2025-12-05 02:21:41.932075639 +0000 UTC m=+0.121015779 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec 05 02:21:41 compute-0 podman[463967]: 2025-12-05 02:21:41.95953289 +0000 UTC m=+0.144995383 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:21:41 compute-0 ceph-mon[192914]: pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 05 02:21:42 compute-0 sudo[463773]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:21:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:21:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:42 compute-0 sudo[464055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:42 compute-0 sudo[464055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:42 compute-0 sudo[464055]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:42 compute-0 sudo[464080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:21:42 compute-0 sudo[464080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:42 compute-0 sudo[464080]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:42 compute-0 sudo[464105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:42 compute-0 sudo[464105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:42 compute-0 nova_compute[349548]: 2025-12-05 02:21:42.596 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:42 compute-0 sudo[464105]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:42 compute-0 sudo[464130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:21:42 compute-0 sudo[464130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:43 compute-0 ceph-mon[192914]: pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:43 compute-0 sudo[464130]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 05ca3532-828f-43a6-a572-23b8e9172fd5 does not exist
Dec 05 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 285a2eae-9077-4e33-92ab-1902929aafd3 does not exist
Dec 05 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1119e751-fca1-4640-98dd-9d1938c924f9 does not exist
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:21:43 compute-0 nova_compute[349548]: 2025-12-05 02:21:43.485 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:43 compute-0 sudo[464184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:43 compute-0 sudo[464184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:43 compute-0 sudo[464184]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:43 compute-0 sudo[464209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:21:43 compute-0 sudo[464209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:43 compute-0 sudo[464209]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:43 compute-0 sudo[464234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:43 compute-0 sudo[464234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:43 compute-0 sudo[464234]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:43 compute-0 sudo[464259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:21:43 compute-0 sudo[464259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.544761088 +0000 UTC m=+0.074224036 container create 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:21:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:44 compute-0 systemd[1]: Started libpod-conmon-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope.
Dec 05 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.524591722 +0000 UTC m=+0.054054670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.676547379 +0000 UTC m=+0.206010377 container init 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.69080647 +0000 UTC m=+0.220269448 container start 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 05 02:21:44 compute-0 suspicious_wright[464336]: 167 167
Dec 05 02:21:44 compute-0 systemd[1]: libpod-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope: Deactivated successfully.
Dec 05 02:21:44 compute-0 conmon[464336]: conmon 39adaf01a479cdbd8c92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope/container/memory.events
Dec 05 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.699626538 +0000 UTC m=+0.229089486 container attach 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:21:44 compute-0 podman[464342]: 2025-12-05 02:21:44.770316013 +0000 UTC m=+0.049769209 container died 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9927bfbd29561224a3dc5ef1847c0d8adeda4f2ea8d25d5d2223132d8fa432d3-merged.mount: Deactivated successfully.
Dec 05 02:21:44 compute-0 podman[464342]: 2025-12-05 02:21:44.854024034 +0000 UTC m=+0.133477210 container remove 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:21:44 compute-0 systemd[1]: libpod-conmon-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope: Deactivated successfully.
Dec 05 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.115 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.153474404 +0000 UTC m=+0.081025536 container create f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.124380857 +0000 UTC m=+0.051932029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:45 compute-0 systemd[1]: Started libpod-conmon-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope.
Dec 05 02:21:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:45 compute-0 ceph-mon[192914]: pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.32424088 +0000 UTC m=+0.251792082 container init f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.339317084 +0000 UTC m=+0.266868236 container start f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.346798504 +0000 UTC m=+0.274349656 container attach f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.384 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.398 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:21:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:21:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:21:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:21:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:21:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:21:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:21:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.625 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.643 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.644 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:21:46 compute-0 eager_jemison[464377]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:21:46 compute-0 eager_jemison[464377]: --> relative data size: 1.0
Dec 05 02:21:46 compute-0 eager_jemison[464377]: --> All data devices are unavailable
Dec 05 02:21:46 compute-0 systemd[1]: libpod-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Deactivated successfully.
Dec 05 02:21:46 compute-0 systemd[1]: libpod-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Consumed 1.301s CPU time.
Dec 05 02:21:46 compute-0 podman[464406]: 2025-12-05 02:21:46.775616724 +0000 UTC m=+0.050885740 container died f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb-merged.mount: Deactivated successfully.
Dec 05 02:21:46 compute-0 podman[464406]: 2025-12-05 02:21:46.888400961 +0000 UTC m=+0.163669907 container remove f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 05 02:21:46 compute-0 systemd[1]: libpod-conmon-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Deactivated successfully.
Dec 05 02:21:46 compute-0 podman[464419]: 2025-12-05 02:21:46.952873822 +0000 UTC m=+0.108384195 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 05 02:21:46 compute-0 sudo[464259]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:46 compute-0 podman[464421]: 2025-12-05 02:21:46.961824994 +0000 UTC m=+0.107886412 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 05 02:21:46 compute-0 podman[464420]: 2025-12-05 02:21:46.979482549 +0000 UTC m=+0.133239183 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release-0.7.12=, version=9.4)
Dec 05 02:21:47 compute-0 sudo[464471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:47 compute-0 sudo[464471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:47 compute-0 sudo[464471]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:47 compute-0 sudo[464498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:21:47 compute-0 sudo[464498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:47 compute-0 sudo[464498]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:47 compute-0 sudo[464523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:47 compute-0 sudo[464523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:47 compute-0 sudo[464523]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:47 compute-0 ceph-mon[192914]: pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:47 compute-0 sudo[464548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:21:47 compute-0 sudo[464548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:47 compute-0 nova_compute[349548]: 2025-12-05 02:21:47.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:47 compute-0 podman[464613]: 2025-12-05 02:21:47.987109679 +0000 UTC m=+0.080855232 container create 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:47.957372053 +0000 UTC m=+0.051117636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:48 compute-0 systemd[1]: Started libpod-conmon-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope.
Dec 05 02:21:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.126356269 +0000 UTC m=+0.220101912 container init 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.14702078 +0000 UTC m=+0.240766353 container start 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.153508312 +0000 UTC m=+0.247253935 container attach 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 02:21:48 compute-0 dreamy_mendeleev[464629]: 167 167
Dec 05 02:21:48 compute-0 systemd[1]: libpod-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope: Deactivated successfully.
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.160103807 +0000 UTC m=+0.253849360 container died 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14ad9a9f8075721f1042e05b95a69441bc05aac097b564dc590dc4b942e4d48-merged.mount: Deactivated successfully.
Dec 05 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.231442771 +0000 UTC m=+0.325188344 container remove 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:21:48 compute-0 systemd[1]: libpod-conmon-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope: Deactivated successfully.
Dec 05 02:21:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.479343433 +0000 UTC m=+0.072682132 container create 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:21:48 compute-0 nova_compute[349548]: 2025-12-05 02:21:48.487 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.454516916 +0000 UTC m=+0.047855645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:48 compute-0 systemd[1]: Started libpod-conmon-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope.
Dec 05 02:21:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:48 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.647233869 +0000 UTC m=+0.240572658 container init 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.658023932 +0000 UTC m=+0.251362631 container start 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.663358932 +0000 UTC m=+0.256697711 container attach 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:21:49 compute-0 nova_compute[349548]: 2025-12-05 02:21:49.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:49 compute-0 silly_joliot[464668]: {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     "0": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "devices": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "/dev/loop3"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             ],
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_name": "ceph_lv0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_size": "21470642176",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "name": "ceph_lv0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "tags": {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_name": "ceph",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.crush_device_class": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.encrypted": "0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_id": "0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.vdo": "0"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             },
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "vg_name": "ceph_vg0"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         }
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     ],
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     "1": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "devices": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "/dev/loop4"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             ],
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_name": "ceph_lv1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_size": "21470642176",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "name": "ceph_lv1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "tags": {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_name": "ceph",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.crush_device_class": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.encrypted": "0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_id": "1",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.vdo": "0"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             },
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "vg_name": "ceph_vg1"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         }
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     ],
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     "2": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "devices": [
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "/dev/loop5"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             ],
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_name": "ceph_lv2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_size": "21470642176",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "name": "ceph_lv2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "tags": {
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.cluster_name": "ceph",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.crush_device_class": "",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.encrypted": "0",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osd_id": "2",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:                 "ceph.vdo": "0"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             },
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "type": "block",
Dec 05 02:21:49 compute-0 silly_joliot[464668]:             "vg_name": "ceph_vg2"
Dec 05 02:21:49 compute-0 silly_joliot[464668]:         }
Dec 05 02:21:49 compute-0 silly_joliot[464668]:     ]
Dec 05 02:21:49 compute-0 silly_joliot[464668]: }
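
[annotation] The JSON block emitted by the silly_joliot container is the output of the cephadm-wrapped "ceph-volume lvm list --format json" launched via sudo at 02:21:47 above: the top-level keys are OSD ids, each mapping to the logical volumes backing that OSD. A sketch that reproduces the listing and reduces it to an osd_id -> lv_path map, assuming cephadm is installed on the host and the fsid matches this cluster (the --image and --timeout options from the logged invocation are omitted for brevity):

    import json
    import subprocess

    FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"

    # Mirrors the ceph-volume invocation visible in the sudo entry above.
    cmd = [
        "cephadm", "ceph-volume", "--fsid", FSID, "--",
        "lvm", "list", "--format", "json",
    ]
    report = json.loads(
        subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    )
    # Flatten to {osd_id: lv_path}, e.g. {0: '/dev/ceph_vg0/ceph_lv0', ...}.
    osd_devices = {
        int(osd_id): lv["lv_path"]
        for osd_id, lvs in report.items()
        for lv in lvs
    }
    print(osd_devices)
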
Dec 05 02:21:49 compute-0 systemd[1]: libpod-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope: Deactivated successfully.
Dec 05 02:21:49 compute-0 podman[464651]: 2025-12-05 02:21:49.571834817 +0000 UTC m=+1.165173536 container died 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815-merged.mount: Deactivated successfully.
Dec 05 02:21:49 compute-0 ceph-mon[192914]: pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:49 compute-0 podman[464651]: 2025-12-05 02:21:49.684392948 +0000 UTC m=+1.277731677 container remove 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:21:49 compute-0 systemd[1]: libpod-conmon-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope: Deactivated successfully.
Dec 05 02:21:49 compute-0 sudo[464548]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:49 compute-0 sudo[464689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:49 compute-0 sudo[464689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:49 compute-0 sudo[464689]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:50 compute-0 sudo[464714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:21:50 compute-0 sudo[464714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:50 compute-0 sudo[464714]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:50 compute-0 nova_compute[349548]: 2025-12-05 02:21:50.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:50 compute-0 sudo[464739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:50 compute-0 sudo[464739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:50 compute-0 sudo[464739]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:50 compute-0 sudo[464764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:21:50 compute-0 sudo[464764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:50 compute-0 podman[464825]: 2025-12-05 02:21:50.927876723 +0000 UTC m=+0.086489120 container create 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:21:50 compute-0 podman[464825]: 2025-12-05 02:21:50.892035526 +0000 UTC m=+0.050647983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:51 compute-0 systemd[1]: Started libpod-conmon-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope.
Dec 05 02:21:51 compute-0 nova_compute[349548]: 2025-12-05 02:21:51.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:51 compute-0 nova_compute[349548]: 2025-12-05 02:21:51.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:21:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.098653949 +0000 UTC m=+0.257266406 container init 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.116481939 +0000 UTC m=+0.275094306 container start 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.122800746 +0000 UTC m=+0.281413143 container attach 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 05 02:21:51 compute-0 exciting_knuth[464840]: 167 167
Dec 05 02:21:51 compute-0 systemd[1]: libpod-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope: Deactivated successfully.
Dec 05 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.129503155 +0000 UTC m=+0.288115572 container died 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a02b0eb470474e9c2a49b7221ccd94a78c2ea2a1d226cccb06eaa7a429b091-merged.mount: Deactivated successfully.
Dec 05 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.216408615 +0000 UTC m=+0.375021012 container remove 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:21:51 compute-0 systemd[1]: libpod-conmon-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope: Deactivated successfully.
Dec 05 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.525400974 +0000 UTC m=+0.093854897 container create 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.489594848 +0000 UTC m=+0.058048811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:21:51 compute-0 systemd[1]: Started libpod-conmon-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope.
Dec 05 02:21:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:21:51 compute-0 ceph-mon[192914]: pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.714317869 +0000 UTC m=+0.282771842 container init 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.737620714 +0000 UTC m=+0.306074637 container start 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.746286357 +0000 UTC m=+0.314740340 container attach 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:21:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:21:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587739206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.625 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
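
[annotation] nova-compute's resource audit sizes its RBD backend by shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", which is exactly the mon_command dispatch recorded by ceph-mon just above. A sketch running the same probe and pulling the cluster-wide figures, assuming the client.openstack keyring and ceph.conf are readable by the calling user:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    # The JSON report carries cluster totals under "stats" and
    # per-pool usage under "pools".
    stats = json.loads(out)["stats"]
    print("avail bytes:", stats["total_avail_bytes"])
    print("used bytes: ", stats["total_used_bytes"])
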
Dec 05 02:21:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1587739206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.740 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:21:53 compute-0 sweet_beaver[464880]: {
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_id": 0,
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "type": "bluestore"
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     },
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_id": 1,
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "type": "bluestore"
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     },
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_id": 2,
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:         "type": "bluestore"
Dec 05 02:21:53 compute-0 sweet_beaver[464880]:     }
Dec 05 02:21:53 compute-0 sweet_beaver[464880]: }
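
[annotation] The sweet_beaver JSON is the companion "ceph-volume raw list" run requested via sudo at 02:21:50: it is keyed by osd_uuid rather than osd_id and reports device-mapper paths instead of LV paths. A consistency-check sketch over the two documents above; the variable names are illustrative, and the two reports can be loaded however suits your tooling:

    def cross_check(lvm_report: dict, raw_report: dict) -> None:
        """Assert every OSD in 'lvm list' also appears in 'raw list'."""
        for osd_id, lvs in lvm_report.items():
            for lv in lvs:
                fsid = lv["tags"]["ceph.osd_fsid"]
                assert fsid in raw_report, f"OSD {osd_id} missing from raw list"
                assert raw_report[fsid]["osd_id"] == int(osd_id)
                assert raw_report[fsid]["type"] == "bluestore"
        print("lvm list and raw list agree")
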
Dec 05 02:21:53 compute-0 systemd[1]: libpod-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Deactivated successfully.
Dec 05 02:21:53 compute-0 systemd[1]: libpod-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Consumed 1.282s CPU time.
Dec 05 02:21:53 compute-0 podman[464935]: 2025-12-05 02:21:53.142729718 +0000 UTC m=+0.065414468 container died 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c-merged.mount: Deactivated successfully.
Dec 05 02:21:53 compute-0 podman[464935]: 2025-12-05 02:21:53.249467376 +0000 UTC m=+0.172152116 container remove 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:21:53 compute-0 systemd[1]: libpod-conmon-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Deactivated successfully.
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.319 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.320 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3454MB free_disk=59.89703369140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.321 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.322 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:21:53 compute-0 sudo[464764]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8c027028-f77e-47bc-84eb-4b1776dca2ed does not exist
Dec 05 02:21:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 790d325f-67f9-4c67-8596-a6b94e915f9f does not exist
Dec 05 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.438 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.439 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.440 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.440 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.498 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:21:53 compute-0 sudo[464947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:21:53 compute-0 sudo[464947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:53 compute-0 sudo[464947]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:53 compute-0 sudo[464973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:21:53 compute-0 sudo[464973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:21:53 compute-0 sudo[464973]: pam_unix(sudo:session): session closed for user root
Dec 05 02:21:53 compute-0 ceph-mon[192914]: pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec 05 02:21:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999271080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.027 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
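The "Running cmd (subprocess)" / "returned: 0" pair brackets an oslo.concurrency processutils call; Nova shells out to ceph df to size its RBD-backed storage. A sketch with the same command line (client id and conf path taken from the log; the JSON field access assumes ceph df's usual schema):

    import json

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError on
    # a non-zero exit, which is why the log only ever records "returned: 0".
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])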
Dec 05 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.042 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.070 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.072 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.073 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
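The inventory dict logged just above fixes what the scheduler can place here: placement treats capacity per resource class as (total - reserved) × allocation_ratio. Plugging in this host's values:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2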
Dec 05 02:21:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3999271080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:21:55 compute-0 ceph-mon[192914]: pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.221 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.222 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.223 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:21:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.096 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.097 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:57 compute-0 ceph-mon[192914]: pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:21:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:21:58 compute-0 nova_compute[349548]: 2025-12-05 02:21:58.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:21:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 05 02:21:59 compute-0 podman[465022]: 2025-12-05 02:21:59.713476352 +0000 UTC m=+0.101232935 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec 05 02:21:59 compute-0 podman[465020]: 2025-12-05 02:21:59.741956572 +0000 UTC m=+0.129685164 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:21:59 compute-0 podman[158197]: time="2025-12-05T02:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:21:59 compute-0 podman[465019]: 2025-12-05 02:21:59.756814509 +0000 UTC m=+0.146724832 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:21:59 compute-0 ceph-mon[192914]: pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec 05 02:21:59 compute-0 podman[465021]: 2025-12-05 02:21:59.763401594 +0000 UTC m=+0.144597742 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 05 02:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8667 "" "Go-http-client/1.1"
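The GET /v4.9.3/libpod/... lines are the podman API service answering a scraper; the podman_exporter container configured further down points CONTAINER_HOST at unix:///run/podman/podman.sock. A sketch of the same query, assuming the third-party requests-unixsocket package and that socket path:

    import requests_unixsocket  # assumption: requests-unixsocket is installed

    session = requests_unixsocket.Session()
    # Same endpoint as the journal line above; the socket path is URL-encoded.
    resp = session.get(
        'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
        '/v4.9.3/libpod/containers/json?all=true')
    for c in resp.json():
        print(c.get('Names'), c.get('State'))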
Dec 05 02:22:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
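The exporter errors above mean no <daemon>.<pid>.ctl control sockets were found to aim its appctl calls at; on a compute node, ovn-northd and a standalone ovsdb-server are not expected to be running, so these errors recur on every scrape. A quick way to list the control sockets that do exist (runtime directories assumed from the standard OVS/OVN layout):

    import glob

    # Control sockets are created as <daemon>.<pid>.ctl under the run dirs.
    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        print(pattern, '->', glob.glob(pattern))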
Dec 05 02:22:01 compute-0 ceph-mon[192914]: pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:02 compute-0 nova_compute[349548]: 2025-12-05 02:22:02.608 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:03 compute-0 nova_compute[349548]: 2025-12-05 02:22:03.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:03 compute-0 ceph-mon[192914]: pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:05 compute-0 ceph-mon[192914]: pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:07 compute-0 nova_compute[349548]: 2025-12-05 02:22:07.609 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:07 compute-0 ceph-mon[192914]: pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:08 compute-0 nova_compute[349548]: 2025-12-05 02:22:08.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:09 compute-0 ceph-mon[192914]: pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec 05 02:22:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:22:11 compute-0 ceph-mon[192914]: pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec 05 02:22:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:12 compute-0 nova_compute[349548]: 2025-12-05 02:22:12.612 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:12 compute-0 podman[465107]: 2025-12-05 02:22:12.715164117 +0000 UTC m=+0.121556405 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 02:22:12 compute-0 podman[465108]: 2025-12-05 02:22:12.734798618 +0000 UTC m=+0.134072146 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:22:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:13 compute-0 nova_compute[349548]: 2025-12-05 02:22:13.503 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:13 compute-0 ceph-mon[192914]: pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:15 compute-0 ceph-mon[192914]: pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:22:16
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
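This balancer pass ran in upmap mode with a 5% misplaced ceiling and prepared 0 of at most 10 changes: with all 321 PGs active+clean across the pools listed, there is nothing to move. The same state can be queried on demand; a sketch, assuming `ceph balancer status` keeps printing its usual JSON object:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'balancer', 'status'],
        capture_output=True, text=True, check=True).stdout
    status = json.loads(out)  # assumption: the command emits a JSON object
    print(status.get('mode'), status.get('active'))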
Dec 05 02:22:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:17 compute-0 nova_compute[349548]: 2025-12-05 02:22:17.616 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:17 compute-0 podman[465149]: 2025-12-05 02:22:17.720796159 +0000 UTC m=+0.124269231 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:22:17 compute-0 podman[465151]: 2025-12-05 02:22:17.726922901 +0000 UTC m=+0.110560436 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:22:17 compute-0 podman[465150]: 2025-12-05 02:22:17.73114486 +0000 UTC m=+0.126052742 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec 05 02:22:17 compute-0 ceph-mon[192914]: pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:22:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:18 compute-0 nova_compute[349548]: 2025-12-05 02:22:18.507 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:19 compute-0 ceph-mon[192914]: pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:21 compute-0 ceph-mon[192914]: pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:22 compute-0 nova_compute[349548]: 2025-12-05 02:22:22.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:23 compute-0 nova_compute[349548]: 2025-12-05 02:22:23.511 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:23 compute-0 ceph-mon[192914]: pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec 05 02:22:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:25 compute-0 ceph-mon[192914]: pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1314 writes, 5712 keys, 1314 commit groups, 1.0 writes per commit group, ingest: 8.62 MB, 0.01 MB/s
                                            Interval WAL: 1314 writes, 1314 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    102.3      0.54              0.25        31    0.017       0      0       0.0       0.0
                                              L6      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    124.2    102.0      2.19              0.97        30    0.073    159K    16K       0.0       0.0
                                             Sum      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     99.7    102.1      2.73              1.22        61    0.045    159K    16K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    123.7    125.4      0.31              0.16         8    0.038     25K   2045       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    124.2    102.0      2.19              0.97        30    0.073    159K    16K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    103.2      0.53              0.25        30    0.018       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4200.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.054, interval 0.007
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.27 GB write, 0.07 MB/s write, 0.27 GB read, 0.06 MB/s read, 2.7 seconds
                                            Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 32.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000266 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2067,31.01 MB,10.2003%) FilterBlock(62,452.73 KB,0.145435%) IndexBlock(62,747.39 KB,0.24009%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
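Two quick consistency checks on the dump above, plus a tie-in to the mon's cache-sizing lines that recur throughout this section:

    # Interval figures: 1314 WAL writes over the 600 s interval reported above.
    print(1314 / 600)         # ~2.19 writes/s
    print(8.62 / 600)         # ingest ~0.014 MB/s, logged (rounded) as 0.01 MB/s

    # kv_alloc from the mon's _set_new_cache_sizes lines, in bytes:
    print(318767104 / 2**20)  # 304.0 MiB, matching "capacity: 304.00 MB" above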
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
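The autoscaler's pg targets are consistent with pg_target = usage_ratio × bias × (mon_target_pg_per_osd × osd_count). With the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by this 60 GiB cluster, the budget is 300; that multiplier is inferred from the logged values, not stated in the log:

    PG_BUDGET = 100 * 3  # assumed: mon_target_pg_per_osd=100, 3 OSDs

    vms = 0.0015211533217750863 * 1.0 * PG_BUDGET   # usage_ratio * bias * budget
    meta = 5.087256625643029e-07 * 4.0 * PG_BUDGET
    print(vms)   # 0.4563459965... -> quantized to 32, as logged for 'vms'
    print(meta)  # 0.0006104707... -> quantized to 16 for 'cephfs.cephfs.meta'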
Dec 05 02:22:27 compute-0 nova_compute[349548]: 2025-12-05 02:22:27.622 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:27 compute-0 ceph-mon[192914]: pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:28 compute-0 nova_compute[349548]: 2025-12-05 02:22:28.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:29 compute-0 podman[158197]: time="2025-12-05T02:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8666 "" "Go-http-client/1.1"
Dec 05 02:22:29 compute-0 ceph-mon[192914]: pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:30 compute-0 podman[465208]: 2025-12-05 02:22:30.693629025 +0000 UTC m=+0.091883561 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:22:30 compute-0 podman[465207]: 2025-12-05 02:22:30.701955099 +0000 UTC m=+0.116949755 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:22:30 compute-0 podman[465215]: 2025-12-05 02:22:30.723570746 +0000 UTC m=+0.105800832 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 05 02:22:30 compute-0 podman[465214]: 2025-12-05 02:22:30.768148998 +0000 UTC m=+0.154695465 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 05 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
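The appctl.go errors above reduce to one condition: the exporter looks for per-daemon control sockets (named <daemon>.<pid>.ctl) under the run directories it mounts (/run/openvswitch and /run/ovn, per the exporter's volume list above) and finds none for ovsdb-server or ovn-northd. On a compute node ovn-northd normally does not run at all, so that pair of errors is expected noise. A sketch of the same check, with the directory layout taken from the config above as an assumption:

#!/usr/bin/env python3
# Sketch: reproduce the "no control socket files found" probe.
# ovs-appctl reaches a daemon via its <name>.<pid>.ctl socket in the
# daemon's run directory; paths below follow the exporter's mounts.
import glob
import os

RUN_DIRS = {
    "ovsdb-server": "/run/openvswitch",
    "ovn-northd": "/run/ovn",
}

def control_sockets(daemon: str) -> list[str]:
    pattern = os.path.join(RUN_DIRS[daemon], f"{daemon}.*.ctl")
    return glob.glob(pattern)

for daemon in RUN_DIRS:
    socks = control_sockets(daemon)
    if socks:
        print(f"{daemon}: {socks}")
    else:
        print(f"{daemon}: no control socket files found")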
Dec 05 02:22:32 compute-0 ceph-mon[192914]: pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:32 compute-0 nova_compute[349548]: 2025-12-05 02:22:32.626 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:33 compute-0 nova_compute[349548]: 2025-12-05 02:22:33.518 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:34 compute-0 ceph-mon[192914]: pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:36 compute-0 ceph-mon[192914]: pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
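The ceph-mon/ceph-mgr pgmap records repeat the same capacity summary on every tick. A sketch of pulling the figures out of one such line with a regex keyed to the exact format seen in this log (this is log scraping, not a stable ceph interface):

#!/usr/bin/env python3
# Sketch: parse a pgmap summary line as it appears above.
import re

LINE = ("pgmap v2208: 321 pgs: 321 active+clean; "
        "236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail")

m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
    LINE,
)
if m:
    # e.g. {'ver': '2208', 'pgs': '321', 'data': '236 MiB', ...}
    print(m.groupdict())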
Dec 05 02:22:37 compute-0 nova_compute[349548]: 2025-12-05 02:22:37.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:38 compute-0 ceph-mon[192914]: pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
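The registration burst above (one "Registering pollster" record per stevedore extension) shows the manager handing every pollster to one shared ThreadPoolExecutor together with shared cache dictionaries, and the warning at 02:22:38.326 notes only one worker thread is available for them. A stdlib-only sketch of that fan-out pattern; the poll function and pollster names here are illustrative, not ceilometer's API:

#!/usr/bin/env python3
# Sketch: submit each pollster to a shared executor with shared caches,
# mirroring "Registering pollster [...] via executor [ThreadPoolExecutor]
# with cache [{}] ... and discovery cache [{}]" above.
from concurrent.futures import ThreadPoolExecutor

def poll(name: str, cache: dict, discovery_cache: dict) -> str:
    # A real pollster would query libvirt here; we just record a sample.
    cache[name] = cache.get(name, 0) + 1
    return f"{name}: sample #{cache[name]}"

cache: dict = {}
discovery_cache: dict = {}  # shared across pollsters, as in the log
pollsters = ["disk.root.size", "disk.device.capacity", "power.state"]

# One worker, as in "Processing pollsters for [pollsters] with [1] threads".
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, p, cache, discovery_cache)
               for p in pollsters]
    for f in futures:
        print(f.result())

With max_workers=1 the pollsters execute sequentially, which is exactly why the manager warns that a long pollster list will stretch the polling cycle.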
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.345 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
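Each discovery record above is a plain dict; the fields the later pollsters rely on (instance id, flavor sizing, vm_state, the metering.server_group metadata key) can be read straight out of it. A sketch over an abridged copy of the first record:

#!/usr/bin/env python3
# Sketch: extract the fields used downstream from one discovery dict.
# The literal below is copied (abridged) from the log record above.
instance = {
    "id": "292fd084-0808-4a80-adc1-6ab1f28e188a",
    "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1},
    "OS-EXT-STS:vm_state": "running",
    "metadata": {"metering.server_group":
                 "92ca195d-98d1-443c-9947-dcb7ca7b926a"},
}

flavor = instance["flavor"]
print(instance["id"],
      flavor["name"],
      f"{flavor['vcpus']} vCPU / {flavor['ram']} MiB RAM / "
      f"{flavor['disk']} GiB disk",
      instance["OS-EXT-STS:vm_state"])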
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:22:38.346661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:22:38.348966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.371 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.372 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.397 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.398 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.401 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:22:38.401244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.403 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:22:38.404789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.457 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.458 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 nova_compute[349548]: 2025-12-05 02:22:38.522 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.523 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 31304192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.523 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2882860455 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:22:38.524751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 200982064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
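The disk.device.read.latency volumes above are cumulative counters (total nanoseconds spent on reads, per device, from libvirt block stats), so a usable latency figure comes from differencing two consecutive polls and dividing by the request-count delta over the same interval. A worked sketch; the first sample pair is taken from the log, the second poll's values are hypothetical:

#!/usr/bin/env python3
# Sketch: average per-request read latency between two polls.
# First sample copied from the log above (instance 292fd084..., vda);
# second sample is a hypothetical later poll of the same counters.
t0_ns, t0_reqs = 3_200_956_192, 1_101   # cumulative ns, cumulative reads
t1_ns, t1_reqs = 3_210_000_000, 1_111   # hypothetical next poll

delta_ns = t1_ns - t0_ns
delta_reqs = t1_reqs - t0_reqs
if delta_reqs > 0:
    # 9,043,808 ns over 10 reads -> about 0.90 ms per read
    print(f"avg read latency: {delta_ns / delta_reqs / 1e6:.2f} ms/request")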
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:22:38.527110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:22:38.529320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 73129984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:22:38.531604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:22:38.534152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.565 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.600 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
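The power.state samples report the libvirt domain state code as the volume, assuming the pollster passes the code through unchanged; 1 is VIR_DOMAIN_RUNNING, consistent with both instances showing vm_state 'running' in the discovery records above. A sketch of the mapping per libvirt's virDomainState enum:

#!/usr/bin/env python3
# Sketch: decode power.state sample values using libvirt's
# virDomainState codes (value 1 == running, as seen above).
LIBVIRT_DOMAIN_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",     # domain is being shut down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

for sample in (1, 1):  # the two power.state volumes in the log
    print(LIBVIRT_DOMAIN_STATE.get(sample, "unknown"))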
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10991220303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.603 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:22:38.601870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:22:38.604621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:22:38.606776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.611 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.615 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:22:38.616617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:22:38.618628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
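disk.device.allocation is a per-device meter, which is why each instance appears twice above: a 1 GiB block device plus a small ~498 KiB one, possibly a config drive. Per-device pollsters conventionally key each sample by instance UUID plus device name; a hedged illustration of that resource-id scheme (the helper and device names are hypothetical, and the exact upstream convention is an assumption):

    def device_resource_id(instance_uuid: str, device: str) -> str:
        # Assumed convention for disk.device.* meters: one sample per attached
        # disk, keyed by "<instance-uuid>-<device-name>".
        return f"{instance_uuid}-{device}"

    for dev in ("vda", "sda"):
        print(device_resource_id("292fd084-0808-4a80-adc1-6ab1f28e188a", dev))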
Dec 05 02:22:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:22:38.621149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.622 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:22:38.623485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:22:38.625362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
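network.outgoing.bytes.rate is skipped rather than polled: the manager reuses the discovery result already cached in this cycle, and when a pollster has no resources left that it has not already handled, it logs "no new resources found this cycle" and moves on. A self-contained sketch of that guard, with illustrative names (the exact skip condition in manager.py is an assumption here):

    def maybe_poll(pollster_name, discovered, handled_this_cycle):
        # Skip the pollster when discovery yields nothing it hasn't handled yet.
        new = [r for r in discovered if r not in handled_this_cycle]
        if not new:
            print(f"Skip pollster {pollster_name}, "
                  "no new resources found this cycle")
            return []
        handled_this_cycle.update(new)
        return new

    maybe_poll("network.outgoing.bytes.rate", {"inst-a"}, {"inst-a"})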
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:22:38.627775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 42.26953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
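memory.usage is reported in MiB per instance (~42.5 and ~42.3 MiB here). The compute agent's libvirt inspector derives it from the domain's memory statistics; the sketch below reads the same counters directly with libvirt-python. It assumes qemu:///system is reachable and that the balloon driver exposes the 'available'/'unused' counters (all values in KiB); the derivation shown is one plausible formula, not necessarily the inspector's exact code path.

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        stats = dom.memoryStats()  # KiB counters: 'available', 'unused', 'rss', ...
        if 'available' in stats and 'unused' in stats:
            usage_mib = (stats['available'] - stats['unused']) / 1024.0
        else:
            usage_mib = stats.get('rss', 0) / 1024.0  # fallback: resident set size
        print(dom.UUIDString(), 'memory.usage ~', round(usage_mib, 2), 'MiB')
    conn.close()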
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:22:38.629701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:22:38.632422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.634 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:22:38.635275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:22:38.637124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 337740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 335240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
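The cpu meter is cumulative guest CPU time in nanoseconds, not a percentage: 337740000000 ns is roughly 337.7 s of CPU time for the first instance, and rates are derived downstream. libvirt exposes the counter in the domain info tuple; a direct read, assuming qemu:///system is reachable:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains():
        # dom.info() -> [state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs]
        state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
        print(dom.UUIDString(), 'cpu volume:', cpu_time_ns,
              f'ns (~{cpu_time_ns / 1e9:.1f} s on {vcpus} vCPU)')
    conn.close()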
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:22:38.639812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.642 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:22:38.641601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
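Every coordination check in this cycle reports group name [None] and hashrings [None]: no polling source on this node requires workload partitioning, so the agent polls all of its local instances itself. When a source does define a coordination group, agents joined through tooz split resources over a consistent hash ring; a minimal sketch of that partitioning, with agent names modeled on this log and tooz's HashRing assumed as the mechanism:

    from tooz import hashring

    agents = ['compute-0', 'compute-1']
    ring = hashring.HashRing(agents)
    instances = ['292fd084-0808-4a80-adc1-6ab1f28e188a',
                 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7']
    # Each agent keeps only the instances the ring maps to it.
    mine = [i for i in instances
            if 'compute-0' in ring.get_nodes(i.encode(), replicas=1)]
    print('compute-0 polls:', mine)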
Dec 05 02:22:40 compute-0 ceph-mon[192914]: pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:42 compute-0 ceph-mon[192914]: pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:42 compute-0 nova_compute[349548]: 2025-12-05 02:22:42.632 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:43 compute-0 nova_compute[349548]: 2025-12-05 02:22:43.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:43 compute-0 podman[465295]: 2025-12-05 02:22:43.713150756 +0000 UTC m=+0.116872294 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:22:43 compute-0 podman[465296]: 2025-12-05 02:22:43.724405712 +0000 UTC m=+0.129063986 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:22:44 compute-0 ceph-mon[192914]: pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:22:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:22:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:22:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:22:46 compute-0 ceph-mon[192914]: pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:22:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.406 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:22:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:47 compute-0 nova_compute[349548]: 2025-12-05 02:22:47.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:48 compute-0 ceph-mon[192914]: pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:48 compute-0 nova_compute[349548]: 2025-12-05 02:22:48.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:48 compute-0 podman[465339]: 2025-12-05 02:22:48.752358271 +0000 UTC m=+0.147365520 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vcs-type=git)
Dec 05 02:22:48 compute-0 podman[465338]: 2025-12-05 02:22:48.755421797 +0000 UTC m=+0.160057057 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:22:48 compute-0 podman[465340]: 2025-12-05 02:22:48.756760774 +0000 UTC m=+0.146615139 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.293 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.313 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.313 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:22:50 compute-0 nova_compute[349548]: 2025-12-05 02:22:50.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:50 compute-0 nova_compute[349548]: 2025-12-05 02:22:50.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:50 compute-0 ceph-mon[192914]: pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.101 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.102 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:22:52 compute-0 ceph-mon[192914]: pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:22:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302821614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:22:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.637 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.639 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.790 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.791 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:22:53 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3302821614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:22:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.478 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.482 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.483 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.484 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.533 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.659 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:22:53 compute-0 sudo[465418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:53 compute-0 sudo[465418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:53 compute-0 sudo[465418]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:53 compute-0 sudo[465462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:22:53 compute-0 sudo[465462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:53 compute-0 sudo[465462]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:54 compute-0 sudo[465487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:54 compute-0 sudo[465487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:54 compute-0 sudo[465487]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546819003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.166 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:22:54 compute-0 ceph-mon[192914]: pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/546819003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.181 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.201 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.203 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:22:54 compute-0 sudo[465512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:22:54 compute-0 sudo[465512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:54 compute-0 sudo[465512]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3bfa0005-1e4c-4b8f-b5dc-e2a6e9cec7b8 does not exist
Dec 05 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0347786b-636d-47b7-8583-6108b094601e does not exist
Dec 05 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d4acefaa-04a8-473a-a4c5-344a872483fa does not exist
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:22:55 compute-0 sudo[465569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:55 compute-0 sudo[465569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:55 compute-0 sudo[465569]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:55 compute-0 ceph-mon[192914]: pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:22:55 compute-0 nova_compute[349548]: 2025-12-05 02:22:55.204 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:55 compute-0 sudo[465594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:22:55 compute-0 sudo[465594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:55 compute-0 sudo[465594]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:55 compute-0 sudo[465619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:55 compute-0 sudo[465619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:55 compute-0 sudo[465619]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:55 compute-0 sudo[465644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:22:55 compute-0 sudo[465644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:56 compute-0 nova_compute[349548]: 2025-12-05 02:22:56.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.193210062 +0000 UTC m=+0.089364431 container create 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.223 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.224 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.224 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.159417843 +0000 UTC m=+0.055572262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:22:56 compute-0 systemd[1]: Started libpod-conmon-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope.
Dec 05 02:22:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.362661082 +0000 UTC m=+0.258815471 container init 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.38147538 +0000 UTC m=+0.277629769 container start 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.38965295 +0000 UTC m=+0.285807369 container attach 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:22:56 compute-0 lucid_allen[465724]: 167 167
Dec 05 02:22:56 compute-0 systemd[1]: libpod-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope: Deactivated successfully.
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.395672409 +0000 UTC m=+0.291826798 container died 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:22:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f141b35b3ce0b3d3a1704558fbe4eaacc84c15e122b04ce4c1b3ab1c71f3ff5-merged.mount: Deactivated successfully.
Dec 05 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.495594795 +0000 UTC m=+0.391749174 container remove 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:22:56 compute-0 systemd[1]: libpod-conmon-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope: Deactivated successfully.
Dec 05 02:22:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.763210051 +0000 UTC m=+0.062850726 container create 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.742264893 +0000 UTC m=+0.041905598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:22:56 compute-0 systemd[1]: Started libpod-conmon-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope.
Dec 05 02:22:56 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.933329269 +0000 UTC m=+0.232970044 container init 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.971608564 +0000 UTC m=+0.271249259 container start 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.977692645 +0000 UTC m=+0.277333370 container attach 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:22:57 compute-0 nova_compute[349548]: 2025-12-05 02:22:57.639 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:57 compute-0 ceph-mon[192914]: pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:58 compute-0 nova_compute[349548]: 2025-12-05 02:22:58.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:22:58 compute-0 musing_bell[465762]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:22:58 compute-0 musing_bell[465762]: --> relative data size: 1.0
Dec 05 02:22:58 compute-0 musing_bell[465762]: --> All data devices are unavailable
Dec 05 02:22:58 compute-0 systemd[1]: libpod-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Deactivated successfully.
Dec 05 02:22:58 compute-0 podman[465746]: 2025-12-05 02:22:58.128502827 +0000 UTC m=+1.428143522 container died 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:22:58 compute-0 systemd[1]: libpod-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Consumed 1.074s CPU time.
Dec 05 02:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4-merged.mount: Deactivated successfully.
Dec 05 02:22:58 compute-0 podman[465746]: 2025-12-05 02:22:58.2016042 +0000 UTC m=+1.501244885 container remove 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 02:22:58 compute-0 systemd[1]: libpod-conmon-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Deactivated successfully.
Dec 05 02:22:58 compute-0 sudo[465644]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:22:58 compute-0 sudo[465803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:58 compute-0 sudo[465803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:58 compute-0 sudo[465803]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:58 compute-0 sudo[465828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:22:58 compute-0 nova_compute[349548]: 2025-12-05 02:22:58.537 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:22:58 compute-0 sudo[465828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:58 compute-0 sudo[465828]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:58 compute-0 sudo[465853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:22:58 compute-0 sudo[465853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:58 compute-0 sudo[465853]: pam_unix(sudo:session): session closed for user root
Dec 05 02:22:58 compute-0 sudo[465878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:22:58 compute-0 sudo[465878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.366554048 +0000 UTC m=+0.076752367 container create aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.335011552 +0000 UTC m=+0.045209961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:22:59 compute-0 systemd[1]: Started libpod-conmon-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope.
Dec 05 02:22:59 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.505550461 +0000 UTC m=+0.215748840 container init aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.521367146 +0000 UTC m=+0.231565505 container start aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.528498236 +0000 UTC m=+0.238696595 container attach aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:22:59 compute-0 gallant_kowalevski[465956]: 167 167
Dec 05 02:22:59 compute-0 systemd[1]: libpod-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope: Deactivated successfully.
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.533133636 +0000 UTC m=+0.243331995 container died aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 02:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-064d9b997fa7f719f664d6bd6c1afcd6640b12a6b593d8a8956a9b7d9d2494be-merged.mount: Deactivated successfully.
Dec 05 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.619548013 +0000 UTC m=+0.329746342 container remove aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:22:59 compute-0 systemd[1]: libpod-conmon-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope: Deactivated successfully.
Dec 05 02:22:59 compute-0 ceph-mon[192914]: pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:22:59 compute-0 podman[158197]: time="2025-12-05T02:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8660 "" "Go-http-client/1.1"
Dec 05 02:22:59 compute-0 podman[465979]: 2025-12-05 02:22:59.907255904 +0000 UTC m=+0.081373787 container create d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:22:59 compute-0 podman[465979]: 2025-12-05 02:22:59.873861376 +0000 UTC m=+0.047979319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:23:00 compute-0 systemd[1]: Started libpod-conmon-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope.
Dec 05 02:23:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.075433787 +0000 UTC m=+0.249551720 container init d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.096485008 +0000 UTC m=+0.270602861 container start d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.104639477 +0000 UTC m=+0.278757410 container attach d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:23:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:00 compute-0 elastic_goodall[465996]: {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     "0": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "devices": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "/dev/loop3"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             ],
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_name": "ceph_lv0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_size": "21470642176",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "name": "ceph_lv0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "tags": {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_name": "ceph",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.crush_device_class": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.encrypted": "0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_id": "0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.vdo": "0"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             },
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "vg_name": "ceph_vg0"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         }
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     ],
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     "1": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "devices": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "/dev/loop4"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             ],
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_name": "ceph_lv1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_size": "21470642176",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "name": "ceph_lv1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "tags": {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_name": "ceph",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.crush_device_class": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.encrypted": "0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_id": "1",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.vdo": "0"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             },
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "vg_name": "ceph_vg1"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         }
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     ],
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     "2": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "devices": [
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "/dev/loop5"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             ],
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_name": "ceph_lv2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_size": "21470642176",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "name": "ceph_lv2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "tags": {
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.cluster_name": "ceph",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.crush_device_class": "",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.encrypted": "0",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osd_id": "2",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:                 "ceph.vdo": "0"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             },
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "type": "block",
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:             "vg_name": "ceph_vg2"
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:         }
Dec 05 02:23:00 compute-0 elastic_goodall[465996]:     ]
Dec 05 02:23:00 compute-0 elastic_goodall[465996]: }
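The JSON block the elastic_goodall container just printed is `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing that OSD, with the ceph.* LV tags repeated in parsed form under "tags". A minimal sketch of pulling out the useful columns, assuming the block has been captured into a string named lvm_list_output (a name invented here):

    import json

    lvm = json.loads(lvm_list_output)  # {"0": [ {...lv...} ], "1": [...], ...}
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 2**30  # lv_size is a string of bytes
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], f"{size_gib:.0f} GiB")
    # 0 /dev/ceph_vg0/ceph_lv0 8c4de221-4fda-4bb1-b794-fc4329742186 20 GiB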
Dec 05 02:23:01 compute-0 systemd[1]: libpod-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope: Deactivated successfully.
Dec 05 02:23:01 compute-0 podman[466005]: 2025-12-05 02:23:01.086077832 +0000 UTC m=+0.063245807 container died d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c-merged.mount: Deactivated successfully.
Dec 05 02:23:01 compute-0 podman[466005]: 2025-12-05 02:23:01.157769235 +0000 UTC m=+0.134937170 container remove d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:23:01 compute-0 systemd[1]: libpod-conmon-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope: Deactivated successfully.
Dec 05 02:23:01 compute-0 podman[466006]: 2025-12-05 02:23:01.181743839 +0000 UTC m=+0.126256737 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible)
Dec 05 02:23:01 compute-0 podman[466012]: 2025-12-05 02:23:01.192050518 +0000 UTC m=+0.134254581 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:23:01 compute-0 sudo[465878]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:01 compute-0 podman[466018]: 2025-12-05 02:23:01.204263381 +0000 UTC m=+0.127760289 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Dec 05 02:23:01 compute-0 podman[466014]: 2025-12-05 02:23:01.261092107 +0000 UTC m=+0.196072658 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
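The health_status entries above are podman's periodic healthcheck events for the multipathd, node_exporter, openstack_network_exporter and ovn_controller containers; each run executes the healthcheck.test command from the container's config_data and reports the status plus the current failing streak. The same stream can be followed outside journald; a sketch, assuming a recent podman whose JSON events carry a HealthStatus field:

    import json
    import subprocess

    # Tail health_status events as podman emits them (Ctrl-C to stop).
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=health_status", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # "HealthStatus" is present in recent podman releases; hedged with .get().
        print(ev["Name"], ev.get("HealthStatus", "n/a"))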
Dec 05 02:23:01 compute-0 sudo[466096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:23:01 compute-0 sudo[466096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:01 compute-0 sudo[466096]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:01 compute-0 sudo[466127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:23:01 compute-0 sudo[466127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:01 compute-0 sudo[466127]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
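These four exporter errors are all of one kind: openstack_network_exporter probes daemons over their ovs-appctl control sockets, and on a compute node running only ovn-controller plus a kernel-datapath Open vSwitch there is no ovsdb-server socket where it looks, no ovn-northd at all, and no userspace (dpif-netdev) datapath to answer the PMD queries. The control sockets are pidfile-named unix sockets, so their absence is easy to confirm; a sketch using the usual default run directories (an assumption about this host's layout):

    import glob
    # appctl targets live at <rundir>/<daemon>.<pid>.ctl; the patterns below
    # are the common defaults, not paths taken from this log.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket found")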
Dec 05 02:23:01 compute-0 sudo[466152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:23:01 compute-0 sudo[466152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:01 compute-0 sudo[466152]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:01 compute-0 sudo[466177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:23:01 compute-0 sudo[466177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:01 compute-0 ceph-mon[192914]: pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.198523836 +0000 UTC m=+0.088163827 container create ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.159573422 +0000 UTC m=+0.049213413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:23:02 compute-0 systemd[1]: Started libpod-conmon-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope.
Dec 05 02:23:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.350390791 +0000 UTC m=+0.240030842 container init ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.360734382 +0000 UTC m=+0.250374343 container start ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.367821191 +0000 UTC m=+0.257461252 container attach ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:23:02 compute-0 priceless_einstein[466255]: 167 167
Dec 05 02:23:02 compute-0 systemd[1]: libpod-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope: Deactivated successfully.
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.370339581 +0000 UTC m=+0.259979572 container died ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5645ff46ac9214f3262221e33abdf7a9621eb3afaab316764f7ae6e81d81e24f-merged.mount: Deactivated successfully.
Dec 05 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.436008776 +0000 UTC m=+0.325648737 container remove ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 02:23:02 compute-0 systemd[1]: libpod-conmon-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope: Deactivated successfully.
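The short-lived priceless_einstein container exists only to print the two numbers on its single output line, 167 167: the uid and gid of the ceph user inside the image, which cephadm discovers before running ceph-volume so that files written on the host get the right ownership. A rough reproduction, guessing (this part is an assumption) that the probe amounts to stat'ing a ceph-owned path in the image:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected "167 167" for upstream Ceph images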
Dec 05 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.622587736 +0000 UTC m=+0.055539561 container create 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:23:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:02 compute-0 nova_compute[349548]: 2025-12-05 02:23:02.642 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:02 compute-0 systemd[1]: Started libpod-conmon-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope.
Dec 05 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.597267405 +0000 UTC m=+0.030219290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:23:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.771809786 +0000 UTC m=+0.204761621 container init 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.794405671 +0000 UTC m=+0.227357496 container start 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.799313319 +0000 UTC m=+0.232265154 container attach 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:23:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
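The recurring _set_new_cache_sizes line is the monitor's memory autotuning dividing its cache budget (driven by mon_memory_target) between incremental-osdmap, full-osdmap and key-value caches; the figures are bytes. A unit-conversion check on the logged values (pure arithmetic):

    MiB = 2**20
    for name, b in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {b / MiB:.0f} MiB")
    # cache_size: 973 MiB, inc_alloc: 332 MiB, full_alloc: 332 MiB, kv_alloc: 304 MiB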
Dec 05 02:23:03 compute-0 nova_compute[349548]: 2025-12-05 02:23:03.541 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:03 compute-0 ceph-mon[192914]: pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:03 compute-0 wizardly_booth[466293]: {
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_id": 0,
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "type": "bluestore"
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     },
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_id": 1,
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "type": "bluestore"
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     },
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_id": 2,
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:         "type": "bluestore"
Dec 05 02:23:03 compute-0 wizardly_booth[466293]:     }
Dec 05 02:23:03 compute-0 wizardly_booth[466293]: }
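cephadm (the sudo'd ceph-volume command a few lines up) asked for `raw list --format json`, and wizardly_booth printed the result: a map keyed by osd_uuid giving the device-mapper path and bluestore type for each of the three OSDs, consistent with the LV tags listed earlier. A sketch of cross-checking the two views, with raw_list_output and lvm_list_output standing in for the captured JSON (both names invented here):

    import json

    raw = json.loads(raw_list_output)  # {"<osd_uuid>": {"osd_id": ..., "device": ...}}
    lvm = json.loads(lvm_list_output)  # {"<osd_id>": [{"tags": {...}}]} from earlier
    for osd_uuid, meta in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        tags = lvm[str(meta["osd_id"])][0]["tags"]
        assert tags["ceph.osd_fsid"] == osd_uuid, (meta["osd_id"], osd_uuid)
        print(meta["osd_id"], meta["device"], meta["type"])
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore, and so on for OSDs 1 and 2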
Dec 05 02:23:03 compute-0 systemd[1]: libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Deactivated successfully.
Dec 05 02:23:03 compute-0 systemd[1]: libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Consumed 1.168s CPU time.
Dec 05 02:23:03 compute-0 conmon[466293]: conmon 1ccd4501e28ba3171708 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope/container/memory.events
Dec 05 02:23:03 compute-0 podman[466277]: 2025-12-05 02:23:03.972081437 +0000 UTC m=+1.405033272 container died 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29-merged.mount: Deactivated successfully.
Dec 05 02:23:04 compute-0 podman[466277]: 2025-12-05 02:23:04.055535641 +0000 UTC m=+1.488487516 container remove 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:23:04 compute-0 systemd[1]: libpod-conmon-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Deactivated successfully.
Dec 05 02:23:04 compute-0 sudo[466177]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:23:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:23:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:23:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:23:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c302bd2-db7b-4c21-97cb-ccf9f4d9c2ea does not exist
Dec 05 02:23:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9c1246e1-4abd-4bcc-883a-601574855148 does not exist
Dec 05 02:23:04 compute-0 sudo[466338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:23:04 compute-0 sudo[466338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:04 compute-0 sudo[466338]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:04 compute-0 sudo[466363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:23:04 compute-0 sudo[466363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:23:04 compute-0 sudo[466363]: pam_unix(sudo:session): session closed for user root
Dec 05 02:23:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:23:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:23:06 compute-0 ceph-mon[192914]: pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:07 compute-0 sshd-session[466388]: Connection closed by 176.32.195.85 port 40868
Dec 05 02:23:07 compute-0 sshd-session[466389]: Unable to negotiate with 176.32.195.85 port 40882: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
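These two sshd entries are an outside host (176.32.195.85) probing with only legacy ssh-rsa and ssh-dss host key algorithms, which this system's OpenSSH build and crypto policy no longer offer, so negotiation fails before authentication even starts. To list the key types the local OpenSSH build supports at all (what the server actually offers is further narrowed by policy and sshd_config), ssh's query mode can be used; a small wrapper:

    import subprocess
    # `ssh -Q key` prints the key algorithms this OpenSSH build knows about.
    keys = subprocess.run(["ssh", "-Q", "key"], capture_output=True, text=True).stdout
    print(keys)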
Dec 05 02:23:07 compute-0 nova_compute[349548]: 2025-12-05 02:23:07.645 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:08 compute-0 ceph-mon[192914]: pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:08 compute-0 nova_compute[349548]: 2025-12-05 02:23:08.545 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:10 compute-0 ceph-mon[192914]: pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:11 compute-0 ceph-mon[192914]: pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:12 compute-0 nova_compute[349548]: 2025-12-05 02:23:12.650 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
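The steady "[POLLIN] on fd 25 __log_wakeup" debug lines are nova-compute's ovsdbapp connection reporting why the python-ovs Poller woke up: fd 25 (presumably its OVSDB socket) became readable. A minimal standalone sketch of the same wait-and-wake pattern, using the ovs package the logged traceback path points at:

    import select
    import socket

    from ovs import poller  # python-ovs

    a, b = socket.socketpair()
    p = poller.Poller()
    p.fd_wait(a.fileno(), select.POLLIN)  # wake when a is readable -> "[POLLIN] on fd N"
    p.timer_wait(1000)                    # ... or after 1000 ms, whichever comes first
    b.send(b"x")                          # make the fd readable
    p.block()                             # returns once one of the waits fires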
Dec 05 02:23:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:13 compute-0 nova_compute[349548]: 2025-12-05 02:23:13.548 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:13 compute-0 ceph-mon[192914]: pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:14 compute-0 podman[466392]: 2025-12-05 02:23:14.745016381 +0000 UTC m=+0.146669310 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:23:14 compute-0 podman[466391]: 2025-12-05 02:23:14.763025507 +0000 UTC m=+0.164244524 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:23:15 compute-0 ceph-mon[192914]: pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:23:16
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'images', 'backups', '.rgw.root']
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
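One balancer round, summarized: in upmap mode it computes pg-upmap-items entries per pool, "max misplaced 0.050000" caps churn so that no more than 5% of PGs may be misplaced at a time, and "prepared 0/10 changes" means it produced none of the up-to-ten optimizations it is allowed per round, consistent with all 321 PGs already being active+clean. The churn cap in concrete terms:

    # With 321 PGs and max_misplaced = 0.05, the balancer keeps at most
    # about 16 PGs in a misplaced state at any one time.
    print(int(321 * 0.05))  # 16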
Dec 05 02:23:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:17 compute-0 nova_compute[349548]: 2025-12-05 02:23:17.652 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:17 compute-0 ceph-mon[192914]: pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:23:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:18 compute-0 nova_compute[349548]: 2025-12-05 02:23:18.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:19 compute-0 podman[466431]: 2025-12-05 02:23:19.729647737 +0000 UTC m=+0.131617348 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:23:19 compute-0 podman[466433]: 2025-12-05 02:23:19.735298025 +0000 UTC m=+0.121636277 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 05 02:23:19 compute-0 podman[466432]: 2025-12-05 02:23:19.737828116 +0000 UTC m=+0.135417644 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:23:19 compute-0 ceph-mon[192914]: pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:21 compute-0 ceph-mon[192914]: pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:22 compute-0 nova_compute[349548]: 2025-12-05 02:23:22.653 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:23 compute-0 nova_compute[349548]: 2025-12-05 02:23:23.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:23 compute-0 ceph-mon[192914]: pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:25 compute-0 ceph-mon[192914]: pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
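
The autoscaler lines above repeat the same computation for every pool: the pool's share of raw capacity, times its bias, times an overall PG budget, then rounding to a power of two. A minimal sketch that reproduces the logged numbers, assuming a budget of mon_target_pg_per_osd=100 across 3 OSDs (inferred, not logged: 0.0015211533... x 300 equals the 'vms' target exactly) and taking each pool's floor (1 for .mgr, 16 for the cephfs metadata pool, 32 elsewhere) from the log rather than deriving it:

    # Hypothetical re-derivation of the pg_autoscaler targets above; the
    # 100-PGs-per-OSD budget and the 3-OSD count are inferred assumptions.
    def pg_target(capacity_ratio, bias, pg_per_osd=100, num_osds=3):
        return capacity_ratio * bias * pg_per_osd * num_osds

    def quantize(target, floor):
        # Round up to the next power of two, never below the pool's floor.
        n = floor
        while n < target:
            n *= 2
        return n

    for name, ratio, bias, floor in [
        (".mgr",               7.185749983720779e-06, 1.0,  1),
        ("vms",                0.0015211533217750863, 1.0, 32),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 16),
    ]:
        t = pg_target(ratio, bias)
        print(f"{name}: pg target {t} quantized to {quantize(t, floor)}")

Run against the logged ratios, this prints the same targets (0.0021557249951162337, 0.4563459965325259, 0.0006104707950771635) and the same quantized values (1, 32, 16) as the lines above.
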
Dec 05 02:23:27 compute-0 nova_compute[349548]: 2025-12-05 02:23:27.656 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:27 compute-0 ceph-mon[192914]: pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:28 compute-0 nova_compute[349548]: 2025-12-05 02:23:28.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:29 compute-0 podman[158197]: time="2025-12-05T02:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
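
The two GET lines above are the podman system service answering libpod REST calls over its UNIX socket; the exporter's CONTAINER_HOST later in this log points at unix:///run/podman/podman.sock, which is the path assumed here. A minimal sketch of issuing the first request from the Python standard library; the Names/State field names are assumed from the libpod API rather than shown in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a UNIX domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
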
Dec 05 02:23:29 compute-0 ceph-mon[192914]: pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:23:31 compute-0 podman[466485]: 2025-12-05 02:23:31.691736197 +0000 UTC m=+0.100365590 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:23:31 compute-0 podman[466486]: 2025-12-05 02:23:31.713108147 +0000 UTC m=+0.104366292 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:23:31 compute-0 podman[466493]: 2025-12-05 02:23:31.762445623 +0000 UTC m=+0.133573333 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6)
Dec 05 02:23:31 compute-0 podman[466488]: 2025-12-05 02:23:31.783847254 +0000 UTC m=+0.166625041 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:23:31 compute-0 ceph-mon[192914]: pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:32 compute-0 nova_compute[349548]: 2025-12-05 02:23:32.658 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:33 compute-0 nova_compute[349548]: 2025-12-05 02:23:33.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:33 compute-0 ceph-mon[192914]: pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:35 compute-0 ceph-mon[192914]: pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:37 compute-0 nova_compute[349548]: 2025-12-05 02:23:37.662 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:37 compute-0 ceph-mon[192914]: pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.901334) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417901371, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 251, "total_data_size": 2789867, "memory_usage": 2834800, "flush_reason": "Manual Compaction"}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417920351, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2730021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44577, "largest_seqno": 46275, "table_properties": {"data_size": 2722127, "index_size": 4773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15848, "raw_average_key_size": 19, "raw_value_size": 2706500, "raw_average_value_size": 3404, "num_data_blocks": 213, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901234, "oldest_key_time": 1764901234, "file_creation_time": 1764901417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 19086 microseconds, and 7999 cpu microseconds.
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.920414) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2730021 bytes OK
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.920442) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923407) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923429) EVENT_LOG_v1 {"time_micros": 1764901417923423, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2782595, prev total WAL file size 2782595, number of live WAL files 2.
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.924757) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2666KB)], [107(6617KB)]
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417924797, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9506313, "oldest_snapshot_seqno": -1}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6078 keys, 7758851 bytes, temperature: kUnknown
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417984607, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7758851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7721165, "index_size": 21384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 158064, "raw_average_key_size": 26, "raw_value_size": 7614122, "raw_average_value_size": 1252, "num_data_blocks": 847, "num_entries": 6078, "num_filter_entries": 6078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.985089) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7758851 bytes
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.987978) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.7 rd, 129.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 6.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 6592, records dropped: 514 output_compression: NoCompression
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.988010) EVENT_LOG_v1 {"time_micros": 1764901417987995, "job": 64, "event": "compaction_finished", "compaction_time_micros": 59903, "compaction_time_cpu_micros": 31913, "output_level": 6, "num_output_files": 1, "total_output_size": 7758851, "num_input_records": 6592, "num_output_records": 6078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417989262, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417992564, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.924357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
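
The amplification figures in the JOB 64 summary above can be re-derived from the byte counts logged for the same job. A quick check, assuming RocksDB's reported write-amplify is output bytes over L0 input bytes, and read-write-amplify is (all input + output) over L0 input:

    # Byte counts from the JOB 64 lines above: table #109 (the L0 input),
    # the job's total input_data_size, and table #110 (the L6 output).
    l0_in = 2730021
    l6_in = 9506313 - l0_in
    out = 7758851

    print(round(out / l0_in, 1))                    # 2.8 -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 6.3 -> read-write-amplify
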
Dec 05 02:23:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:38 compute-0 nova_compute[349548]: 2025-12-05 02:23:38.566 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:39 compute-0 ceph-mon[192914]: pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:41 compute-0 ceph-mon[192914]: pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:42 compute-0 nova_compute[349548]: 2025-12-05 02:23:42.666 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:43 compute-0 nova_compute[349548]: 2025-12-05 02:23:43.570 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:43 compute-0 ceph-mon[192914]: pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:23:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:23:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:23:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:23:45 compute-0 podman[466566]: 2025-12-05 02:23:45.722990439 +0000 UTC m=+0.121482472 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:23:45 compute-0 podman[466565]: 2025-12-05 02:23:45.751689135 +0000 UTC m=+0.156958668 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:23:45 compute-0 ceph-mon[192914]: pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:23:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
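
The handle_command/audit pairs above are client.openstack polling cluster capacity and pool quotas. A minimal sketch of the client side with the librados Python binding; the conffile path matches the one nova_compute passes to `ceph df` later in this log, everything else is an assumption:

    import json
    import rados

    # Issue the same {"prefix": "df"} mon command that the audit lines show.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    assert ret == 0, errs
    print(json.loads(outbuf)["stats"])  # cluster-wide totals, as in pgmap
    cluster.shutdown()
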
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:23:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.439 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.440 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.440 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:47 compute-0 ceph-mon[192914]: pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.574 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.680 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.695 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.696 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 05 02:23:50 compute-0 ceph-mon[192914]: pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:50 compute-0 podman[466609]: 2025-12-05 02:23:50.712527634 +0000 UTC m=+0.111173903 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543)
Dec 05 02:23:50 compute-0 podman[466610]: 2025-12-05 02:23:50.72947013 +0000 UTC m=+0.132180983 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 02:23:50 compute-0 podman[466608]: 2025-12-05 02:23:50.748988558 +0000 UTC m=+0.153489171 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:23:51 compute-0 nova_compute[349548]: 2025-12-05 02:23:51.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:52 compute-0 ceph-mon[192914]: pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
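
The burst of "Running periodic task ComputeManager._*" lines above comes from oslo.service's periodic-task machinery: decorated methods are collected at class-creation time and dispatched through run_periodic_tasks(), which also emits the DEBUG line for each run. A toy sketch of the pattern (not nova's actual manager), assuming only the public oslo_service API:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        """Minimal manager in the style of nova's ComputeManager."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            # The dispatcher logs the task name, as seen above, then
            # calls this body on its own schedule.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)
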
Dec 05 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.671 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.115 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.117 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.117 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:23:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:23:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3548959651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.667 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.792 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.793 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.802 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.803 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:23:54 compute-0 ceph-mon[192914]: pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3548959651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.426 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.551 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.551 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.570 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.594 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.595 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.610 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.653 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:23:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.746 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:23:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:23:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739333258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.275 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.286 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.304 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.307 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:23:56 compute-0 ceph-mon[192914]: pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3739333258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.225 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.226 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:23:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.305 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.342 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.342 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.674 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:58 compute-0 ceph-mon[192914]: pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:23:58 compute-0 nova_compute[349548]: 2025-12-05 02:23:58.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:23:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:23:59 compute-0 nova_compute[349548]: 2025-12-05 02:23:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:23:59 compute-0 podman[158197]: time="2025-12-05T02:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8663 "" "Go-http-client/1.1"
Dec 05 02:24:00 compute-0 ceph-mon[192914]: pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:24:02 compute-0 ceph-mon[192914]: pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:02 compute-0 nova_compute[349548]: 2025-12-05 02:24:02.676 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:02 compute-0 podman[466712]: 2025-12-05 02:24:02.717664912 +0000 UTC m=+0.119261371 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:24:02 compute-0 podman[466719]: 2025-12-05 02:24:02.727258951 +0000 UTC m=+0.114232389 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 05 02:24:02 compute-0 podman[466711]: 2025-12-05 02:24:02.733785444 +0000 UTC m=+0.149224802 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:24:02 compute-0 podman[466713]: 2025-12-05 02:24:02.769292552 +0000 UTC m=+0.162056263 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:24:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:03 compute-0 nova_compute[349548]: 2025-12-05 02:24:03.584 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:04 compute-0 ceph-mon[192914]: pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:04 compute-0 sudo[466793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:04 compute-0 sudo[466793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:04 compute-0 sudo[466793]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:04 compute-0 sudo[466818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:24:04 compute-0 sudo[466818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:04 compute-0 sudo[466818]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:04 compute-0 sudo[466843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:04 compute-0 sudo[466843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:04 compute-0 sudo[466843]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:04 compute-0 sudo[466868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:24:04 compute-0 sudo[466868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:05 compute-0 sudo[466868]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d088cadc-97a8-4b1a-bf55-6536012cd2bb does not exist
Dec 05 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev faacd52a-bced-489b-bded-5bc36cd515c9 does not exist
Dec 05 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a5acf6f2-7b27-4d53-8f15-5b733cde70c4 does not exist
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:24:05 compute-0 sudo[466922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:05 compute-0 sudo[466922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:05 compute-0 sudo[466922]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:06 compute-0 sudo[466947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:24:06 compute-0 sudo[466947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:06 compute-0 sudo[466947]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:06 compute-0 ceph-mon[192914]: pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:24:06 compute-0 sudo[466972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:06 compute-0 sudo[466972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:06 compute-0 sudo[466972]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:06 compute-0 sudo[466997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:24:06 compute-0 sudo[466997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:06 compute-0 podman[467060]: 2025-12-05 02:24:06.885230138 +0000 UTC m=+0.079075402 container create a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:24:06 compute-0 systemd[1]: Started libpod-conmon-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope.
Dec 05 02:24:06 compute-0 podman[467060]: 2025-12-05 02:24:06.853679782 +0000 UTC m=+0.047525106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.017335398 +0000 UTC m=+0.211180682 container init a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.034164581 +0000 UTC m=+0.228009865 container start a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 05 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.041372153 +0000 UTC m=+0.235217437 container attach a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:24:07 compute-0 determined_chatterjee[467076]: 167 167
Dec 05 02:24:07 compute-0 systemd[1]: libpod-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope: Deactivated successfully.
Dec 05 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.04516654 +0000 UTC m=+0.239011824 container died a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-64eaf4cfecac5b5d7f99519898db350aaa0d864b706b84bbbb505a98632e76f4-merged.mount: Deactivated successfully.
Dec 05 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.122689826 +0000 UTC m=+0.316535080 container remove a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:24:07 compute-0 systemd[1]: libpod-conmon-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope: Deactivated successfully.
Dec 05 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.397834044 +0000 UTC m=+0.076459639 container create f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.366082552 +0000 UTC m=+0.044708197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:07 compute-0 systemd[1]: Started libpod-conmon-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope.
Dec 05 02:24:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.57571013 +0000 UTC m=+0.254335705 container init f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.597191443 +0000 UTC m=+0.275817008 container start f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.602019919 +0000 UTC m=+0.280645504 container attach f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:24:07 compute-0 nova_compute[349548]: 2025-12-05 02:24:07.681 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:08 compute-0 ceph-mon[192914]: pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:08 compute-0 nova_compute[349548]: 2025-12-05 02:24:08.589 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:08 compute-0 keen_curran[467114]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:24:08 compute-0 keen_curran[467114]: --> relative data size: 1.0
Dec 05 02:24:08 compute-0 keen_curran[467114]: --> All data devices are unavailable
Dec 05 02:24:08 compute-0 systemd[1]: libpod-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Deactivated successfully.
Dec 05 02:24:08 compute-0 systemd[1]: libpod-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Consumed 1.260s CPU time.
Dec 05 02:24:08 compute-0 podman[467099]: 2025-12-05 02:24:08.938059132 +0000 UTC m=+1.616684747 container died f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4-merged.mount: Deactivated successfully.
Dec 05 02:24:09 compute-0 podman[467099]: 2025-12-05 02:24:09.045545501 +0000 UTC m=+1.724171076 container remove f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:24:09 compute-0 systemd[1]: libpod-conmon-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Deactivated successfully.
Dec 05 02:24:09 compute-0 sudo[466997]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:09 compute-0 sudo[467156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:09 compute-0 sudo[467156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:09 compute-0 sudo[467156]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:09 compute-0 sudo[467181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:24:09 compute-0 sudo[467181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:09 compute-0 sudo[467181]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:09 compute-0 sudo[467206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:09 compute-0 sudo[467206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:09 compute-0 sudo[467206]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:09 compute-0 sudo[467231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:24:09 compute-0 sudo[467231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:10 compute-0 ceph-mon[192914]: pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.200700595 +0000 UTC m=+0.102413577 container create c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.151495803 +0000 UTC m=+0.053208825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:10 compute-0 systemd[1]: Started libpod-conmon-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope.
Dec 05 02:24:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.336223431 +0000 UTC m=+0.237936453 container init c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.355353449 +0000 UTC m=+0.257066431 container start c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:24:10 compute-0 serene_almeida[467307]: 167 167
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.362837699 +0000 UTC m=+0.264550681 container attach c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.366413639 +0000 UTC m=+0.268126621 container died c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:24:10 compute-0 systemd[1]: libpod-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope: Deactivated successfully.
Dec 05 02:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0c2d72d42a611c4f808ff2f778a22c970c538521c2db42247bd567546aca2e-merged.mount: Deactivated successfully.
Dec 05 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.449662467 +0000 UTC m=+0.351375449 container remove c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:24:10 compute-0 systemd[1]: libpod-conmon-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope: Deactivated successfully.
Dec 05 02:24:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.767204695 +0000 UTC m=+0.099736201 container create ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 05 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.729876986 +0000 UTC m=+0.062408532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:10 compute-0 systemd[1]: Started libpod-conmon-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope.
Dec 05 02:24:10 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.948046694 +0000 UTC m=+0.280578250 container init ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.977630325 +0000 UTC m=+0.310161831 container start ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 05 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.984813716 +0000 UTC m=+0.317345273 container attach ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]: {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     "0": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "devices": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "/dev/loop3"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             ],
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_name": "ceph_lv0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_size": "21470642176",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "name": "ceph_lv0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "tags": {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_name": "ceph",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.crush_device_class": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.encrypted": "0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_id": "0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.vdo": "0"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             },
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "vg_name": "ceph_vg0"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         }
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     ],
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     "1": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "devices": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "/dev/loop4"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             ],
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_name": "ceph_lv1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_size": "21470642176",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "name": "ceph_lv1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "tags": {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_name": "ceph",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.crush_device_class": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.encrypted": "0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_id": "1",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.vdo": "0"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             },
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "vg_name": "ceph_vg1"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         }
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     ],
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     "2": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "devices": [
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "/dev/loop5"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             ],
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_name": "ceph_lv2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_size": "21470642176",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "name": "ceph_lv2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "tags": {
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.cluster_name": "ceph",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.crush_device_class": "",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.encrypted": "0",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osd_id": "2",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:                 "ceph.vdo": "0"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             },
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "type": "block",
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:             "vg_name": "ceph_vg2"
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:         }
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]:     ]
Dec 05 02:24:11 compute-0 dazzling_hertz[467347]: }
Dec 05 02:24:11 compute-0 systemd[1]: libpod-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope: Deactivated successfully.
Dec 05 02:24:11 compute-0 podman[467331]: 2025-12-05 02:24:11.831190198 +0000 UTC m=+1.163721674 container died ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942-merged.mount: Deactivated successfully.
Dec 05 02:24:11 compute-0 podman[467331]: 2025-12-05 02:24:11.926509075 +0000 UTC m=+1.259040581 container remove ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:24:11 compute-0 systemd[1]: libpod-conmon-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope: Deactivated successfully.
Dec 05 02:24:11 compute-0 sudo[467231]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:12 compute-0 sudo[467367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:12 compute-0 sudo[467367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:12 compute-0 sudo[467367]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:12 compute-0 ceph-mon[192914]: pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:12 compute-0 sudo[467392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:24:12 compute-0 sudo[467392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:12 compute-0 sudo[467392]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:12 compute-0 sudo[467417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:12 compute-0 sudo[467417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:12 compute-0 sudo[467417]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:12 compute-0 sudo[467442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:24:12 compute-0 sudo[467442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:12 compute-0 nova_compute[349548]: 2025-12-05 02:24:12.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.093404458 +0000 UTC m=+0.082945440 container create 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.072324066 +0000 UTC m=+0.061865068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:13 compute-0 systemd[1]: Started libpod-conmon-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope.
Dec 05 02:24:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.240745457 +0000 UTC m=+0.230286529 container init 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.259337759 +0000 UTC m=+0.248878771 container start 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.26652003 +0000 UTC m=+0.256061042 container attach 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:24:13 compute-0 peaceful_buck[467520]: 167 167
Dec 05 02:24:13 compute-0 systemd[1]: libpod-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope: Deactivated successfully.
Dec 05 02:24:13 compute-0 podman[467525]: 2025-12-05 02:24:13.360593903 +0000 UTC m=+0.063570617 container died 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:24:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e3c47d56e4402339041cb994e60580ede9e0748d602fc1eb30ec3836f40387b-merged.mount: Deactivated successfully.
Dec 05 02:24:13 compute-0 podman[467525]: 2025-12-05 02:24:13.451554307 +0000 UTC m=+0.154530951 container remove 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:24:13 compute-0 systemd[1]: libpod-conmon-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope: Deactivated successfully.
Dec 05 02:24:13 compute-0 nova_compute[349548]: 2025-12-05 02:24:13.593 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.817053033 +0000 UTC m=+0.111202334 container create 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.772160302 +0000 UTC m=+0.066309653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:24:13 compute-0 systemd[1]: Started libpod-conmon-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope.
Dec 05 02:24:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.983009834 +0000 UTC m=+0.277159185 container init 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:24:14 compute-0 podman[467546]: 2025-12-05 02:24:14.015565698 +0000 UTC m=+0.309715009 container start 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:24:14 compute-0 podman[467546]: 2025-12-05 02:24:14.022065101 +0000 UTC m=+0.316214412 container attach 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:24:14 compute-0 ceph-mon[192914]: pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]: {
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_id": 0,
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "type": "bluestore"
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     },
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_id": 1,
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "type": "bluestore"
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     },
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_id": 2,
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:         "type": "bluestore"
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]:     }
Dec 05 02:24:15 compute-0 dreamy_hopper[467561]: }
Dec 05 02:24:15 compute-0 systemd[1]: libpod-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Deactivated successfully.
Dec 05 02:24:15 compute-0 systemd[1]: libpod-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Consumed 1.291s CPU time.
Dec 05 02:24:15 compute-0 podman[467594]: 2025-12-05 02:24:15.39430826 +0000 UTC m=+0.057897047 container died 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5-merged.mount: Deactivated successfully.
Dec 05 02:24:15 compute-0 podman[467594]: 2025-12-05 02:24:15.508085486 +0000 UTC m=+0.171674233 container remove 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:24:15 compute-0 systemd[1]: libpod-conmon-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Deactivated successfully.
Dec 05 02:24:15 compute-0 sudo[467442]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:24:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:24:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:15 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b08f8c6-8f9c-4d6b-9bee-aa9e27ca399f does not exist
Dec 05 02:24:15 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 621d11ca-9aba-4734-8b8d-14923b5301a6 does not exist
Dec 05 02:24:15 compute-0 sudo[467610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:24:15 compute-0 sudo[467610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:15 compute-0 sudo[467610]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:15 compute-0 podman[467634]: 2025-12-05 02:24:15.918116152 +0000 UTC m=+0.137846162 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:24:15 compute-0 sudo[467641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:24:15 compute-0 sudo[467641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:24:15 compute-0 sudo[467641]: pam_unix(sudo:session): session closed for user root
Dec 05 02:24:16 compute-0 podman[467681]: 2025-12-05 02:24:16.028872003 +0000 UTC m=+0.083738983 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:24:16 compute-0 ceph-mon[192914]: pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:24:16
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'backups']
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:24:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:17 compute-0 ceph-mon[192914]: pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:17 compute-0 nova_compute[349548]: 2025-12-05 02:24:17.687 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:24:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:18 compute-0 nova_compute[349548]: 2025-12-05 02:24:18.597 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:19 compute-0 ceph-mon[192914]: pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:21 compute-0 podman[467702]: 2025-12-05 02:24:21.733875422 +0000 UTC m=+0.129667263 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release-0.7.12=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Dec 05 02:24:21 compute-0 podman[467703]: 2025-12-05 02:24:21.745740445 +0000 UTC m=+0.133929173 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 02:24:21 compute-0 podman[467701]: 2025-12-05 02:24:21.748615586 +0000 UTC m=+0.147547975 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:24:21 compute-0 ceph-mon[192914]: pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:22 compute-0 nova_compute[349548]: 2025-12-05 02:24:22.691 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2749 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 337 writes, 730 keys, 337 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                            Interval WAL: 337 writes, 162 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:24:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:23 compute-0 nova_compute[349548]: 2025-12-05 02:24:23.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:23 compute-0 ceph-mon[192914]: pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:25 compute-0 ceph-mon[192914]: pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:24:27 compute-0 nova_compute[349548]: 2025-12-05 02:24:27.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:27 compute-0 ceph-mon[192914]: pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
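The monitor's _set_new_cache_sizes line splits one memory budget across the incremental and full osdmap caches and the rocksdb KV cache; the split can be sanity-checked directly from the logged values. A throwaway check using only the numbers from that line:

    # Numbers copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc = full_alloc = 348127232
    kv_alloc = 318767104
    for name, nbytes in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name}_alloc: {nbytes / 2**20:.0f} MiB "
              f"({nbytes / cache_size:.1%} of cache_size)")
    # The three allocations account for ~99.5% of the ~973 MiB budget.
    print(f"total: {(inc_alloc + full_alloc + kv_alloc) / 2**20:.0f} MiB "
          f"of {cache_size / 2**20:.0f} MiB")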
Dec 05 02:24:28 compute-0 nova_compute[349548]: 2025-12-05 02:24:28.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3184 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 439 writes, 1119 keys, 439 commit groups, 1.0 writes per commit group, ingest: 1.04 MB, 0.00 MB/s
                                            Interval WAL: 439 writes, 202 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
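The derived columns in this DB Stats dump are plain ratios. The interval row reproduces exactly from its own counters; the cumulative row prints rounded counts ("11K"), so its 3.69 writes per sync is not recoverable from the displayed values. A quick check:

    # Interval figures copied from the dump above.
    writes, commit_groups = 439, 439
    syncs, ingest_mb, interval_s = 202, 1.04, 600.0
    print(f"writes per commit group: {writes / commit_groups:.1f}")       # 1.0
    print(f"WAL writes per sync:     {writes / syncs:.2f}")               # 2.17, as logged
    print(f"ingest rate:             {ingest_mb / interval_s:.2f} MB/s")  # rounds to 0.00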
Dec 05 02:24:29 compute-0 podman[158197]: time="2025-12-05T02:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8671 "" "Go-http-client/1.1"
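The two GET lines are ordinary libpod REST calls arriving over podman's service socket, so the same container listing can be reproduced with any HTTP client that can speak to an AF_UNIX socket. A minimal sketch, assuming the default root socket path /run/podman/podman.sock (the socket actually served by podman[158197] is not shown in this log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])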
Dec 05 02:24:29 compute-0 ceph-mon[192914]: pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
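All four ERROR lines reduce to the exporter failing to find a daemon control socket, conventionally created as <rundir>/<daemon>.<pid>.ctl when each OVS/OVN daemon starts. Listing the candidate run directories shows which daemons are reachable; the directories below are taken from the exporter container's volume mounts logged at 02:24:33, not from the errors themselves:

    import glob

    # Control sockets are named <daemon>.<pid>.ctl; an empty listing here
    # matches the "no control socket files found" errors above.
    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        hits = sorted(glob.glob(f"{rundir}/*.ctl"))
        print(rundir, "->", hits or "no control sockets")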
Dec 05 02:24:31 compute-0 ceph-mon[192914]: pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:32 compute-0 nova_compute[349548]: 2025-12-05 02:24:32.698 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:33 compute-0 nova_compute[349548]: 2025-12-05 02:24:33.607 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:33 compute-0 podman[467759]: 2025-12-05 02:24:33.737340454 +0000 UTC m=+0.135190818 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:24:33 compute-0 podman[467761]: 2025-12-05 02:24:33.748501197 +0000 UTC m=+0.135246739 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm)
Dec 05 02:24:33 compute-0 podman[467758]: 2025-12-05 02:24:33.754957729 +0000 UTC m=+0.157773683 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:24:33 compute-0 podman[467760]: 2025-12-05 02:24:33.783333306 +0000 UTC m=+0.173638158 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
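Each health_status event above comes from podman's periodic healthcheck timer executing the container's configured 'healthcheck' test. The same state can be read back on demand; a small sketch, assuming the podman CLI is on PATH and using the container name from the first event:

    import json
    import subprocess

    # Query podman for the node_exporter health state; .State.Health carries
    # Status, FailingStreak and the recent probe Log entries.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "node_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], "failing streak:", health["FailingStreak"])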
Dec 05 02:24:33 compute-0 ceph-mon[192914]: pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
                                            Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:24:35 compute-0 ceph-mon[192914]: pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 02:24:37 compute-0 nova_compute[349548]: 2025-12-05 02:24:37.700 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:37 compute-0 ceph-mon[192914]: pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.327 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.328 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.347 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:24:38.349364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
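The ceilometer.polling.manager DEBUG lines repeat one fixed sequence per pollster: run discovery, check whether coordination is required, update the heartbeat, then collect samples. A condensed sketch of that cycle (illustrative names only, not ceilometer's actual classes):

    # Condensed per-pollster cycle, as traced by the manager.py DEBUG lines.
    def run_polling_cycle(pollsters, discover, heartbeat, publish):
        for pollster in pollsters:
            resources = discover("local_instances")   # "Executing discovery process ..."
            if not resources:
                # "Skip pollster ..., no new resources found this cycle"
                continue
            # Coordination group is None here, so every agent polls locally.
            heartbeat(pollster.name)                  # "Pollster heartbeat update: ..."
            for sample in pollster.get_samples(resources):
                publish(sample)                       # "<instance>/<meter> volume: ..."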
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:24:38.353756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.378 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.379 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.402 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:24:38.405215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:24:38.407735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.469 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.470 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.536 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 31304192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.536 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:24:38.539163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.540 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.541 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2882860455 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.541 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 200982064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.544 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.544 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.545 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:24:38.543549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:24:38.547143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.548 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.549 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.551 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.551 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.552 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 73129984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.552 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:24:38.550738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:24:38.554626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.595 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 nova_compute[349548]: 2025-12-05 02:24:38.609 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.637 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
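Both instances report power.state volume 1 above, which matches libvirt's "running" domain state. A hedged sketch of reading that state directly with the libvirt Python bindings (the qemu:///system URI and read-only access are assumptions about this host's setup):

```python
import libvirt  # assumes the libvirt-python bindings are installed

# dom.state() returns (state, reason); state 1 is VIR_DOMAIN_RUNNING,
# which is the "power.state volume: 1" value sampled above.
conn = libvirt.openReadOnly("qemu:///system")
for dom in conn.listAllDomains():
    state, reason = dom.state()
    print(dom.UUIDString(), state, state == libvirt.VIR_DOMAIN_RUNNING)
conn.close()
```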
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.640 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.641 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10991220303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.642 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:24:38.639511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
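disk.device.write.latency volumes such as 11353966152 are cumulative nanosecond counters, not instantaneous latencies. A sketch of reading the underlying counter via libvirt block statistics — the device name "vda" and the "wr_total_times" field name are assumptions based on libvirt's documented block-stats parameters:

```python
import libvirt

# blockStatsFlags() returns a dict of cumulative per-device counters;
# wr_total_times is total time spent on writes, in nanoseconds.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("292fd084-0808-4a80-adc1-6ab1f28e188a")
stats = dom.blockStatsFlags("vda")  # device name is an assumption
print(stats.get("wr_total_times"), "ns spent on writes so far")
conn.close()
```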
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.645 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.645 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:24:38.644571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.646 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.647 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.649 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:24:38.648963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.655 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.662 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
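The network.incoming.* and network.outgoing.* meters polled in this cycle all derive from libvirt's per-interface counters. A sketch of reading them directly (the tap device name is an assumption; ceilometer discovers the real vNIC names from the domain XML):

```python
import libvirt

# interfaceStats() returns the 8-tuple (rx_bytes, rx_packets, rx_errs,
# rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop) -- the counters
# behind the packets/bytes/drop/error samples logged above.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7")
(rx_bytes, rx_pkts, rx_errs, rx_drop,
 tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats("tap0")
print(rx_pkts, rx_drop, rx_errs)  # cf. volumes 25 / 0 / 0 above
conn.close()
```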
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:24:38.663982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.667 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.667 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.668 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.668 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
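The disk.device.allocation volume of 1073741824 is exactly 1 GiB, which points at libvirt's per-device block sizing. A sketch of the underlying call (device name again an assumption):

```python
import libvirt

# blockInfo() returns (capacity, allocation, physical) in bytes for one
# block device; the 1073741824 sample above is a 1 GiB figure from this
# family of counters.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("292fd084-0808-4a80-adc1-6ab1f28e188a")
capacity, allocation, physical = dom.blockInfo("vda")
print(capacity, allocation, physical)
conn.close()
```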
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:24:38.666705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:24:38.670657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:24:38.673837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.675 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:24:38.676550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
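The *.delta meters read 0 this cycle because a delta sample is the difference between the current cumulative counter and the value cached from the previous poll, and nothing changed in between. A simplified sketch of that bookkeeping (illustrative only; the cache structure here is an assumption, not ceilometer's implementation):

```python
# Derive a *.delta sample from a cumulative counter by remembering the
# previous reading per (instance, meter) pair.
previous = {}

def delta_sample(instance_id, meter, current_value):
    key = (instance_id, meter)
    prev = previous.get(key, current_value)  # first poll yields 0
    previous[key] = current_value
    return max(current_value - prev, 0)      # guard against counter resets

print(delta_sample("292fd084", "network.outgoing.bytes", 2250))  # 0
print(delta_sample("292fd084", "network.outgoing.bytes", 2760))  # 510
```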
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:24:38.678546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 42.26953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
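memory.usage is reported in MiB, and libvirt's memory statistics are in KiB, which is why the samples above are fractional (42.4765625 MiB is exactly 43496 KiB). A sketch of the underlying read; which fields memoryStats() exposes depends on the guest balloon driver, so treat the key names and the fallback as assumptions:

```python
import libvirt

# memoryStats() returns a dict of KiB counters; when balloon stats are
# available, used memory can be derived as available - unused, else fall
# back to the host-side resident set size.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("292fd084-0808-4a80-adc1-6ab1f28e188a")
stats = dom.memoryStats()
if "available" in stats and "unused" in stats:
    usage_mib = (stats["available"] - stats["unused"]) / 1024.0
else:
    usage_mib = stats.get("rss", 0) / 1024.0
print(usage_mib)
conn.close()
```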
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:24:38.680230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:24:38.681830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:24:38.683598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 339720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 337230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:24:38.685324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
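The cpu meter is cumulative guest CPU time in nanoseconds: the 339720000000 sample above is roughly 339.7 seconds of CPU time since the instance started. A sketch of the underlying libvirt read:

```python
import libvirt

# dom.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime) with
# cpuTime in nanoseconds, the counter behind the cpu samples above.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("292fd084-0808-4a80-adc1-6ab1f28e188a")
state, max_mem, mem, ncpu, cpu_time_ns = dom.info()
print(cpu_time_ns, "ns =", cpu_time_ns / 1e9, "s across", ncpu, "vCPUs")
conn.close()
```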
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:24:38.686981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:24:38.688694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:24:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:39 compute-0 ceph-mon[192914]: pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:41 compute-0 ceph-mon[192914]: pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:42 compute-0 nova_compute[349548]: 2025-12-05 02:24:42.704 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:43 compute-0 nova_compute[349548]: 2025-12-05 02:24:43.613 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:43 compute-0 ceph-mon[192914]: pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:24:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:24:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:24:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:24:45 compute-0 ceph-mon[192914]: pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:24:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
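The audit entries above show client.openstack dispatching the JSON mon commands behind "ceph df" and "ceph osd pool get-quota" (this is how Cinder-style storage drivers poll pool capacity). A sketch issuing the same commands with the librados Python bindings; the conffile path and keyring availability are assumptions about this deployment:

```python
import json
import rados  # assumes python3-rados is installed

# mon_command() takes the same JSON payloads logged by the monitor above
# and returns (ret, outbuf, outs).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
ret, out, errs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b"")
print(json.loads(out)["stats"])  # cluster-wide capacity totals
ret, out, errs = cluster.mon_command(
    json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                "format": "json"}), b"")
print(json.loads(out))
cluster.shutdown()
```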
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:24:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:46 compute-0 podman[467846]: 2025-12-05 02:24:46.714589239 +0000 UTC m=+0.116622936 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:24:46 compute-0 podman[467845]: 2025-12-05 02:24:46.722303386 +0000 UTC m=+0.129258181 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 05 02:24:47 compute-0 nova_compute[349548]: 2025-12-05 02:24:47.705 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:47 compute-0 ceph-mon[192914]: pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
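_heal_instance_info_cache is one of the ComputeManager jobs driven by oslo.service's periodic-task machinery, which is what every "Running periodic task ..." line in this log traces. A hedged sketch of the pattern (the class name and spacing below are illustrative, not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # The service loop polls run_periodic_tasks(); each due task fires
        # and produces DEBUG lines like the ones above.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # rebuild the instance list, then refresh one entry per run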
Dec 05 02:24:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.462 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.463 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.464 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.616 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:49 compute-0 ceph-mon[192914]: pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.498 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.520 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 05 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.521 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
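The Acquiring/Acquired/Releasing trio around "refresh_cache-<uuid>" is oslo.concurrency serializing cache refreshes per instance, so the periodic heal and any RPC-driven refresh cannot race. The equivalent pattern, with the UUID taken from the log:

    from oslo_concurrency import lockutils

    instance_uuid = "292fd084-0808-4a80-adc1-6ab1f28e188a"

    # In-process lock; with debug logging enabled this emits the same
    # Acquiring/Acquired/Releasing DEBUG lines seen above.
    with lockutils.lock("refresh_cache-" + instance_uuid):
        pass  # fetch network_info from neutron and store it in the cache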
Dec 05 02:24:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:52 compute-0 ceph-mon[192914]: pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
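_reclaim_queued_deletes short-circuits because reclaim_instance_interval is at its default of 0: soft-deleted instances are only reclaimed when the operator sets a positive interval. A sketch of the gate (the option registration below is illustrative; nova registers this option itself):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            return  # matches the "skipping..." DEBUG line above
        # otherwise: find SOFT_DELETED instances older than the interval
        # and purge them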
Dec 05 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.709 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:52 compute-0 podman[467889]: 2025-12-05 02:24:52.73405842 +0000 UTC m=+0.123729496 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec 05 02:24:52 compute-0 podman[467890]: 2025-12-05 02:24:52.736407426 +0000 UTC m=+0.120865796 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 05 02:24:52 compute-0 podman[467888]: 2025-12-05 02:24:52.744961286 +0000 UTC m=+0.141434463 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.115 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.119 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:24:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:24:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111209664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.614 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
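For the resource audit, nova shells out to ceph df through oslo.concurrency's processutils, which is what the Running cmd / CMD ... returned pair above records. A runnable equivalent; the total_bytes/total_avail_bytes field names assume the current ceph df JSON schema:

    import json

    from oslo_concurrency import processutils

    # Same command line as logged; raises ProcessExecutionError on a
    # non-zero exit, mirroring the "returned: 0" check above.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])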
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.717 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.718 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.727 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.728 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 05 02:24:54 compute-0 ceph-mon[192914]: pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2111209664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.323 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.325 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3438MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
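The pci_devices field in the resource view above is plain JSON, so it can be summarized directly; in this sketch the list is truncated to one entry to keep it short:

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
       "product_id": "7113", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7113", "dev_type": "type-PCI"}
    ]""")  # ...the remaining ten entries from the log are elided here

    # Count devices per vendor (8086 = Intel, 1af4 = virtio).
    print(Counter(dev["vendor_id"] for dev in pci_devices))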
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.441 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.441 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.442 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.442 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.517 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:24:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:24:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511445024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.026 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:24:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2511445024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.062 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.066 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
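The inventory dict nova reports to placement fixes the schedulable capacity: for each resource class, usable capacity is (total - reserved) * allocation_ratio. With the values logged above that works out to 32 VCPUs, 7168 MB of RAM, and 52.2 GB of disk; a worked check:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's capacity formula: (total - reserved) * allocation_ratio.
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])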
Dec 05 02:24:56 compute-0 ceph-mon[192914]: pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:56 compute-0 nova_compute[349548]: 2025-12-05 02:24:56.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:56 compute-0 nova_compute[349548]: 2025-12-05 02:24:56.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.227 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.227 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:24:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.712 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:58 compute-0 ceph-mon[192914]: pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:24:58 compute-0 nova_compute[349548]: 2025-12-05 02:24:58.626 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:24:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:24:59 compute-0 podman[158197]: time="2025-12-05T02:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
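Those two GET lines are the prometheus-podman-exporter scraping the libpod REST API over the podman socket (the CONTAINER_HOST value in the podman_exporter config logged earlier). A minimal raw-socket sketch; HTTP/1.0 is used here on the assumption that it keeps the response unchunked, and real clients would use podman-py or curl --unix-socket instead:

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n\r\n")

    response = b""
    while chunk := sock.recv(65536):
        response += chunk

    # Split the HTTP headers from the JSON body.
    body = response.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")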
Dec 05 02:25:00 compute-0 ceph-mon[192914]: pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:01 compute-0 nova_compute[349548]: 2025-12-05 02:25:01.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
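These exporter errors all mean the same thing: no ovs-appctl control sockets were found. ovs-vswitchd, ovsdb-server and ovn-northd each create a <daemon>.<pid>.ctl socket under their run directory, and ovn-northd in particular is not expected on a compute node, which runs ovn-controller instead. A quick host-side check, with the paths assumed from the exporter's volume mounts logged at 02:25:04 below:

    import glob

    # OVS daemons put sockets under the openvswitch run dir, OVN daemons
    # under the OVN run dir; both paths are assumptions from the mounts.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, glob.glob(pattern) or "no control socket files found")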
Dec 05 02:25:02 compute-0 ceph-mon[192914]: pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:02 compute-0 nova_compute[349548]: 2025-12-05 02:25:02.714 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:03 compute-0 nova_compute[349548]: 2025-12-05 02:25:03.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:04 compute-0 ceph-mon[192914]: pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:04 compute-0 podman[467988]: 2025-12-05 02:25:04.704869567 +0000 UTC m=+0.106138733 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:25:04 compute-0 podman[467989]: 2025-12-05 02:25:04.717746948 +0000 UTC m=+0.115677300 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:25:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:04 compute-0 podman[467991]: 2025-12-05 02:25:04.731380791 +0000 UTC m=+0.112382788 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Dec 05 02:25:04 compute-0 podman[467990]: 2025-12-05 02:25:04.791643983 +0000 UTC m=+0.182945949 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 05 02:25:06 compute-0 ceph-mon[192914]: pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:07 compute-0 nova_compute[349548]: 2025-12-05 02:25:07.717 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:08 compute-0 ceph-mon[192914]: pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:08 compute-0 nova_compute[349548]: 2025-12-05 02:25:08.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:10 compute-0 ceph-mon[192914]: pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:12 compute-0 ceph-mon[192914]: pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:12 compute-0 nova_compute[349548]: 2025-12-05 02:25:12.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:13 compute-0 nova_compute[349548]: 2025-12-05 02:25:13.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:14 compute-0 ceph-mon[192914]: pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:16 compute-0 sudo[468073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:16 compute-0 sudo[468073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:16 compute-0 sudo[468073]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:16 compute-0 ceph-mon[192914]: pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:16 compute-0 sudo[468098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:25:16 compute-0 sudo[468098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:16 compute-0 sudo[468098]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:16 compute-0 sudo[468123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:16 compute-0 sudo[468123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:16 compute-0 sudo[468123]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:25:16
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'vms']
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
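The balancer block above is a no-op optimization pass: mode upmap, a 5% misplaced ceiling, and 0 of a possible 10 changes prepared across the listed pools, i.e. placement groups are already even. The same state can be read back from the CLI; a sketch via subprocess, assuming an admin-capable client and the balancer module's usual status fields:

    import json
    import subprocess

    status = json.loads(
        subprocess.check_output(["ceph", "balancer", "status", "--format", "json"])
    )
    # e.g. mode "upmap", whether the balancer is active, last run duration.
    print(status["mode"], status["active"], status["last_optimize_duration"])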
Dec 05 02:25:16 compute-0 sudo[468148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:25:16 compute-0 sudo[468148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:17 compute-0 sudo[468148]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 73ede9fb-70f1-4556-854b-1fc07962fd79 does not exist
Dec 05 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bc87b376-2ec2-4497-908a-6957447c51a3 does not exist
Dec 05 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 75fcb778-ad7c-45ce-a39a-639e55449ad7 does not exist
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:25:17 compute-0 sudo[468204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:17 compute-0 sudo[468204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:17 compute-0 sudo[468204]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:17 compute-0 sudo[468241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:25:17 compute-0 sudo[468241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:17 compute-0 sudo[468241]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:17 compute-0 podman[468229]: 2025-12-05 02:25:17.721295027 +0000 UTC m=+0.137586025 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:25:17 compute-0 nova_compute[349548]: 2025-12-05 02:25:17.722 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:17 compute-0 podman[468228]: 2025-12-05 02:25:17.729784526 +0000 UTC m=+0.151052914 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 05 02:25:17 compute-0 sudo[468296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:17 compute-0 sudo[468296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:17 compute-0 sudo[468296]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:17 compute-0 sudo[468321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:25:17 compute-0 sudo[468321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:18 compute-0 ceph-mon[192914]: pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:25:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.545180737 +0000 UTC m=+0.082129258 container create 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:25:18 compute-0 systemd[1]: Started libpod-conmon-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope.
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.516654616 +0000 UTC m=+0.053603117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:18 compute-0 nova_compute[349548]: 2025-12-05 02:25:18.642 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.707419883 +0000 UTC m=+0.244368414 container init 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.725555082 +0000 UTC m=+0.262503603 container start 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.732371293 +0000 UTC m=+0.269319814 container attach 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:25:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:18 compute-0 lucid_euler[468398]: 167 167
Dec 05 02:25:18 compute-0 systemd[1]: libpod-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope: Deactivated successfully.
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.739566666 +0000 UTC m=+0.276515187 container died 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da4b8089aeb3086dfe5b2b2ab193da303889768c97df5d10a5180ef377498e6-merged.mount: Deactivated successfully.
Dec 05 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.826017344 +0000 UTC m=+0.362965855 container remove 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:25:18 compute-0 systemd[1]: libpod-conmon-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope: Deactivated successfully.
Dec 05 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.12640923 +0000 UTC m=+0.099284439 container create b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.09506603 +0000 UTC m=+0.067941299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:19 compute-0 systemd[1]: Started libpod-conmon-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope.
Dec 05 02:25:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
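The "supports timestamps until 2038 (0x7fffffff)" kernel lines above record the 32-bit time_t ceiling of this XFS on-disk format; 0x7fffffff seconds after the Unix epoch lands on 2038-01-19T03:14:07Z, which a one-liner confirms:

from datetime import datetime, timezone

# 0x7fffffff == 2147483647, the largest signed 32-bit second count
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00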
Dec 05 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.31618375 +0000 UTC m=+0.289059019 container init b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 05 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.348783026 +0000 UTC m=+0.321658225 container start b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 05 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.355508365 +0000 UTC m=+0.328383634 container attach b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:25:20 compute-0 ceph-mon[192914]: pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:20 compute-0 charming_davinci[468436]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:25:20 compute-0 charming_davinci[468436]: --> relative data size: 1.0
Dec 05 02:25:20 compute-0 charming_davinci[468436]: --> All data devices are unavailable
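ceph-volume's "All data devices are unavailable" here means the three LVs handed to `lvm batch` were rejected as targets, consistent with the `lvm list` output later in this capture, where each LV already carries a ceph.osd_id tag. A hypothetical re-creation of that availability test based on lv_tags (an illustration, not cephadm's own code):

# Hypothetical check: an LV whose lv_tags already include ceph.osd_id
# is already an OSD and therefore "unavailable" to a new
# `ceph-volume lvm batch` run.
def is_available(lv_tags: str) -> bool:
    tags = dict(kv.split("=", 1) for kv in lv_tags.split(",") if "=" in kv)
    return "ceph.osd_id" not in tags

sample = "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.osd_id=0,ceph.type=block"
print(is_available(sample))  # False: this LV is already OSD 0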
Dec 05 02:25:20 compute-0 systemd[1]: libpod-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Deactivated successfully.
Dec 05 02:25:20 compute-0 systemd[1]: libpod-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Consumed 1.259s CPU time.
Dec 05 02:25:20 compute-0 podman[468422]: 2025-12-05 02:25:20.685806468 +0000 UTC m=+1.658681677 container died b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:25:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85-merged.mount: Deactivated successfully.
Dec 05 02:25:20 compute-0 podman[468422]: 2025-12-05 02:25:20.80126865 +0000 UTC m=+1.774143859 container remove b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:25:20 compute-0 systemd[1]: libpod-conmon-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Deactivated successfully.
Dec 05 02:25:20 compute-0 sudo[468321]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:21 compute-0 sudo[468480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:21 compute-0 sudo[468480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:21 compute-0 sudo[468480]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:21 compute-0 sudo[468505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:25:21 compute-0 sudo[468505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:21 compute-0 sudo[468505]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:21 compute-0 sudo[468530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:21 compute-0 sudo[468530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:21 compute-0 sudo[468530]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:21 compute-0 sudo[468555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:25:21 compute-0 sudo[468555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.010349349 +0000 UTC m=+0.068722642 container create dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:21.978063582 +0000 UTC m=+0.036436875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:22 compute-0 systemd[1]: Started libpod-conmon-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope.
Dec 05 02:25:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.156335039 +0000 UTC m=+0.214708382 container init dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.174558621 +0000 UTC m=+0.232931874 container start dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.180167868 +0000 UTC m=+0.238541201 container attach dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 02:25:22 compute-0 upbeat_shamir[468634]: 167 167
Dec 05 02:25:22 compute-0 systemd[1]: libpod-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope: Deactivated successfully.
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.187411812 +0000 UTC m=+0.245785095 container died dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:25:22 compute-0 ceph-mon[192914]: pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdda18f28c946cda09eed0a9dcd803fa020cf02e83f1dd83dab0807ac0843d13-merged.mount: Deactivated successfully.
Dec 05 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.27350398 +0000 UTC m=+0.331877263 container remove dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:25:22 compute-0 systemd[1]: libpod-conmon-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope: Deactivated successfully.
Dec 05 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.529097737 +0000 UTC m=+0.106113451 container create 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.478082164 +0000 UTC m=+0.055097998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:22 compute-0 systemd[1]: Started libpod-conmon-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope.
Dec 05 02:25:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.717090667 +0000 UTC m=+0.294106421 container init 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:25:22 compute-0 nova_compute[349548]: 2025-12-05 02:25:22.730 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.734841766 +0000 UTC m=+0.311857490 container start 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.740466164 +0000 UTC m=+0.317481908 container attach 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:25:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:23 compute-0 ceph-mon[192914]: pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.430341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523430422, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1061, "num_deletes": 256, "total_data_size": 1555138, "memory_usage": 1584464, "flush_reason": "Manual Compaction"}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523445529, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1540836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46276, "largest_seqno": 47336, "table_properties": {"data_size": 1535574, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10970, "raw_average_key_size": 19, "raw_value_size": 1525110, "raw_average_value_size": 2689, "num_data_blocks": 122, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901418, "oldest_key_time": 1764901418, "file_creation_time": 1764901523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 15270 microseconds, and 7644 cpu microseconds.
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.445609) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1540836 bytes OK
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.445632) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448219) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448240) EVENT_LOG_v1 {"time_micros": 1764901523448233, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1550143, prev total WAL file size 1550143, number of live WAL files 2.
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.449513) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1504KB)], [110(7577KB)]
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523449605, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9299687, "oldest_snapshot_seqno": -1}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6121 keys, 9192886 bytes, temperature: kUnknown
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523527103, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9192886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9152744, "index_size": 23712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 159890, "raw_average_key_size": 26, "raw_value_size": 9042807, "raw_average_value_size": 1477, "num_data_blocks": 946, "num_entries": 6121, "num_filter_entries": 6121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.527474) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9192886 bytes
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.531454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 6645, records dropped: 524 output_compression: NoCompression
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.531492) EVENT_LOG_v1 {"time_micros": 1764901523531475, "job": 66, "event": "compaction_finished", "compaction_time_micros": 77608, "compaction_time_cpu_micros": 43916, "output_level": 6, "num_output_files": 1, "total_output_size": 9192886, "num_input_records": 6645, "num_output_records": 6121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
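The compaction summary two lines up can be sanity-checked from the event data: job 66 read one 1504 KB L0 file plus one 7577 KB L6 file and wrote 9192886 bytes back to L6, so write traffic is about 6x the new L0 input and combined read+write traffic about 12x, matching the logged write-amplify(6.0) and read-write-amplify(12.0):

# Figures taken from the JOB 66 lines above.
l0_in = 1540836          # bytes flushed to L0 (table #112)
l6_in = 9299687 - l0_in  # total compaction input minus the L0 file
out = 9192886            # bytes written back to L6 (table #113)

write_amp = out / l0_in                 # ~5.97 -> logged as 6.0
rw_amp = (l0_in + l6_in + out) / l0_in  # ~12.0 -> logged as 12.0
print(round(write_amp, 2), round(rw_amp, 2))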
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523532315, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523536330, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.449188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]: {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     "0": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "devices": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "/dev/loop3"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             ],
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_name": "ceph_lv0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_size": "21470642176",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "name": "ceph_lv0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "tags": {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_name": "ceph",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.crush_device_class": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.encrypted": "0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_id": "0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.vdo": "0"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             },
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "vg_name": "ceph_vg0"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         }
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     ],
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     "1": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "devices": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "/dev/loop4"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             ],
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_name": "ceph_lv1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_size": "21470642176",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "name": "ceph_lv1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "tags": {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_name": "ceph",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.crush_device_class": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.encrypted": "0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_id": "1",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.vdo": "0"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             },
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "vg_name": "ceph_vg1"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         }
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     ],
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     "2": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "devices": [
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "/dev/loop5"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             ],
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_name": "ceph_lv2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_size": "21470642176",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "name": "ceph_lv2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "tags": {
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.cluster_name": "ceph",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.crush_device_class": "",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.encrypted": "0",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osd_id": "2",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:                 "ceph.vdo": "0"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             },
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "type": "block",
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:             "vg_name": "ceph_vg2"
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:         }
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]:     ]
Dec 05 02:25:23 compute-0 crazy_satoshi[468674]: }
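The JSON block the crazy_satoshi container just printed is the output of the `ceph-volume lvm list --format json` run requested at 02:25:21, keyed by OSD id. A short sketch, assuming the block has been captured into a string, reduces it to an OSD-to-device table:

import json

# `lvm_list_json` is assumed to hold the JSON object printed above by
# `ceph-volume ... lvm list --format json`.
def osd_map(lvm_list_json: str) -> dict:
    data = json.loads(lvm_list_json)
    return {
        osd_id: (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        for osd_id, lvs in data.items()
        for lv in lvs
        if lv["type"] == "block"
    }

# Expected result for this host:
# {'0': ('/dev/ceph_vg0/ceph_lv0', '8c4de221-...'),
#  '1': ('/dev/ceph_vg1/ceph_lv1', '944e6457-...'),
#  '2': ('/dev/ceph_vg2/ceph_lv2', 'adfceb0a-...')}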
Dec 05 02:25:23 compute-0 systemd[1]: libpod-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope: Deactivated successfully.
Dec 05 02:25:23 compute-0 podman[468658]: 2025-12-05 02:25:23.624802291 +0000 UTC m=+1.201818045 container died 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:25:23 compute-0 nova_compute[349548]: 2025-12-05 02:25:23.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947-merged.mount: Deactivated successfully.
Dec 05 02:25:23 compute-0 podman[468658]: 2025-12-05 02:25:23.708478311 +0000 UTC m=+1.285494035 container remove 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:25:23 compute-0 systemd[1]: libpod-conmon-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope: Deactivated successfully.
Dec 05 02:25:23 compute-0 podman[468683]: 2025-12-05 02:25:23.730042327 +0000 UTC m=+0.139615152 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 05 02:25:23 compute-0 sudo[468555]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:23 compute-0 podman[468684]: 2025-12-05 02:25:23.741045046 +0000 UTC m=+0.139741146 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 05 02:25:23 compute-0 podman[468685]: 2025-12-05 02:25:23.748711221 +0000 UTC m=+0.143349207 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 02:25:23 compute-0 sudo[468751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:23 compute-0 sudo[468751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:23 compute-0 sudo[468751]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:23 compute-0 sudo[468776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:25:23 compute-0 sudo[468776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:23 compute-0 sudo[468776]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:24 compute-0 sudo[468801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:24 compute-0 sudo[468801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:24 compute-0 sudo[468801]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:24 compute-0 sudo[468826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:25:24 compute-0 sudo[468826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.834596219 +0000 UTC m=+0.088911708 container create 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.807534319 +0000 UTC m=+0.061849878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:24 compute-0 systemd[1]: Started libpod-conmon-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope.
Dec 05 02:25:24 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.984225082 +0000 UTC m=+0.238540611 container init 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.000629612 +0000 UTC m=+0.254945101 container start 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.00873913 +0000 UTC m=+0.263054679 container attach 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:25:25 compute-0 infallible_wright[468904]: 167 167
Dec 05 02:25:25 compute-0 systemd[1]: libpod-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope: Deactivated successfully.
Dec 05 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.013976997 +0000 UTC m=+0.268292486 container died 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7e13bd03303cc09e79fbb408ae93e8897b911166c56c90585fea3ad5bb14e6-merged.mount: Deactivated successfully.
Dec 05 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.093500201 +0000 UTC m=+0.347815710 container remove 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:25:25 compute-0 systemd[1]: libpod-conmon-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope: Deactivated successfully.
Dec 05 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.398866297 +0000 UTC m=+0.112291235 container create 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.355960532 +0000 UTC m=+0.069385520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:25:25 compute-0 systemd[1]: Started libpod-conmon-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope.
Dec 05 02:25:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.57947558 +0000 UTC m=+0.292900518 container init 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.60475884 +0000 UTC m=+0.318183768 container start 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec 05 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.611587352 +0000 UTC m=+0.325012290 container attach 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 02:25:25 compute-0 ceph-mon[192914]: pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.627 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.632 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.635 349552 INFO nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Terminating instance
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.637 349552 DEBUG nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:25:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:26 compute-0 kernel: tap706f9405-40 (unregistering): left promiscuous mode
Dec 05 02:25:26 compute-0 NetworkManager[49092]: <info>  [1764901526.7827] device (tap706f9405-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:25:26 compute-0 jolly_pike[468943]: {
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_id": 0,
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "type": "bluestore"
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     },
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_id": 1,
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "type": "bluestore"
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     },
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_id": 2,
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:25:26 compute-0 jolly_pike[468943]:         "type": "bluestore"
Dec 05 02:25:26 compute-0 jolly_pike[468943]:     }
Dec 05 02:25:26 compute-0 jolly_pike[468943]: }
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00183|binding|INFO|Releasing lport 706f9405-4061-481e-a252-9b14f4534a4e from this chassis (sb_readonly=0)
Dec 05 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00184|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e down in Southbound
Dec 05 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00185|binding|INFO|Removing iface tap706f9405-40 ovn-installed in OVS
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.810 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:10:bc 10.100.0.151'], port_security=['fa:16:3e:cf:10:bc 10.100.0.151'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.151/16', 'neutron:device_id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=706f9405-4061-481e-a252-9b14f4534a4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.814 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 706f9405-4061-481e-a252-9b14f4534a4e in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 unbound from our chassis
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.818 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec 05 02:25:26 compute-0 systemd[1]: libpod-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Deactivated successfully.
Dec 05 02:25:26 compute-0 podman[468927]: 2025-12-05 02:25:26.825622268 +0000 UTC m=+1.539047166 container died 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 systemd[1]: libpod-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Consumed 1.208s CPU time.
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.840 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[775d4d5d-f8e3-4e8e-8d5f-ef40b0d67580]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 05 02:25:26 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 7min 37.742s CPU time.
Dec 05 02:25:26 compute-0 systemd-machined[138700]: Machine qemu-12-instance-0000000b terminated.
Dec 05 02:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708-merged.mount: Deactivated successfully.
Dec 05 02:25:26 compute-0 NetworkManager[49092]: <info>  [1764901526.8703] manager: (tap706f9405-40): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.895 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba5bea6-effc-4be5-b191-7cc93efcf199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.903 349552 INFO nova.virt.libvirt.driver [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance destroyed successfully.
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.904 349552 DEBUG nova.objects.instance [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'resources' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.902 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3eddc6a7-37c8-445c-901a-28a70a0db463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 podman[468927]: 2025-12-05 02:25:26.915354628 +0000 UTC m=+1.628779526 container remove 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.922 349552 DEBUG nova.virt.libvirt.vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:11:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:11:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.922 349552 DEBUG nova.network.os_vif_util [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.923 349552 DEBUG nova.network.os_vif_util [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.923 349552 DEBUG os_vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.925 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.926 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap706f9405-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:26 compute-0 systemd[1]: libpod-conmon-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Deactivated successfully.
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.927 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.933 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.936 349552 INFO os_vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40')
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.937 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5ec4f1-1775-439c-a909-4a3d2d608653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.959 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f8012-d5ce-4ee1-93bf-c9857ffa1bb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 17791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469015, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 sudo[468826]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:25:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.976 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[291654eb-68e0-4a01-bd1e-abb9feba5878]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677143, 'tstamp': 677143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469027, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677147, 'tstamp': 677147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469027, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.979 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 07b70f09-6882-44e3-8765-2f8f1114554d does not exist
Dec 05 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.982 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bdcb81db-eaff-41bd-8873-ac2cfb5aeeb0 does not exist
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.983 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.983 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.984 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.984 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 05 02:25:27 compute-0 sudo[469031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:25:27 compute-0 sudo[469031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:27 compute-0 sudo[469031]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:27 compute-0 sudo[469056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:25:27 compute-0 sudo[469056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:25:27 compute-0 sudo[469056]: pam_unix(sudo:session): session closed for user root
Dec 05 02:25:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:27.331 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:25:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:27.332 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.337 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.377 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.749 349552 INFO nova.virt.libvirt.driver [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deleting instance files /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a_del
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.750 349552 INFO nova.virt.libvirt.driver [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deletion of /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a_del complete
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.850 349552 INFO nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 1.21 seconds to destroy the instance on the hypervisor.
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.850 349552 DEBUG oslo.service.loopingcall [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.851 349552 DEBUG nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.851 349552 DEBUG nova.network.neutron [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:25:27 compute-0 ceph-mon[192914]: pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:25:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:25:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 05 02:25:29 compute-0 podman[158197]: time="2025-12-05T02:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec 05 02:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8672 "" "Go-http-client/1.1"
Dec 05 02:25:29 compute-0 ceph-mon[192914]: pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 05 02:25:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 217 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.702 349552 DEBUG nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.703 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.704 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.705 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.705 349552 DEBUG nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.706 349552 WARNING nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received unexpected event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with vm_state active and task_state deleting.
Dec 05 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:32 compute-0 ceph-mon[192914]: pgmap v2295: 321 pgs: 321 active+clean; 217 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.151 349552 DEBUG nova.network.neutron [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.170 349552 INFO nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 4.32 seconds to deallocate network for instance.
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.221 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.222 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.321 349552 DEBUG oslo_concurrency.processutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.735 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:25:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424006511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.837 349552 DEBUG oslo_concurrency.processutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.850 349552 DEBUG nova.compute.provider_tree [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.868 349552 DEBUG nova.scheduler.client.report [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
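The inventory in the line above is what bounds how much the scheduler may place on this host. The usual Placement capacity rule is (total - reserved) * allocation_ratio — an assumption here, not something this log states. Applied to the logged values:

    # Field values copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2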
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.895 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.927 349552 INFO nova.scheduler.client.report [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Deleted allocations for instance 292fd084-0808-4a80-adc1-6ab1f28e188a
Dec 05 02:25:33 compute-0 nova_compute[349548]: 2025-12-05 02:25:33.010 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/424006511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:33 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:33.335 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:33 compute-0 nova_compute[349548]: 2025-12-05 02:25:33.806 349552 DEBUG nova.compute.manager [req-2a116095-54a5-4ba7-ae51-3f19bec548dc req-cf3ca216-b0e2-4087-b4a8-8d125204fc3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-deleted-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:34 compute-0 ceph-mon[192914]: pgmap v2296: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:35 compute-0 podman[469104]: 2025-12-05 02:25:35.747160607 +0000 UTC m=+0.151496696 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:25:35 compute-0 podman[469105]: 2025-12-05 02:25:35.751110427 +0000 UTC m=+0.152609947 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:25:35 compute-0 podman[469107]: 2025-12-05 02:25:35.751782226 +0000 UTC m=+0.134168599 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9)
Dec 05 02:25:35 compute-0 podman[469106]: 2025-12-05 02:25:35.764158644 +0000 UTC m=+0.158058360 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 02:25:36 compute-0 ceph-mon[192914]: pgmap v2297: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:36 compute-0 nova_compute[349548]: 2025-12-05 02:25:36.933 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:37 compute-0 nova_compute[349548]: 2025-12-05 02:25:37.738 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:38 compute-0 ceph-mon[192914]: pgmap v2298: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.034 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.035 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.036 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.037 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.038 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.041 349552 INFO nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Terminating instance
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.044 349552 DEBUG nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 05 02:25:40 compute-0 ceph-mon[192914]: pgmap v2299: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:40 compute-0 kernel: tapafc3cf6c-cb (unregistering): left promiscuous mode
Dec 05 02:25:40 compute-0 NetworkManager[49092]: <info>  [1764901540.1705] device (tapafc3cf6c-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 05 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00186|binding|INFO|Releasing lport afc3cf6c-cbe3-4163-920e-7122f474d371 from this chassis (sb_readonly=0)
Dec 05 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00187|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 down in Southbound
Dec 05 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00188|binding|INFO|Removing iface tapafc3cf6c-cb ovn-installed in OVS
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.190 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.193 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.200 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:80:52 10.100.2.8'], port_security=['fa:16:3e:69:80:52 10.100.2.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.8/16', 'neutron:device_id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=afc3cf6c-cbe3-4163-920e-7122f474d371) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.203 287122 INFO neutron.agent.ovn.metadata.agent [-] Port afc3cf6c-cbe3-4163-920e-7122f474d371 in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 unbound from our chassis
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.205 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7842201-32d0-4f34-ad6b-51f98e5f8322, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.206 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[29dd5691-c458-4afb-99a5-8333229c19db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.207 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 namespace which is not needed anymore
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 05 02:25:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 1.246s CPU time.
Dec 05 02:25:40 compute-0 systemd-machined[138700]: Machine qemu-16-instance-0000000f terminated.
Dec 05 02:25:40 compute-0 kernel: tapafc3cf6c-cb: entered promiscuous mode
Dec 05 02:25:40 compute-0 kernel: tapafc3cf6c-cb (unregistering): left promiscuous mode
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.303 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.331 349552 INFO nova.virt.libvirt.driver [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance destroyed successfully.
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.332 349552 DEBUG nova.objects.instance [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'resources' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.353 349552 DEBUG nova.virt.libvirt.vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
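The user_data field in the instance dump above is base64-encoded; decoding it (the value below is copied verbatim from the log) recovers the guest's boot script:

    import base64

    user_data = ('IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYv'
                 'dXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==')
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!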
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.354 349552 DEBUG nova.network.os_vif_util [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.356 349552 DEBUG nova.network.os_vif_util [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.357 349552 DEBUG os_vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.363 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafc3cf6c-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.371 349552 INFO os_vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb')
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : haproxy version is 2.8.14-c23fe91
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : path to executable is /usr/sbin/haproxy
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : Exiting Master process...
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : Exiting Master process...
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [ALERT]    (448153) : Current worker (448155) exited with code 143 (Terminated)
Dec 05 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : All workers exited. Exiting... (0)
Dec 05 02:25:40 compute-0 systemd[1]: libpod-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope: Deactivated successfully.
Dec 05 02:25:40 compute-0 podman[469219]: 2025-12-05 02:25:40.485835144 +0000 UTC m=+0.091129721 container died 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a-userdata-shm.mount: Deactivated successfully.
Dec 05 02:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b0054f478c906d197442626f618ca33515a9d994cb127e662a2ffd07bf0dae3-merged.mount: Deactivated successfully.
Dec 05 02:25:40 compute-0 podman[469219]: 2025-12-05 02:25:40.561105618 +0000 UTC m=+0.166400175 container cleanup 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 02:25:40 compute-0 systemd[1]: libpod-conmon-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope: Deactivated successfully.
Dec 05 02:25:40 compute-0 podman[469267]: 2025-12-05 02:25:40.684485863 +0000 UTC m=+0.080884473 container remove 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.692 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed36477-1ba0-4adc-9167-7cafac69eb1b]: (4, ('Fri Dec  5 02:25:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 (41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a)\n41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a\nFri Dec  5 02:25:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 (41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a)\n41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.694 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a3efce11-42ad-47e4-aa34-0a7632c3c308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.695 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.698 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 kernel: tapd7842201-30: left promiscuous mode
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.725 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.729 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[811fb664-4c35-4a56-abf7-3f9911be34e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.749 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c53621-3bff-47fe-83d1-19061ec3aa01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.751 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[48e9abeb-cf44-4efd-9b47-56b1dc1d2c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.767 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7542e4c1-f3c8-4dca-b5ad-dde3c9cee611]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677120, 'reachable_time': 20407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469281, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.770 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 05 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.770 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbf8028-8372-412c-b863-c9aa40cf280c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 05 02:25:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7842201\x2d32d0\x2d4f34\x2dad6b\x2d51f98e5f8322.mount: Deactivated successfully.
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.978 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.979 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.983 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.257 349552 INFO nova.virt.libvirt.driver [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deleting instance files /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_del
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.258 349552 INFO nova.virt.libvirt.driver [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deletion of /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_del complete
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.344 349552 INFO nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 1.30 seconds to destroy the instance on the hypervisor.
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.345 349552 DEBUG oslo.service.loopingcall [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.345 349552 DEBUG nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.346 349552 DEBUG nova.network.neutron [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.889 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764901526.887611, 292fd084-0808-4a80-adc1-6ab1f28e188a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.889 349552 INFO nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Stopped (Lifecycle Event)
Dec 05 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.912 349552 DEBUG nova.compute.manager [None req-60e48250-4e16-4ef0-b54d-3b2d708a498d - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:25:42 compute-0 ceph-mon[192914]: pgmap v2300: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.742 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.080 349552 DEBUG nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
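The three lockutils lines above trace one pass through oslo.concurrency's synchronized decorator (acquire, run critical section, release). A minimal stand-alone sketch of that pattern, reusing the per-instance event lock name from the log purely as an illustrative string:

    from oslo_concurrency import lockutils

    # Same acquire/run/release sequence the DEBUG lines above record.
    # The lock name is any string; nova uses "<instance-uuid>-events".
    @lockutils.synchronized('e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events')
    def _pop_event():
        # Critical section: only one thread may pop pending events
        # for this instance at a time.
        return None

    _pop_event()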
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.082 349552 DEBUG nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.082 349552 WARNING nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received unexpected event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with vm_state active and task_state deleting.
Dec 05 02:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.542 349552 DEBUG nova.network.neutron [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.563 349552 INFO nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 2.22 seconds to deallocate network for instance.
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.625 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.626 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.688 349552 DEBUG oslo_concurrency.processutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:25:44 compute-0 ceph-mon[192914]: pgmap v2301: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Dec 05 02:25:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:25:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129251840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.185 349552 DEBUG oslo_concurrency.processutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
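The resource tracker sizes shared RBD storage by shelling out to the exact command logged above. A minimal sketch of the same call, assuming the usual `ceph df --format=json` output shape (a top-level "stats" object carrying cluster-wide byte totals):

    import json
    import subprocess

    # Re-run the command from the log and read the cluster-wide counters.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])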
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.193 349552 DEBUG nova.compute.provider_tree [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.212 349552 DEBUG nova.scheduler.client.report [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
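The inventory dict above is what placement turns into schedulable capacity, using capacity = (total - reserved) * allocation_ratio. Worked through with the logged numbers:

    # Capacity the scheduler can place against, per resource class,
    # taken verbatim from the inventory reported in the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~52.2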
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.293 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.327 349552 INFO nova.scheduler.client.report [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Deleted allocations for instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7
Dec 05 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.386 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Dec 05 02:25:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1129251840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:45 compute-0 nova_compute[349548]: 2025-12-05 02:25:45.293 349552 DEBUG nova.compute.manager [req-66ea741d-9d2f-4a20-bbd1-5052153c7497 req-3cfb8963-9904-41da-a7be-68f2b29f2ed6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-deleted-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 05 02:25:45 compute-0 nova_compute[349548]: 2025-12-05 02:25:45.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:25:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:25:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:25:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:25:46 compute-0 ceph-mon[192914]: pgmap v2302: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Dec 05 02:25:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:25:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:25:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:47 compute-0 nova_compute[349548]: 2025-12-05 02:25:47.744 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.127 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:25:48 compute-0 ceph-mon[192914]: pgmap v2303: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:48 compute-0 podman[469305]: 2025-12-05 02:25:48.706971848 +0000 UTC m=+0.112532131 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 05 02:25:48 compute-0 podman[469306]: 2025-12-05 02:25:48.721960619 +0000 UTC m=+0.124830197 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:25:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:50 compute-0 ceph-mon[192914]: pgmap v2304: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:50 compute-0 nova_compute[349548]: 2025-12-05 02:25:50.374 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
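The skip above is the documented default: nova only soft-deletes (and later reclaims) instances when reclaim_instance_interval is positive, which is also why the instance earlier in this log was destroyed immediately on delete. A stand-alone sketch of that guard with oslo.config, registering the option here only for illustration:

    from oslo_config import cfg

    CONF = cfg.CONF
    # nova registers this option itself (default 0); shown standalone.
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    def _reclaim_queued_deletes():
        # Mirrors the guard in the DEBUG line above.
        if CONF.reclaim_instance_interval <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')

    _reclaim_queued_deletes()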
Dec 05 02:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 05 02:25:52 compute-0 ceph-mon[192914]: pgmap v2305: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 05 02:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 05 02:25:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 05 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 511 B/s wr, 10 op/s
Dec 05 02:25:53 compute-0 ceph-mon[192914]: osdmap e139: 3 total, 3 up, 3 in
Dec 05 02:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:54 compute-0 nova_compute[349548]: 2025-12-05 02:25:54.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 05 02:25:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 05 02:25:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 05 02:25:54 compute-0 ceph-mon[192914]: pgmap v2307: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 511 B/s wr, 10 op/s
Dec 05 02:25:54 compute-0 podman[469351]: 2025-12-05 02:25:54.723830133 +0000 UTC m=+0.129371864 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 05 02:25:54 compute-0 podman[469353]: 2025-12-05 02:25:54.742812456 +0000 UTC m=+0.127571504 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:25:54 compute-0 podman[469352]: 2025-12-05 02:25:54.754832514 +0000 UTC m=+0.151756533 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 02:25:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 639 B/s wr, 9 op/s
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.131 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.134 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 05 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 05 02:25:55 compute-0 ceph-mon[192914]: osdmap e140: 3 total, 3 up, 3 in
Dec 05 02:25:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.324 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764901540.3228314, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.326 349552 INFO nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Stopped (Lifecycle Event)
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.352 349552 DEBUG nova.compute.manager [None req-0dbaae16-d180-419b-a043-f3b8f258d1d5 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.379 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:25:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955854603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.690 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:56 compute-0 ceph-mon[192914]: pgmap v2309: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 639 B/s wr, 9 op/s
Dec 05 02:25:56 compute-0 ceph-mon[192914]: osdmap e141: 3 total, 3 up, 3 in
Dec 05 02:25:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/955854603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.323 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.550 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.637 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:25:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 85 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Dec 05 02:25:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:25:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052801297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.165 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.180 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.202 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.231 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.233 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:25:57 compute-0 ceph-mon[192914]: pgmap v2311: 321 pgs: 321 active+clean; 85 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Dec 05 02:25:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3052801297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:25:58 compute-0 nova_compute[349548]: 2025-12-05 02:25:58.230 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:58 compute-0 nova_compute[349548]: 2025-12-05 02:25:58.231 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:25:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 92 op/s
Dec 05 02:25:59 compute-0 nova_compute[349548]: 2025-12-05 02:25:59.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:25:59 compute-0 podman[158197]: time="2025-12-05T02:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8198 "" "Go-http-client/1.1"
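These two HTTP lines are the podman service answering the prometheus-podman-exporter, which polls the libpod REST API over the unix socket named in its CONTAINER_HOST setting (see the podman_exporter config earlier in this log). A stdlib-only sketch of the same containers/json request; HTTP/1.0 is used so the reply comes back un-chunked:

    import json
    import socket

    # Socket path from the exporter's CONTAINER_HOST environment value.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/run/podman/podman.sock')
    sock.sendall(b'GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n'
                 b'Host: d\r\n\r\n')
    raw = b''
    while chunk := sock.recv(65536):
        raw += chunk
    body = raw.split(b'\r\n\r\n', 1)[1]
    print(len(json.loads(body)), 'containers')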
Dec 05 02:25:59 compute-0 ceph-mon[192914]: pgmap v2312: 321 pgs: 321 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 92 op/s
Dec 05 02:26:00 compute-0 nova_compute[349548]: 2025-12-05 02:26:00.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:00 compute-0 nova_compute[349548]: 2025-12-05 02:26:00.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 05 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:26:01 compute-0 ceph-mon[192914]: pgmap v2313: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 05 02:26:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Dec 05 02:26:02 compute-0 nova_compute[349548]: 2025-12-05 02:26:02.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:03 compute-0 nova_compute[349548]: 2025-12-05 02:26:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 05 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 05 02:26:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 05 02:26:03 compute-0 ceph-mon[192914]: pgmap v2314: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Dec 05 02:26:03 compute-0 ceph-mon[192914]: osdmap e142: 3 total, 3 up, 3 in
Dec 05 02:26:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.3 MiB/s wr, 58 op/s
Dec 05 02:26:05 compute-0 nova_compute[349548]: 2025-12-05 02:26:05.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:05 compute-0 ceph-mon[192914]: pgmap v2316: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.3 MiB/s wr, 58 op/s
Dec 05 02:26:06 compute-0 nova_compute[349548]: 2025-12-05 02:26:06.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:06 compute-0 podman[469453]: 2025-12-05 02:26:06.712060521 +0000 UTC m=+0.106204304 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:26:06 compute-0 podman[469452]: 2025-12-05 02:26:06.727101424 +0000 UTC m=+0.124710064 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 05 02:26:06 compute-0 podman[469455]: 2025-12-05 02:26:06.727991689 +0000 UTC m=+0.109833136 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:26:06 compute-0 podman[469454]: 2025-12-05 02:26:06.769193456 +0000 UTC m=+0.152706610 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:26:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Dec 05 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.081 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.784 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:07 compute-0 ceph-mon[192914]: pgmap v2317: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Dec 05 02:26:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 307 B/s wr, 29 op/s
Dec 05 02:26:09 compute-0 ceph-mon[192914]: pgmap v2318: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 307 B/s wr, 29 op/s
Dec 05 02:26:10 compute-0 nova_compute[349548]: 2025-12-05 02:26:10.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 05 02:26:11 compute-0 ceph-mon[192914]: pgmap v2319: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 05 02:26:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec 05 02:26:12 compute-0 nova_compute[349548]: 2025-12-05 02:26:12.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:13 compute-0 ceph-mon[192914]: pgmap v2320: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec 05 02:26:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Dec 05 02:26:15 compute-0 nova_compute[349548]: 2025-12-05 02:26:15.397 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:15 compute-0 ceph-mon[192914]: pgmap v2321: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
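Annotation: these pgmap summaries repeat every second or two and carry the cluster's capacity and throughput figures; some versions (e.g. v2326 below) omit the rate tail entirely. A small sketch for scraping them out of a journal dump, with the regex written against exactly the field layout shown above:

```python
import re

# Matches the recurring ceph-mgr/ceph-mon pgmap summaries; the rate tail
# ("... rd, ... wr, ... op/s") is optional, as seen in the quieter samples.
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
    r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    r"(?:; (?P<rd>[\d.]+ \S+) rd, (?P<wr>[\d.]+ \S+) wr, (?P<ops>\d+) op/s)?"
)

line = ("pgmap v2321: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, "
        "60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s")
m = PGMAP_RE.search(line)
print(m.group("ver"), m.group("avail"), m.group("rd"))  # 2321 60 GiB 38 KiB/s
```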
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:26:16
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:26:17 compute-0 nova_compute[349548]: 2025-12-05 02:26:17.789 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:17 compute-0 ceph-mon[192914]: pgmap v2322: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:26:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:26:19 compute-0 podman[469537]: 2025-12-05 02:26:19.707634121 +0000 UTC m=+0.115455183 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 05 02:26:19 compute-0 podman[469538]: 2025-12-05 02:26:19.740437592 +0000 UTC m=+0.143099690 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:26:19 compute-0 ceph-mon[192914]: pgmap v2323: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:26:20 compute-0 nova_compute[349548]: 2025-12-05 02:26:20.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 05 02:26:21 compute-0 ceph-mon[192914]: pgmap v2324: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 05 02:26:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 02:26:22 compute-0 nova_compute[349548]: 2025-12-05 02:26:22.792 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:24 compute-0 ceph-mon[192914]: pgmap v2325: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Dec 05 02:26:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:25 compute-0 nova_compute[349548]: 2025-12-05 02:26:25.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:25 compute-0 podman[469577]: 2025-12-05 02:26:25.716233346 +0000 UTC m=+0.119823007 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:26:25 compute-0 podman[469578]: 2025-12-05 02:26:25.728309975 +0000 UTC m=+0.122186373 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=)
Dec 05 02:26:25 compute-0 podman[469579]: 2025-12-05 02:26:25.740864538 +0000 UTC m=+0.123421608 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:26:26 compute-0 ceph-mon[192914]: pgmap v2326: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:27 compute-0 sudo[469635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:27 compute-0 sudo[469635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:27 compute-0 sudo[469635]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
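Annotation: the pg_autoscaler figures above are reproducible arithmetic. Each printed "pg target" equals the pool's capacity ratio times its bias times 300; the 300 is my assumption, consistent with 3 OSDs (the three LVM data devices prepared below) times the default mon_target_pg_per_osd of 100. The subsequent quantization to 1/16/32 additionally involves power-of-two rounding and per-pool minimums, which this sketch deliberately does not model:

```python
# Reproducing the raw pg targets printed by the pg_autoscaler above.
# Assumption: budget = 3 OSDs * mon_target_pg_per_osd (default 100) = 300.
POOL_PG_BUDGET = 300

def raw_pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * POOL_PG_BUDGET

print(raw_pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
print(raw_pg_target(0.0009191400908380543, 1.0))  # 0.2757420272514163   ('images')
print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')
```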
Dec 05 02:26:27 compute-0 sudo[469660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:26:27 compute-0 sudo[469660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:27 compute-0 sudo[469660]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:27 compute-0 sudo[469685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:27 compute-0 sudo[469685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:27 compute-0 sudo[469685]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:27 compute-0 sudo[469710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:26:27 compute-0 sudo[469710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:27 compute-0 nova_compute[349548]: 2025-12-05 02:26:27.795 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:28 compute-0 ceph-mon[192914]: pgmap v2327: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:28 compute-0 sudo[469710]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7428df2d-cd5a-456d-be9b-75c57688bc4f does not exist
Dec 05 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 46673529-1cb0-467d-b38f-60fdebea18fc does not exist
Dec 05 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e105e11f-c350-4062-b99a-ddcec7ac8488 does not exist
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:26:28 compute-0 sudo[469765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:28 compute-0 sudo[469765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:28 compute-0 sudo[469765]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:28 compute-0 sudo[469790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:26:28 compute-0 sudo[469790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:28 compute-0 sudo[469790]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:28 compute-0 sudo[469815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:28 compute-0 sudo[469815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:28 compute-0 sudo[469815]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:26:29 compute-0 sudo[469840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:26:29 compute-0 sudo[469840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.683396786 +0000 UTC m=+0.092197080 container create 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.654153625 +0000 UTC m=+0.062953959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:29 compute-0 podman[158197]: time="2025-12-05T02:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:26:29 compute-0 systemd[1]: Started libpod-conmon-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope.
Dec 05 02:26:29 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.822724409 +0000 UTC m=+0.231524763 container init 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.839184632 +0000 UTC m=+0.247984926 container start 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.845415717 +0000 UTC m=+0.254216061 container attach 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 05 02:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43954 "" "Go-http-client/1.1"
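Annotation: the GET lines from podman[158197] are the podman system service answering prometheus-podman-exporter over the API socket that the podman_exporter config above sets as CONTAINER_HOST (unix:///run/podman/podman.sock). The same query can be issued with nothing but the standard library; a sketch assuming root access to that socket:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects over a UNIX socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")  # host only feeds the Host: header
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

# Same socket the podman_exporter config above points CONTAINER_HOST at.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
print(resp.status, len(resp.read()))  # expect 200 and a few tens of KB, as logged
```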
Dec 05 02:26:29 compute-0 admiring_noether[469917]: 167 167
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.858440253 +0000 UTC m=+0.267240547 container died 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:26:29 compute-0 systemd[1]: libpod-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope: Deactivated successfully.
Dec 05 02:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8190 "" "Go-http-client/1.1"
Dec 05 02:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-712987297a90ba39d12d0209fa2bfc10910b34f2212bf01c7c46faaaf40013a3-merged.mount: Deactivated successfully.
Dec 05 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.938631355 +0000 UTC m=+0.347431619 container remove 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:26:29 compute-0 systemd[1]: libpod-conmon-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope: Deactivated successfully.
Dec 05 02:26:30 compute-0 ceph-mon[192914]: pgmap v2328: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.227716164 +0000 UTC m=+0.091695686 container create 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.186349072 +0000 UTC m=+0.050328644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:30 compute-0 systemd[1]: Started libpod-conmon-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope.
Dec 05 02:26:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
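Annotation: the 0x7fffffff in these xfs warnings is the signed 32-bit time_t limit; the kernel prints this for xfs filesystems created without the bigtime feature, whose inode timestamps stop advancing in January 2038. A quick check of what that limit decodes to:

```python
from datetime import datetime, timezone

# 0x7fffffff seconds after the epoch: the 32-bit signed time_t ceiling
# the kernel is warning about for these xfs mounts.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```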
Dec 05 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.388145659 +0000 UTC m=+0.252125221 container init 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:26:30 compute-0 nova_compute[349548]: 2025-12-05 02:26:30.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.417107812 +0000 UTC m=+0.281087324 container start 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.423198593 +0000 UTC m=+0.287178175 container attach 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 05 02:26:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
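Annotation: per its own error text, openstack_network_exporter resolves daemon PIDs from control socket files. This host only runs ovn_controller (see its health check above), not ovn-northd or the OVN database server, so those lookups can only fail here. A sketch that lists the *.ctl sockets actually present under the two run directories the exporter container mounts (/run/openvswitch and /run/ovn per its volume list above); paths assume you run it on the compute node itself:

```python
from pathlib import Path

# The exporter looks for "<daemon>.<pid>.ctl" control sockets; listing what
# exists under the mounted run directories shows which daemons it can reach.
for rundir in (Path("/run/openvswitch"), Path("/run/ovn")):
    print(rundir, sorted(p.name for p in rundir.glob("*.ctl")))
```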
Dec 05 02:26:31 compute-0 blissful_keller[469959]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:26:31 compute-0 blissful_keller[469959]: --> relative data size: 1.0
Dec 05 02:26:31 compute-0 blissful_keller[469959]: --> All data devices are unavailable
Dec 05 02:26:31 compute-0 systemd[1]: libpod-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Deactivated successfully.
Dec 05 02:26:31 compute-0 systemd[1]: libpod-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Consumed 1.332s CPU time.
Dec 05 02:26:31 compute-0 podman[469943]: 2025-12-05 02:26:31.797139411 +0000 UTC m=+1.661118923 container died 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836-merged.mount: Deactivated successfully.
Dec 05 02:26:31 compute-0 podman[469943]: 2025-12-05 02:26:31.933429549 +0000 UTC m=+1.797409061 container remove 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 02:26:31 compute-0 systemd[1]: libpod-conmon-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Deactivated successfully.
Dec 05 02:26:31 compute-0 sudo[469840]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:32 compute-0 ceph-mon[192914]: pgmap v2329: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:32 compute-0 sudo[470001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:32 compute-0 sudo[470001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:32 compute-0 sudo[470001]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:32 compute-0 sudo[470026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:26:32 compute-0 sudo[470026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:32 compute-0 sudo[470026]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:32 compute-0 sudo[470051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:32 compute-0 sudo[470051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:32 compute-0 sudo[470051]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:32 compute-0 sudo[470076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:26:32 compute-0 sudo[470076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:32 compute-0 nova_compute[349548]: 2025-12-05 02:26:32.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.07708983 +0000 UTC m=+0.092571171 container create a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.042076597 +0000 UTC m=+0.057557978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:33 compute-0 systemd[1]: Started libpod-conmon-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope.
Dec 05 02:26:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.22450652 +0000 UTC m=+0.239987901 container init a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.240201971 +0000 UTC m=+0.255683312 container start a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.247579838 +0000 UTC m=+0.263061179 container attach a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec 05 02:26:33 compute-0 funny_bartik[470154]: 167 167
Dec 05 02:26:33 compute-0 systemd[1]: libpod-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope: Deactivated successfully.
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.250717366 +0000 UTC m=+0.266198727 container died a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-72cea38ead625c29569bf959665300a0873118d82065e5e1e2d5a20433b7cbd8-merged.mount: Deactivated successfully.
Dec 05 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.316997768 +0000 UTC m=+0.332479079 container remove a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:26:33 compute-0 systemd[1]: libpod-conmon-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope: Deactivated successfully.
Dec 05 02:26:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.605561553 +0000 UTC m=+0.087252052 container create b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.576175167 +0000 UTC m=+0.057865646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:33 compute-0 systemd[1]: Started libpod-conmon-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope.
Dec 05 02:26:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.804403417 +0000 UTC m=+0.286093946 container init b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 05 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.8162474 +0000 UTC m=+0.297937889 container start b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.822557937 +0000 UTC m=+0.304248436 container attach b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:26:34 compute-0 ceph-mon[192914]: pgmap v2330: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]: {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     "0": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "devices": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "/dev/loop3"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             ],
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_name": "ceph_lv0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_size": "21470642176",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "name": "ceph_lv0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "tags": {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_name": "ceph",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.crush_device_class": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.encrypted": "0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_id": "0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.vdo": "0"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             },
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "vg_name": "ceph_vg0"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         }
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     ],
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     "1": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "devices": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "/dev/loop4"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             ],
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_name": "ceph_lv1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_size": "21470642176",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "name": "ceph_lv1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "tags": {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_name": "ceph",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.crush_device_class": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.encrypted": "0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_id": "1",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.vdo": "0"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             },
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "vg_name": "ceph_vg1"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         }
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     ],
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     "2": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "devices": [
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "/dev/loop5"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             ],
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_name": "ceph_lv2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_size": "21470642176",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "name": "ceph_lv2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "tags": {
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.cluster_name": "ceph",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.crush_device_class": "",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.encrypted": "0",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osd_id": "2",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:                 "ceph.vdo": "0"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             },
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "type": "block",
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:             "vg_name": "ceph_vg2"
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:         }
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]:     ]
Dec 05 02:26:34 compute-0 elegant_blackburn[470193]: }
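The JSON object above is the stdout of `ceph-volume lvm list --format json`, captured from the short-lived elegant_blackburn helper container: the top-level keys are OSD ids ("0", "1", "2"), each holding a list of logical-volume records. A minimal parsing sketch follows, assuming only the JSON shape shown above (this is not cephadm's own code):

    #!/usr/bin/env python3
    # Minimal sketch: map OSD id -> LV path and OSD fsid from the output of
    # `ceph-volume lvm list --format json` (same shape as the log block above).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(osd_id, lv["lv_path"], tags.get("ceph.osd_fsid", "?"))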
Dec 05 02:26:34 compute-0 systemd[1]: libpod-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope: Deactivated successfully.
Dec 05 02:26:34 compute-0 podman[470177]: 2025-12-05 02:26:34.655044817 +0000 UTC m=+1.136735316 container died b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92-merged.mount: Deactivated successfully.
Dec 05 02:26:34 compute-0 podman[470177]: 2025-12-05 02:26:34.745958581 +0000 UTC m=+1.227649050 container remove b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 02:26:34 compute-0 systemd[1]: libpod-conmon-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope: Deactivated successfully.
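Taken together, the container create/init/start/attach events at 02:26:33 and the died/remove events at 02:26:34 are one throwaway podman run: cephadm executes ceph-volume inside the Ceph image, captures its stdout, and deletes the container as soon as it exits. A hedged sketch of that one-shot pattern, with the image digest copied from the log (the real cephadm invocation also bind-mounts /dev, /etc/ceph, /var/log/ceph and the crash directory, as the xfs remount lines show; those mounts are omitted here, so a trivial in-container command stands in for ceph-volume):

    #!/usr/bin/env python3
    # Sketch of the one-shot helper-container pattern above; not cephadm code.
    # `--rm` is what produces the paired "died" and "remove" podman events.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph", "--version"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())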
Dec 05 02:26:34 compute-0 sudo[470076]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:34 compute-0 sudo[470212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:34 compute-0 sudo[470212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:34 compute-0 sudo[470212]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:35 compute-0 sudo[470237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:26:35 compute-0 sudo[470237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:35 compute-0 sudo[470237]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:35 compute-0 sudo[470262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:35 compute-0 sudo[470262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:35 compute-0 sudo[470262]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:35 compute-0 sudo[470287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:26:35 compute-0 sudo[470287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:35 compute-0 nova_compute[349548]: 2025-12-05 02:26:35.417 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.850669007 +0000 UTC m=+0.087726984 container create f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.816100977 +0000 UTC m=+0.053159014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:35 compute-0 systemd[1]: Started libpod-conmon-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope.
Dec 05 02:26:35 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.985010011 +0000 UTC m=+0.222068038 container init f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.000403913 +0000 UTC m=+0.237461910 container start f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.007955215 +0000 UTC m=+0.245013182 container attach f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:26:36 compute-0 hopeful_bhabha[470364]: 167 167
Dec 05 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.011952127 +0000 UTC m=+0.249010094 container died f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:26:36 compute-0 systemd[1]: libpod-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope: Deactivated successfully.
Dec 05 02:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c1de820f9b2704449172cf3f57ca4ea3a7119546dbccb9f07cda48194777ae-merged.mount: Deactivated successfully.
Dec 05 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.078716812 +0000 UTC m=+0.315774759 container remove f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:26:36 compute-0 ceph-mon[192914]: pgmap v2331: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:36 compute-0 systemd[1]: libpod-conmon-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope: Deactivated successfully.
Dec 05 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.313512227 +0000 UTC m=+0.074333059 container create c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 05 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.277065923 +0000 UTC m=+0.037886765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:26:36 compute-0 systemd[1]: Started libpod-conmon-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope.
Dec 05 02:26:36 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.476204006 +0000 UTC m=+0.237024848 container init c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.508315768 +0000 UTC m=+0.269136610 container start c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.515209292 +0000 UTC m=+0.276030164 container attach c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 05 02:26:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:36 compute-0 ovn_controller[89286]: 2025-12-05T02:26:36Z|00189|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 05 02:26:37 compute-0 nova_compute[349548]: 2025-12-05 02:26:37.110 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:37 compute-0 musing_wright[470403]: {
Dec 05 02:26:37 compute-0 musing_wright[470403]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_id": 0,
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "type": "bluestore"
Dec 05 02:26:37 compute-0 musing_wright[470403]:     },
Dec 05 02:26:37 compute-0 musing_wright[470403]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_id": 1,
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "type": "bluestore"
Dec 05 02:26:37 compute-0 musing_wright[470403]:     },
Dec 05 02:26:37 compute-0 musing_wright[470403]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_id": 2,
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:26:37 compute-0 musing_wright[470403]:         "type": "bluestore"
Dec 05 02:26:37 compute-0 musing_wright[470403]:     }
Dec 05 02:26:37 compute-0 musing_wright[470403]: }
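This second JSON object is the stdout of `ceph-volume raw list --format json` (the exact command is visible in the sudo line at 02:26:35): here the top-level keys are OSD uuids, and each record reports the device-mapper path rather than the LV path. A small cross-check sketch, assuming both listings are taken on the same host (the cv() helper is hypothetical):

    #!/usr/bin/env python3
    # Sketch: verify that `ceph-volume raw list` and `ceph-volume lvm list`
    # agree on the set of OSD fsids. cv() is a made-up convenience wrapper.
    import json
    import subprocess

    def cv(*args):
        out = subprocess.run(
            ["ceph-volume", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    raw = cv("raw", "list")   # keyed by osd_uuid, as in the block above
    lvm = cv("lvm", "list")   # keyed by osd_id
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}
    for osd_uuid, rec in raw.items():
        note = "ok" if osd_uuid in lvm_fsids else "missing from lvm list"
        print(rec["osd_id"], rec["device"], note)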
Dec 05 02:26:37 compute-0 systemd[1]: libpod-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Deactivated successfully.
Dec 05 02:26:37 compute-0 systemd[1]: libpod-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Consumed 1.201s CPU time.
Dec 05 02:26:37 compute-0 podman[470387]: 2025-12-05 02:26:37.710569443 +0000 UTC m=+1.471390255 container died c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:26:37 compute-0 podman[470434]: 2025-12-05 02:26:37.722030295 +0000 UTC m=+0.117613934 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:26:37 compute-0 podman[470429]: 2025-12-05 02:26:37.726659675 +0000 UTC m=+0.131399551 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 05 02:26:37 compute-0 podman[470430]: 2025-12-05 02:26:37.729728911 +0000 UTC m=+0.130313381 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2-merged.mount: Deactivated successfully.
Dec 05 02:26:37 compute-0 podman[470387]: 2025-12-05 02:26:37.776109114 +0000 UTC m=+1.536929916 container remove c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:26:37 compute-0 podman[470433]: 2025-12-05 02:26:37.776643189 +0000 UTC m=+0.165638043 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:26:37 compute-0 systemd[1]: libpod-conmon-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Deactivated successfully.
Dec 05 02:26:37 compute-0 nova_compute[349548]: 2025-12-05 02:26:37.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:37 compute-0 sudo[470287]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:26:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:26:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15087611-b428-4a72-a071-d9d103b888bc does not exist
Dec 05 02:26:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0fe80d39-c08d-4c04-a278-fa03667be193 does not exist
Dec 05 02:26:37 compute-0 sudo[470530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:26:37 compute-0 sudo[470530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:37 compute-0 sudo[470530]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:38 compute-0 sudo[470555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:26:38 compute-0 sudo[470555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:26:38 compute-0 sudo[470555]: pam_unix(sudo:session): session closed for user root
Dec 05 02:26:38 compute-0 ceph-mon[192914]: pgmap v2332: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.328 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so this polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
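The registration, discovery, and skip lines above trace one complete polling cycle: each pollster is handed to a shared ThreadPoolExecutor, the local_instances discovery runs once per cycle into a shared discovery cache, and any pollster whose discovery yields nothing is skipped before its samples are ever collected. A minimal sketch of that control flow, using hypothetical stand-ins (run_pollster, discover_local_instances) rather than ceilometer's real AgentManager plumbing:

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    LOG = logging.getLogger("polling-sketch")

    def discover_local_instances():
        # No instances run on this host, so discovery returns an empty list.
        return []

    def run_pollster(name, discovery_cache):
        # Discovery results are cached per method, so every pollster in the
        # same cycle sees the same (here: empty) resource list.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            return
        # A real pollster would build samples from `resources` here.

    cache = {}
    for name in ("disk.device.usage", "power.state", "cpu"):
        run_pollster(name, cache)

In the agent these calls are fanned out over the ThreadPoolExecutor named in the registration lines; the sequential loop above keeps the sketch deterministic.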
Dec 05 02:26:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:40 compute-0 ceph-mon[192914]: pgmap v2333: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
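Each pgmap line is a fixed-format cluster health summary: placement-group count and states, logical data stored, raw space used, and raw space available. A small sketch that pulls those fields out of the exact text logged here (the regex targets this one message layout, not Ceph output in general):

    import re

    line = ("pgmap v2333: 321 pgs: 321 active+clean; "
            "57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: "
        r"(?P<clean>\d+) active\+clean; (?P<data>\S+ \S+) data, "
        r"(?P<used>\S+ \S+) used, (?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
        line)
    if m:
        # {'version': '2333', 'pgs': '321', 'clean': '321',
        #  'data': '57 MiB', 'used': '279 MiB', ...}
        print(m.groupdict())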
Dec 05 02:26:40 compute-0 nova_compute[349548]: 2025-12-05 02:26:40.421 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:42 compute-0 ceph-mon[192914]: pgmap v2334: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:42 compute-0 nova_compute[349548]: 2025-12-05 02:26:42.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:44 compute-0 ceph-mon[192914]: pgmap v2335: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:26:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:26:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:26:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:26:45 compute-0 nova_compute[349548]: 2025-12-05 02:26:45.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:46 compute-0 ceph-mon[192914]: pgmap v2336: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:26:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
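The handle_command / audit pairs show the client.openstack cephx user at 192.168.122.10 dispatching JSON-framed monitor commands: 'df' for cluster usage and 'osd pool get-quota' for the volumes pool, the kind of capacity check a client such as the Cinder RBD driver performs periodically. The same commands can be sent from Python with the rados binding; a sketch, assuming /etc/ceph/ceph.conf and a readable client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON command string and an input buffer,
            # and returns (return code, output bytes, status string).
            ret, outbuf, status = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], "->", ret, outbuf[:80])
    finally:
        cluster.shutdown()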
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:26:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:47 compute-0 nova_compute[349548]: 2025-12-05 02:26:47.806 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:48 compute-0 ceph-mon[192914]: pgmap v2337: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:50 compute-0 ceph-mon[192914]: pgmap v2338: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.403 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.404 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.404 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.429 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
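The "Running periodic task ComputeManager._heal_instance_info_cache" line comes from oslo.service's periodic-task machinery: methods on a PeriodicTasks subclass are decorated with an interval, and the service loop repeatedly calls run_periodic_tasks(), which dispatches whichever tasks are due. A minimal, self-contained sketch of that decorator pattern (MyManager and _heal_info_cache are illustrative names, not nova code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class MyManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # Fires once its 60s spacing has elapsed between calls to
            # run_periodic_tasks(); the dispatcher logs the DEBUG line
            # seen above each time it runs a task.
            print("healing instance info cache")

    mgr = MyManager()
    mgr.run_periodic_tasks(context=None)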
Dec 05 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:50 compute-0 podman[470583]: 2025-12-05 02:26:50.731584498 +0000 UTC m=+0.126101342 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:26:50 compute-0 podman[470582]: 2025-12-05 02:26:50.743104152 +0000 UTC m=+0.142043660 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 05 02:26:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:52 compute-0 ceph-mon[192914]: pgmap v2339: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:52 compute-0 nova_compute[349548]: 2025-12-05 02:26:52.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:53 compute-0 nova_compute[349548]: 2025-12-05 02:26:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:53 compute-0 nova_compute[349548]: 2025-12-05 02:26:53.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:26:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:54 compute-0 ceph-mon[192914]: pgmap v2340: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:55 compute-0 nova_compute[349548]: 2025-12-05 02:26:55.436 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:56 compute-0 nova_compute[349548]: 2025-12-05 02:26:56.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:56 compute-0 nova_compute[349548]: 2025-12-05 02:26:56.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.230 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.230 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
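The Acquiring / acquired / released triplet above is oslo.concurrency's standard DEBUG trace around a named critical section, emitted by the 'inner' wrapper that lockutils places around the decorated function. A short sketch of the same pattern, reusing the lock name from the log (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # While this body runs, any other caller decorated with the same
        # lock name in this process blocks; the acquire/release pair is
        # logged at DEBUG, including how long the lock was waited on
        # and held.
        pass

    check_child_processes()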
Dec 05 02:26:56 compute-0 ceph-mon[192914]: pgmap v2341: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:56 compute-0 podman[470625]: 2025-12-05 02:26:56.733481686 +0000 UTC m=+0.136611778 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec 05 02:26:56 compute-0 podman[470626]: 2025-12-05 02:26:56.736785778 +0000 UTC m=+0.133656324 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 02:26:56 compute-0 podman[470624]: 2025-12-05 02:26:56.75038803 +0000 UTC m=+0.159052098 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:26:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:26:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:26:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954224562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.613 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
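The resource audit shells out to the ceph CLI through oslo.concurrency's processutils, which logs the "Running cmd (subprocess)" / "CMD ... returned" pair along with the exit code and elapsed time (0.503s here). A sketch of the same call, assuming the ceph CLI and the client.openstack credentials are present on the host:

    from oslo_concurrency import processutils

    # Mirrors the audited command; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out[:120])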
Dec 05 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.812 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.112 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.113 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.113 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.114 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.197 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.198 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.220 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:26:58 compute-0 ceph-mon[192914]: pgmap v2342: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2954224562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:26:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:26:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:26:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702320788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.806 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:26:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.819 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.842 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
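
The inventory dict above is what the resource tracker hands to placement; the capacity placement actually schedules against is (total - reserved) x allocation_ratio per resource class. A worked check against the logged numbers:

    # Usable capacity derived from the inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
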
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.844 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.845 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:26:59 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2702320788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:26:59 compute-0 podman[158197]: time="2025-12-05T02:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8189 "" "Go-http-client/1.1"
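
The two podman lines above are REST calls against the libpod API (a poller listing containers and fetching stats over the service socket). A minimal sketch of the same containers/json request; the UnixHTTPConnection helper is hand-rolled, and the socket path is taken from the podman_exporter config recorded later in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (hand-rolled helper)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))
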
Dec 05 02:26:59 compute-0 nova_compute[349548]: 2025-12-05 02:26:59.842 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:00 compute-0 ceph-mon[192914]: pgmap v2343: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.441 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:01 compute-0 ceph-mon[192914]: pgmap v2344: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
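
The exporter errors above all reduce to the same condition: no *.ctl control socket files under the run directories it scans, so it cannot appctl into ovsdb-server, ovn-northd, or the userspace datapath. A quick existence check on the host, using the two directories the exporter's config (recorded further down in this log) mounts in:

    from pathlib import Path

    # The exporter container mounts /var/run/openvswitch -> /run/openvswitch
    # and /var/lib/openvswitch/ovn -> /run/ovn (see its config_data below).
    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        print(d, sorted(p.name for p in Path(d).glob("*.ctl")))
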
Dec 05 02:27:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:02 compute-0 nova_compute[349548]: 2025-12-05 02:27:02.815 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:03 compute-0 ceph-mon[192914]: pgmap v2345: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:04 compute-0 nova_compute[349548]: 2025-12-05 02:27:04.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:05 compute-0 nova_compute[349548]: 2025-12-05 02:27:05.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:05 compute-0 ceph-mon[192914]: pgmap v2346: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:07 compute-0 nova_compute[349548]: 2025-12-05 02:27:07.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:07 compute-0 ceph-mon[192914]: pgmap v2347: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:08 compute-0 podman[470723]: 2025-12-05 02:27:08.715070395 +0000 UTC m=+0.111779080 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:27:08 compute-0 podman[470725]: 2025-12-05 02:27:08.72841985 +0000 UTC m=+0.114439605 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec 05 02:27:08 compute-0 podman[470722]: 2025-12-05 02:27:08.75405182 +0000 UTC m=+0.156739253 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 05 02:27:08 compute-0 podman[470724]: 2025-12-05 02:27:08.767314813 +0000 UTC m=+0.158233406 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
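
The four health_status events above are podman executing each container's configured healthcheck command (the 'healthcheck' entry in the logged config_data). The same check can be triggered by hand; exit status 0 is what podman records as health_status=healthy:

    import subprocess

    # Re-run the configured healthcheck for one of the containers above.
    r = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if r.returncode == 0 else "unhealthy")
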
Dec 05 02:27:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:09 compute-0 ceph-mon[192914]: pgmap v2348: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:10 compute-0 nova_compute[349548]: 2025-12-05 02:27:10.451 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:11 compute-0 ceph-mon[192914]: pgmap v2349: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:12 compute-0 sshd-session[470808]: Accepted publickey for zuul from 192.168.122.10 port 55824 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 02:27:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:12 compute-0 nova_compute[349548]: 2025-12-05 02:27:12.822 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:12 compute-0 systemd-logind[792]: New session 64 of user zuul.
Dec 05 02:27:12 compute-0 systemd[1]: Started Session 64 of User zuul.
Dec 05 02:27:12 compute-0 sshd-session[470808]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:27:13 compute-0 sudo[470812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 05 02:27:13 compute-0 sudo[470812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
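
The sudo line above records the diagnostics collection kicked off over the zuul SSH session; the ceph/mds/mgr command dispatches that follow are sos walking the cluster. A sketch wrapping the same invocation (command string verbatim from the log):

    import subprocess

    subprocess.run(
        "rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && "
        "sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp "
        "-p container,openstack_edpm,system,storage,virt",
        shell=True, check=True,
    )
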
Dec 05 02:27:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:13 compute-0 ceph-mon[192914]: pgmap v2350: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:15 compute-0 nova_compute[349548]: 2025-12-05 02:27:15.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:15 compute-0 ceph-mon[192914]: pgmap v2351: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:27:16
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', '.mgr', 'vms', 'volumes']
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15545 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15547 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:17 compute-0 nova_compute[349548]: 2025-12-05 02:27:17.825 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 05 02:27:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078030792' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:27:17 compute-0 ceph-mon[192914]: from='client.15545 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:17 compute-0 ceph-mon[192914]: pgmap v2352: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:17 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1078030792' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:27:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:18 compute-0 ceph-mon[192914]: from='client.15547 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:20 compute-0 ceph-mon[192914]: pgmap v2353: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:20 compute-0 nova_compute[349548]: 2025-12-05 02:27:20.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:21 compute-0 ovs-vsctl[471091]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
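
The ovs-vsctl ERR above is the normal failure mode of `get Open_vSwitch . other_config:dpdk-init` when the key was never set (DPDK not initialised on this node). Passing --if-exists turns the missing key into an empty result instead of an error; a sketch:

    import subprocess

    # Same query as the failing ovs-vsctl call above, made non-fatal.
    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get",
         "Open_vSwitch", ".", "other_config:dpdk-init"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print("dpdk-init:", out or "<unset>")
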
Dec 05 02:27:21 compute-0 podman[471098]: 2025-12-05 02:27:21.734310825 +0000 UTC m=+0.141327671 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:27:21 compute-0 podman[471100]: 2025-12-05 02:27:21.751323089 +0000 UTC m=+0.154102252 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:27:22 compute-0 ceph-mon[192914]: pgmap v2354: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:22 compute-0 nova_compute[349548]: 2025-12-05 02:27:22.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:22 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 05 02:27:22 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 05 02:27:23 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 02:27:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:23 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: cache status {prefix=cache status} (starting...)
Dec 05 02:27:23 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: client ls {prefix=client ls} (starting...)
Dec 05 02:27:24 compute-0 lvm[471450]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 02:27:24 compute-0 ceph-mon[192914]: pgmap v2355: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:24 compute-0 lvm[471447]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 02:27:24 compute-0 lvm[471447]: VG ceph_vg0 finished
Dec 05 02:27:24 compute-0 lvm[471450]: VG ceph_vg2 finished
Dec 05 02:27:24 compute-0 lvm[471493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 02:27:24 compute-0 lvm[471493]: VG ceph_vg1 finished
Dec 05 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: damage ls {prefix=damage ls} (starting...)
Dec 05 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump loads {prefix=dump loads} (starting...)
Dec 05 02:27:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 05 02:27:24 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15551 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 05 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 05 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 05 02:27:25 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:25 compute-0 nova_compute[349548]: 2025-12-05 02:27:25.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 05 02:27:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 05 02:27:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396425913' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 05 02:27:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:27:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277704271' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: ops {prefix=ops} (starting...)
Dec 05 02:27:26 compute-0 ceph-mon[192914]: pgmap v2356: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:26 compute-0 ceph-mon[192914]: from='client.15551 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3396425913' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2277704271' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:26.212+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:27:26 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 05 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289709011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 05 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324153540' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: session ls {prefix=session ls} (starting...)
Dec 05 02:27:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: status {prefix=status} (starting...)
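
The asok_command lines above show the MDS admin socket being walked (cache status, ops, session ls, ...), consistent with the sos report started earlier. On the node hosting the daemon the same commands can be issued with `ceph daemon`, using the MDS name from the log; a sketch (under cephadm this may need to run inside `cephadm shell`):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "daemon", "mds.cephfs.compute-0.ksxtqc", "session", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))
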
Dec 05 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 05 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857087708' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 05 02:27:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935095675' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3289709011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2324153540' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2857087708' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3935095675' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.193540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647193570, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1277, "num_deletes": 253, "total_data_size": 1926638, "memory_usage": 1959536, "flush_reason": "Manual Compaction"}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647205610, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1896585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47337, "largest_seqno": 48613, "table_properties": {"data_size": 1890458, "index_size": 3394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13116, "raw_average_key_size": 20, "raw_value_size": 1878084, "raw_average_value_size": 2880, "num_data_blocks": 152, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901524, "oldest_key_time": 1764901524, "file_creation_time": 1764901647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12114 microseconds, and 4638 cpu microseconds.
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.205655) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1896585 bytes OK
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.205670) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207232) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207245) EVENT_LOG_v1 {"time_micros": 1764901647207241, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1920830, prev total WAL file size 1920830, number of live WAL files 2.
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.208368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1852KB)], [113(8977KB)]
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647208389, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11089471, "oldest_snapshot_seqno": -1}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6252 keys, 9317781 bytes, temperature: kUnknown
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647258310, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9317781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9276629, "index_size": 24402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163330, "raw_average_key_size": 26, "raw_value_size": 9164185, "raw_average_value_size": 1465, "num_data_blocks": 970, "num_entries": 6252, "num_filter_entries": 6252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.258484) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9317781 bytes
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.260202) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.9 rd, 186.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.8 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(10.8) write-amplify(4.9) OK, records in: 6773, records dropped: 521 output_compression: NoCompression
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.260217) EVENT_LOG_v1 {"time_micros": 1764901647260210, "job": 68, "event": "compaction_finished", "compaction_time_micros": 49972, "compaction_time_cpu_micros": 19571, "output_level": 6, "num_output_files": 1, "total_output_size": 9317781, "num_input_records": 6773, "num_output_records": 6252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647260602, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647262019, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.208266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
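
The JOB 68 summary above can be reproduced from the sizes in its own event lines: table #115 (the fresh L0 flush) plus table #113 (the existing L6 file) compact into table #116. A worked check of the amplification and throughput figures:

    l0_in = 1_896_585            # L0 input, table #115 (flushed above)
    total_in = 11_089_471        # input_data_size from compaction_started
    out = 9_317_781              # L6 output, table #116
    secs = 49_972e-6             # compaction_time_micros

    print(round(out / l0_in, 1))               # 4.9   write-amplify
    print(round((total_in + out) / l0_in, 1))  # 10.8  read-write-amplify
    print(round(total_in / secs / 1e6, 1))     # 221.9 MB/s rd
    print(round(out / secs / 1e6, 1))          # 186.5 MB/s wr
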
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15571 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
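
Each pg_autoscaler pair above multiplies out exactly: pg target = capacity ratio x bias x 300, before quantization to the nearest allowed PG count. The 300 is inferred, not logged; it is consistent with the default mon_target_pg_per_osd of 100 across the three loop-device OSDs (ceph_vg0..2) activated earlier in this log. A worked check against three of the logged lines:

    # Reproduce three logged pg targets; 300 = assumed
    # mon_target_pg_per_osd (100) * 3 OSDs.
    ROOT_PG_TARGET = 300
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.0009191400908380543, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * ROOT_PG_TARGET)
    # -> 0.0021557249951162337, 0.2757420272514163, 0.0006104707950771635
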
Dec 05 02:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 05 02:27:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801306621' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:27:27 compute-0 podman[471954]: 2025-12-05 02:27:27.667324586 +0000 UTC m=+0.079690923 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:27:27 compute-0 podman[471951]: 2025-12-05 02:27:27.675660618 +0000 UTC m=+0.086407308 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:27:27 compute-0 podman[471943]: 2025-12-05 02:27:27.685871194 +0000 UTC m=+0.108072856 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Dec 05 02:27:27 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:27 compute-0 nova_compute[349548]: 2025-12-05 02:27:27.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 05 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441343144' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mon[192914]: pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/801306621' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3441343144' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 05 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173665859' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071045181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 05 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760973472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 02:27:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 05 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982683381' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15587 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:29.174+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 02:27:29 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.15571 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1173665859' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3071045181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/760973472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3982683381' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 05 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835647' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 05 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326692733' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 02:27:29 compute-0 podman[158197]: time="2025-12-05T02:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec 05 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 05 02:27:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479113107' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 02:27:30 compute-0 ceph-mon[192914]: pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:30 compute-0 ceph-mon[192914]: from='client.15587 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1835647' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:27:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/326692733' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 02:27:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2479113107' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 02:27:30 compute-0 nova_compute[349548]: 2025-12-05 02:27:30.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:44.868431+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:45.868802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:46.869079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:47.869421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:48.869827+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:49.870087+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:50.870450+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:51.871331+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:52.871684+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:53.872063+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:54.872462+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:55.872772+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:56.873062+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:57.873373+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:58.873671+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:59.873940+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:00.874136+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:01.874392+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:02.874624+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:03.874814+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:04.875039+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:05.875229+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:06.875437+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43980c960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf000 session 0x55c43980d680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439965680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c439562000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:07.875737+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.267883301s of 48.293380737s, submitted: 8
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 3317760 heap: 113770496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c43980d2c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43967a780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cec00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cec00 session 0x55c439c74d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439c74b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c4378c34a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c439e16b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43ac4dc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c439892c00 session 0x55c439345e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:08.876030+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 10199040 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c43ac4cb40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c43a54f860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c439866960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c4398a9680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:09.876312+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 10190848 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:10.876582+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 10190848 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:11.876836+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 10117120 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4398b2400 session 0x55c439101680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464999 data_alloc: 234881024 data_used: 25731072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:12.877146+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c437aaf860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 10117120 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c439344b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c43ac4cf00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:13.877379+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 9003008 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:14.877606+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:15.877861+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:16.878044+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1474021 data_alloc: 234881024 data_used: 26497024
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:17.878278+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 111722496 unmapped: 8347648 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:18.878535+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 5529600 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:19.878811+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:20.879541+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:21.880056+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:22.880484+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:23.880867+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:24.881195+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:25.881421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:26.881642+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:27.882083+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:28.882422+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:29.882700+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:30.882965+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:31.883370+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 5513216 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:32.883612+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 5513216 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:33.883808+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.994903564s of 26.165307999s, submitted: 27
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 5406720 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:34.884086+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 5324800 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:35.884465+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:36.884804+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:37.885156+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:38.885413+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:39.885683+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:40.886051+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:41.886739+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:42.887079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:43.887452+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:44.887846+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:45.888223+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:46.888519+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 5685248 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:47.888975+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 5677056 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:48.889212+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 5677056 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:49.889594+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 5644288 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:50.889785+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 5554176 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.991294861s of 17.457975388s, submitted: 90
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:51.890353+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 4440064 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:52.890604+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1553143 data_alloc: 251658240 data_used: 30351360
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 4407296 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:53.890944+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e37000/0x0/0x4ffc00000, data 0x3b83df1/0x3c47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:54.891242+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:55.891457+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:56.892136+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:57.892486+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:58.892773+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:59.893053+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:00.893414+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:01.894305+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:02.895267+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 4308992 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:03.895651+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 4308992 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:04.896140+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:05.896455+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:06.896847+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:07.897164+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:08.897450+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:09.897683+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:10.897969+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:11.898272+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:12.898579+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:13.898872+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:14.899247+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:15.899502+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:16.900012+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:17.900634+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:18.900875+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:19.901134+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:20.901463+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:21.901738+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 4276224 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:22.902135+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 4276224 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:23.902500+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.652729034s of 32.879589081s, submitted: 50
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:24.903039+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:25.903504+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:26.903850+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:27.904124+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:28.904527+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:29.905183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:30.905556+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:31.906005+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:32.906523+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:33.907282+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:34.907664+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:35.908193+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:36.908410+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:37.908743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:38.908963+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:39.909329+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:40.909707+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:41.910152+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:42.910449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:43.910837+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:44.911234+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:45.911574+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:46.912055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:48.527611+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:49.528129+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:50.528484+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:51.528692+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:52.529163+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:53.529405+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:54.529759+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:55.530142+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:56.530554+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:57.531035+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:58.531340+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:59.531693+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:00.531958+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:01.532163+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:02.532386+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:03.532719+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:04.533127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:05.533482+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:06.533741+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:07.534124+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:08.534601+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:09.535109+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:10.535503+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:11.535738+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:12.536059+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:13.536462+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:14.536981+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:15.537307+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:16.537866+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:17.538384+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:18.538743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:19.539221+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:20.539601+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:21.540056+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:22.540453+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:23.540775+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:24.541055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:25.541312+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:26.541695+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:27.541984+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:28.542249+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:29.542579+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:30.542951+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:31.543308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:32.543615+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:33.544067+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:34.544405+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:35.544791+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:36.545169+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:37.545500+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:38.545852+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:39.546216+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:40.546613+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:41.547087+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:42.547455+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:43.547852+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:44.548063+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:45.548349+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:46.548760+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:47.549104+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:48.549588+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:49.550056+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:50.550370+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:51.550770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:52.551206+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:53.551594+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:54.552079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:55.552505+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:56.552980+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:57.553422+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:58.553691+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:59.553994+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:00.554184+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:01.554357+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:02.554587+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:03.554958+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:04.555369+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:05.555720+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:06.556098+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:07.556418+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:08.556623+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:09.557008+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:10.557345+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:11.557724+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:12.558180+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:13.558413+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:14.558686+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:15.559325+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:16.559549+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:17.560066+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:18.560401+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:19.560676+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:20.561023+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:21.561285+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:22.561734+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:23.561980+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:24.562449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:25.562756+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:26.563123+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:27.563412+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:28.563743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:29.564029+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:30.564365+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:31.564634+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:32.565176+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:33.565520+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:34.565767+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:35.566032+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:36.566353+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:37.566602+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:38.566785+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:39.567065+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:40.567321+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:41.567646+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:42.567969+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:43.568160+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:44.568510+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:45.569094+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:46.569459+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:47.569836+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:48.570239+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:49.570659+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:50.571030+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:51.571262+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:52.571717+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:53.571958+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:54.572291+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:55.572578+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:56.573065+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:57.573317+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef400 session 0x55c4398c2f00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 153.511062622s of 153.552902222s, submitted: 2
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399f1000 session 0x55c439344780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee800 session 0x55c43912a780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:58.574203+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354486 data_alloc: 234881024 data_used: 19447808
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439c75e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:59.575347+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:00.575721+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:01.576089+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:02.576481+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:03.577015+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:04.577184+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:05.577442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets getting new tickets!
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:06.578178+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _finish_auth 0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:06.579309+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:07.578478+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:08.579148+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:09.579496+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:10.580155+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:11.580393+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:12.580735+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:13.581121+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:14.581510+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:15.581711+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:16.582128+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:17.582395+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:18.582776+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:19.583112+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:20.583446+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:21.583833+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:22.584457+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:23.585359+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:24.585622+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:25.586024+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:26.586420+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:27.586631+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:28.587004+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:29.587351+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:30.587659+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:31.587910+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:32.588443+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:33.588774+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:34.589125+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:35.589555+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:36.589827+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:37.590279+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:38.590694+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:39.591137+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:40.591469+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:41.591825+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:42.592433+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:43.592711+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:44.593141+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:45.593551+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:46.593759+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:47.594180+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:48.594474+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:49.594767+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:50.595085+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:51.595435+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:52.595688+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:53.595912+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:54.596214+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:55.596563+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:56.596808+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:57.597125+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:58.597517+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:59.597774+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:00.598113+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:01.598362+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:02.598828+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:03.599049+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:04.599395+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:05.599747+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:06.600088+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:07.600526+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:08.600784+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:09.601048+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:10.601765+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:11.602103+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:12.602480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:13.602645+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:14.602951+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:15.603203+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:16.603404+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:17.603811+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:18.604215+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:19.604521+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:20.605047+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:21.605474+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:22.605812+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:23.606294+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:24.607275+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:25.607599+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:26.607805+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43989b000 session 0x55c43986c3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:27.608150+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:28.608392+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:29.609001+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:30.609363+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:31.609782+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:32.610259+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:33.610802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:34.611132+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:35.611476+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:36.611872+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:37.612304+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:38.612699+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:39.613127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:40.613490+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:41.613766+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:42.614194+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:43.614423+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:44.614858+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:45.615243+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:46.615625+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:47.616080+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:48.616442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:49.616825+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:50.617135+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:51.617480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:52.618016+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:53.618429+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:54.618697+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:55.619173+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:56.619575+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:57.619778+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399efc00 session 0x55c439345860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c437982c00 session 0x55c4399cde00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf800 session 0x55c439344000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.342414856s of 120.453239441s, submitted: 19
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:58.620012+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c437982c00 session 0x55c43912a960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:59.620291+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:00.620531+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:01.620719+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:02.621025+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:03.621277+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:04.621637+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:05.621988+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:06.622256+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:07.622661+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:08.623069+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:09.623422+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:10.623775+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:11.624129+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:12.624543+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:13.625117+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:14.625345+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:15.625725+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:16.626061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:17.626447+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:18.626682+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:19.627061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:20.627354+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:21.627751+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:22.628154+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:23.628515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.429254532s of 25.455911636s, submitted: 4
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:24.629006+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304964 data_alloc: 218103808 data_used: 16261120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:25.629290+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:26.629497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9183000/0x0/0x4ffc00000, data 0x2838da2/0x28fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:27.629963+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9183000/0x0/0x4ffc00000, data 0x2838da2/0x28fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:28.630421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 31416320 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: get_auth_request con 0x55c43652d400 auth_method 0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c4398c25a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:29.630743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363204 data_alloc: 218103808 data_used: 16269312
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:30.631130+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:31.631510+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:32.631863+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:33.632307+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:34.632721+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363204 data_alloc: 218103808 data_used: 16269312
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:35.633106+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:36.633323+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c4398a8000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43912b680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ef400 session 0x55c43a54e1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:37.633526+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.135446548s of 14.344388962s, submitted: 25
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c4398d41e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c4398c2b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:38.633841+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 24363008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c4399cd680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:39.634099+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c4397dbe00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 23379968 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ef800 session 0x55c43911a780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c439563680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c439964f00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c436d8d0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434785 data_alloc: 234881024 data_used: 23072768
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:40.634533+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43967a960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 23371776 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438d52000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52000 session 0x55c4373b7a40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c43802b0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c437bdf0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c4000/0x0/0x4ffc00000, data 0x35f2416/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:41.634940+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 23371776 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:42.635637+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c43806da40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:43.635872+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438d52400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438d52800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:44.636307+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435525 data_alloc: 234881024 data_used: 23080960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:45.636572+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:46.636774+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 23289856 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:47.637137+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:48.637499+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:49.637703+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:50.638007+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:51.638354+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:52.638791+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:53.638993+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:54.639341+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:55.639653+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:56.640121+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:57.640489+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:58.640726+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:59.641002+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:00.641235+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:01.641500+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43806d680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52400 session 0x55c4397db4a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52800 session 0x55c439101680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.625425339s of 23.886270523s, submitted: 40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:02.641979+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 23879680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c437953c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:03.642324+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 24305664 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:04.642774+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 24305664 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438c37800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:05.643034+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387971 data_alloc: 234881024 data_used: 23068672
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 24297472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 ms_handle_reset con 0x55c438c37800 session 0x55c43965e1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:06.643260+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:07.644086+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:08.644380+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:09.644602+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:10.644831+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318945 data_alloc: 218103808 data_used: 16277504
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:11.645046+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:12.645265+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:13.645480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:14.645779+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.102030754s of 12.603260040s, submitted: 82
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:15.646010+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f917a000/0x0/0x4ffc00000, data 0x283df30/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322943 data_alloc: 218103808 data_used: 16285696
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:16.646177+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:17.646531+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:18.646810+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:19.647199+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:20.647547+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323784 data_alloc: 218103808 data_used: 16285696
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x283df30/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:21.648012+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:22.648449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 ms_handle_reset con 0x55c4399ee800 session 0x55c437172d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:23.648852+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:24.649111+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 heartbeat osd_stat(store_statfs(0x4f89ea000/0x0/0x4ffc00000, data 0x2fcb4d0/0x3093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:25.649438+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381830 data_alloc: 218103808 data_used: 16293888
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:26.649635+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:27.650044+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 heartbeat osd_stat(store_statfs(0x4f89ea000/0x0/0x4ffc00000, data 0x2fcb4d0/0x3093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:28.650239+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:29.650589+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.494970322s of 15.655971527s, submitted: 28
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:30.651073+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380284 data_alloc: 218103808 data_used: 16293888
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 ms_handle_reset con 0x55c4399efc00 session 0x55c439867a40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:31.651491+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:32.651989+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:33.652435+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:34.652870+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 ms_handle_reset con 0x55c4398b3000 session 0x55c4398bda40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:35.653249+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332618 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:36.653447+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:37.653662+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:38.654088+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:39.654310+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:40.654662+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:41.654975+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:42.655361+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:43.655571+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:44.663995+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:45.687291+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:46.687479+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:47.687733+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:48.688055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:49.688279+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:50.688650+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:51.689144+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:52.690096+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:53.690436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:54.690822+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:55.691239+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:56.691626+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:57.692132+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:58.692538+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:59.693031+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:00.693488+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:01.694013+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:02.694445+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:03.694719+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:04.695097+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:05.695378+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:06.695979+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:07.696367+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:08.696725+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:09.697298+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:10.697557+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:11.698082+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:12.698532+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:13.698999+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:14.699355+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:15.699609+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:16.700065+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:17.700418+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:18.700715+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:19.700945+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:20.701308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:21.701604+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:22.702071+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:23.702357+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:24.702606+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:25.703007+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:26.703335+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:27.703708+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:28.704034+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:29.704571+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:30.704949+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:31.705339+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:32.705836+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:33.706381+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:34.706798+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:35.707545+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:36.708264+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:37.708558+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:38.709106+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:39.709495+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:40.709971+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:41.710426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:42.710781+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:43.711169+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:44.711495+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:45.712015+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:46.712222+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:47.712601+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:48.712854+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:49.713226+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:50.713993+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:51.714347+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:52.714689+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:53.715105+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:54.715497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:55.716025+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:56.716433+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:57.716824+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:58.717235+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:59.717436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:00.717693+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:01.718027+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:02.718301+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:03.718555+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:04.718723+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7411 writes, 29K keys, 7411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7411 writes, 1632 syncs, 4.54 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 761 writes, 2337 keys, 761 commit groups, 1.0 writes per commit group, ingest: 1.65 MB, 0.00 MB/s
                                            Interval WAL: 761 writes, 334 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:05.719090+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:06.719511+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:07.719738+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:08.719970+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:09.720337+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:10.720709+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:11.721140+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:12.721621+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:13.722073+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:14.722223+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:15.722621+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:16.722829+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:17.723087+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:18.723326+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:19.723550+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:20.723864+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:21.724274+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:22.724726+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:23.725323+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:24.725602+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:25.726110+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:26.726486+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:27.726975+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:28.727324+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:29.727573+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:30.727810+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:31.728031+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:32.728312+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:33.728606+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:34.729051+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:35.729226+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:36.729553+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:37.729838+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:38.730105+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:39.730413+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:40.730781+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:41.731126+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:42.731626+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:43.732045+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:44.732342+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:45.732678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:46.732969+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:47.733396+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:48.733646+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:49.734092+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:50.734395+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:51.734744+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:52.735152+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:53.735525+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:54.735995+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:55.736434+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:56.736877+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:57.737463+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:58.737854+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:59.738257+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:00.738625+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:01.739204+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:02.739485+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:03.739836+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:04.740303+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:05.740641+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:06.741437+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:07.741707+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:08.742107+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:09.742426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:10.742853+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:11.743193+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:12.743635+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:13.744246+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:14.744599+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:15.745116+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:16.745436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:17.745712+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:18.746080+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:19.746336+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:20.746632+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:21.746853+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:22.747426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:23.747756+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c43a4cf400 session 0x55c43a54e780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c4397da1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.469711304s of 174.075759888s, submitted: 52
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c43986dc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:24.748172+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:25.748575+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b3000 session 0x55c4399641e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:26.749077+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:27.749381+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:28.749809+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:29.750196+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:30.750583+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:31.751064+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.751573+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.751810+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.715733528s of 10.014011383s, submitted: 44
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 34160640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.752314+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.752705+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.753081+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.753430+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.753749+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.753994+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.754272+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.754557+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.754913+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.755195+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.755609+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2c00 session 0x55c43990bc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c437982c00 session 0x55c4398c2d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4399f0c00 session 0x55c4398c2780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585576057s of 11.133249283s, submitted: 76
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.756105+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 37683200 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c4373285a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.756392+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.756630+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.756970+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.757175+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.757603+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.757856+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.758359+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.758831+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.759262+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.759772+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.760171+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.760599+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.761145+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.761575+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.761970+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.762370+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.762770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.763156+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.763391+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.763819+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.764209+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.764651+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.765164+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.765560+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.766160+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.766549+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.767194+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.767619+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.767840+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.768191+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.768557+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.769031+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.769246+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.769600+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.770033+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.770399+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.770746+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.771102+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.771493+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.771877+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.772170+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.772589+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.773111+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.773427+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.773811+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.774173+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.774596+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.775073+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.775566+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.775776+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.776165+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.776586+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.777046+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.777415+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.777713+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.778053+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.778460+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.778827+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.779032+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.779271+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.779601+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.779870+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.780300+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.780622+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.780961+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.781151+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.781503+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.781831+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.782183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.782425+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.782839+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.783231+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.783517+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.783759+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.795158+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.795541+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.796162+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.796535+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.797009+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.797217+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 80.609878540s of 80.641670227s, submitted: 13
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 37224448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.797395+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c439c743c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.797628+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061603 data_alloc: 218103808 data_used: 4386816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10c5c1b/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.798149+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b3000 session 0x55c4398a9860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.798447+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.798993+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.799329+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.799732+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.800083+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.800353+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.800730+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.801154+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.801523+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.801772+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.802225+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.802424+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.802678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.803118+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.803592+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.804141+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.804502+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.805045+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.805419+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.805762+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.806165+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.806493+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.806813+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.807204+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.807576+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.808053+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.808417+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.808662+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.808933+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.809172+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.809533+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.810045+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.810420+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.810794+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.811167+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.811500+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.811851+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.812240+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.812666+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.813095+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.813444+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.813798+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.814169+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.814582+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.814956+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.815345+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.815694+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.816147+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.816409+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.816785+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.817173+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.817482+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.818061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.818421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.818674+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c437982c00 session 0x55c4373b0780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2000 session 0x55c43911be00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2800 session 0x55c43803e780
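
From here the burst interleaves handle_auth_request (the monclient adding a cephx challenge for an incoming connection) with ms_handle_reset for the same connection pointer, i.e. peers reconnecting and stale sessions being torn down. Pairing the two by the 0x... address makes the churn easier to read; osd.log below is a hypothetical file holding this journal extract, and pointer reuse makes the pairing heuristic at best:

    import re
    from collections import defaultdict

    events = defaultdict(list)
    for line in open("osd.log"):  # hypothetical extract of this journal
        m = re.search(r"handle_auth_request added challenge on (0x[0-9a-f]+)", line)
        if m:
            events[m.group(1)].append("challenge")
        m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
        if m:
            events[m.group(1)].append("reset")

    for con, seq in sorted(events.items()):
        print(con, "->", " ".join(seq))  # e.g. 0x55c437982c00 -> challenge reset ...
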
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.818915+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.819086+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399f0c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.751800537s of 59.914466858s, submitted: 18
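
_kv_sync_thread is BlueStore's transaction-commit thread reporting on itself: idle 59.75 s of a 59.91 s window with 18 submissions, so it was busy well under one percent of the time. The arithmetic, using the numbers from the line above:

    idle, window, submitted = 59.751800537, 59.914466858, 18
    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.3f}s ({busy / window:.2%}), "
          f"{submitted / window:.2f} txns/s")
    # -> busy 0.163s of 59.914s (0.27%), 0.30 txns/s

The later report in this burst (idle 11.5 s of 12.4 s, 129 submitted) shows the same thread at roughly 7% busy once the osdmap churn starts.
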
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
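
_send_mon_message shows the monitor address in Ceph's entity-address form: protocol (v2 is messenger v2, for which 3300 is the standard port), IP, port, and a trailing nonce. A small parser, assuming addresses keep this v1/v2:ip:port/nonce shape:

    import re

    ADDR_RE = re.compile(r"(v[12]):([\d.]+):(\d+)/(\d+)")

    def parse_addr(text):
        """Split a Ceph entity address like v2:192.168.122.100:3300/0."""
        proto, ip, port, nonce = ADDR_RE.search(text).groups()
        return proto, ip, int(port), int(nonce)

    print(parse_addr("mon.compute-0 at v2:192.168.122.100:3300/0"))
    # -> ('v2', '192.168.122.100', 3300, 0)
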
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
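
handle_osd_map gives the epoch range carried by the incoming message, the epoch this OSD currently has, and the range the sender holds; here osd.2 is one epoch behind (has 134, receives [135,135]) and catches up, with further bumps to 136 and 137 later in the burst. A sketch for measuring that lag:

    import re

    MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                        r"i have (\d+), src has \[(\d+),(\d+)\]")

    def map_lag(line):
        """Epochs this OSD still trails the sender by, or None."""
        m = MAP_RE.search(line)
        if m is None:
            return None
        first, last, have, src_lo, src_hi = map(int, m.groups())
        return src_hi - have

    print(map_lag("osd.2 134 handle_osd_map epochs [135,135], "
                  "i have 134, src has [1,135]"))  # -> 1
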
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4cf400 session 0x55c43911a1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399f0c00 session 0x55c4378c32c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.819346+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 30171136 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9cd8000/0x0/0x4ffc00000, data 0x18c935b/0x1995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439165e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:07.819691+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c436fc4d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 30654464 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c439164960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c437aa1c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c43914a1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210372 data_alloc: 218103808 data_used: 11231232
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439859860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4ce800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a0800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398a0800 session 0x55c43914a000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4ce800 session 0x55c437319c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4398a9a40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c43914bc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373312c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.819953+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438cd6800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 30384128 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438cd6800 session 0x55c437330000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c4399cd4a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4399cde00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.820602+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30343168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24f1f7b/0x25c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
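
This heartbeat is the first in the burst with a non-empty op histogram ([0,0,1]) and noticeably more stored data than the earlier ones, i.e. writes have started landing on osd.2. Comparing the data counters between the two heartbeats quantifies it:

    before = int("0x18c7798", 16)   # data stored, earlier heartbeat
    after = int("0x24f1f7b", 16)    # data stored, this heartbeat
    print(f"data grew by {(after - before) / 2**20:.1f} MiB")  # -> 12.2 MiB
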
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4397dab40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.820956+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 30334976 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.821314+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 30056448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4399643c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4ce800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c43a4ce800 session 0x55c4398c3680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c437982c00 session 0x55c4398665a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2000 session 0x55c43980cd20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4398d4780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.821726+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304547 data_alloc: 218103808 data_used: 11243520
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.822111+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8b0c000/0x0/0x4ffc00000, data 0x2a94f58/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4398d45a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.822470+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b3800 session 0x55c437329860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.822692+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.823165+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.823341+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370618 data_alloc: 234881024 data_used: 19755008
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508833885s of 12.396708488s, submitted: 129
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.823587+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 28868608 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.823754+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.824047+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.830207+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.830675+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43965e3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43802a000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387780 data_alloc: 234881024 data_used: 21659648
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c437aa1e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.831017+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 29777920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c4399cd0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.831328+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.832092+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.832515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.833016+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302492 data_alloc: 234881024 data_used: 17076224
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.833477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30400512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.833872+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.834147+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.834511+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.835076+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.835326+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.835640+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.836079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.836480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.836839+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.837056+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.837436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.837629+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.837793+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.838002+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.838271+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4399cc5a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4399cd2c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.838471+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399cc000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43802ba40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.453773499s of 26.580440521s, submitted: 32
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43986d860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c43965fc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4373b70e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43914bc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c4398a92c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.838711+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.839117+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.839332+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389229 data_alloc: 234881024 data_used: 21372928
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.839576+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a6ba1d/0x2b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 27361280 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.839789+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 27344896 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.840275+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 20815872 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.840480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43980c3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 22495232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.840741+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8f000/0x0/0x4ffc00000, data 0x370ea1d/0x37de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 22446080 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503030 data_alloc: 234881024 data_used: 22388736
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.840994+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 22896640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.841408+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 20946944 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.841673+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 20226048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.841857+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 17096704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4398590e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.942190170s of 12.467167854s, submitted: 119
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c4399cc1e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.842061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 19963904 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451822 data_alloc: 234881024 data_used: 22401024
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43967b4a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.842465+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.842791+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.843055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.843288+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.844306+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.844667+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.845174+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.845490+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.845987+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.846445+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.669157028s of 11.117276192s, submitted: 70
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.846825+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.847254+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.847514+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.847805+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.848340+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.848577+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.849015+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.849234+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.849637+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.850077+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.850383+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.850562+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.851264+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.851543+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.852043+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.852264+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.852450+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.852810+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.853362+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.853674+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.854142+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.854577+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.855069+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.855489+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.855793+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.856138+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.856529+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.856955+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.857356+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.857555+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.523187637s of 29.533178329s, submitted: 1
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c439e163c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c439100f00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c439c74960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43990b2c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399ccf00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.857991+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.858337+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.858844+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.859223+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c4373183c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.859611+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43965e000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.860130+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.860359+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.860848+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c437c10960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3000 session 0x55c43965ef00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.860969+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c43717eb40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 22814720 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.861269+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.861452+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352478 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.861843+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.346131325s of 12.543250084s, submitted: 31
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.862074+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f89f5000/0x0/0x4ffc00000, data 0x2750949/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.862395+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.862802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.863156+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355411 data_alloc: 234881024 data_used: 17719296
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.863497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c437aa14a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef400 session 0x55c439101e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.863747+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4de00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.864153+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.864570+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.865024+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.865410+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.865777+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.866001+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.866346+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.866651+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.867011+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.867523+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.867968+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.868388+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.868796+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.869041+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.869222+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.869441+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.869819+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.870182+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.870579+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.871001+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.871308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.871698+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.871996+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.872442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.978549957s of 29.158163071s, submitted: 17
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.872770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.873121+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.873343+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91be000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.873697+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324439 data_alloc: 234881024 data_used: 17666048
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.874085+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.874461+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.874807+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.875018+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b9000/0x0/0x4ffc00000, data 0x23e8949/0x24b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.875321+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327689 data_alloc: 234881024 data_used: 17661952
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.875677+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.876022+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.876266+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.876534+0000)
Dec 05 02:27:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.963225365s of 13.028007507s, submitted: 11
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.877407+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b8000/0x0/0x4ffc00000, data 0x23e9949/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326709 data_alloc: 234881024 data_used: 17661952
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.877874+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.878206+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.878685+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.879058+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 32628736 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d47000/0x0/0x4ffc00000, data 0x3859959/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4c000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.879436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474162 data_alloc: 234881024 data_used: 17670144
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.879708+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.880159+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.880515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.880851+0000)
Dec 05 02:27:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262505520' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.881225+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473458 data_alloc: 234881024 data_used: 17670144
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.881641+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.882134+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.882506+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.882879+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.567854881s of 15.844666481s, submitted: 24
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437380960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.883263+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 24592384 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437aa0d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1525586 data_alloc: 251658240 data_used: 37007360
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.883533+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 13565952 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439838000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c4398a90e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43806cd20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa05a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54e5a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.883762+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54ef00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43990b4a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x3e8a4d6/0x3f59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439101c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.884222+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.884572+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.885061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4373314a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443055 data_alloc: 234881024 data_used: 30597120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.885421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.885761+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.886127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.886494+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.886835+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443187 data_alloc: 234881024 data_used: 30597120
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.887165+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.887542+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ef400 session 0x55c439101a40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439100f00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.887865+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398a92c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438015680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 20307968 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.293066978s of 13.490474701s, submitted: 33
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b0b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee000 session 0x55c4373310e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437330d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.888368+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aae3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c437380960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.888629+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507941 data_alloc: 251658240 data_used: 35082240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.889150+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.889371+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.889693+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.890129+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437381e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.891209+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399eec00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524263 data_alloc: 251658240 data_used: 36999168
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.891384+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.891557+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 19685376 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.891729+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.891969+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.892214+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.892380+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.892760+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.893142+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.893486+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.894026+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.894467+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.894816+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.895075+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.895371+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.895595+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.895799+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.896024+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.896237+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.896440+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.896646+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.896848+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.897029+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.897259+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.897503+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.897702+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 18366464 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.897954+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.743537903s of 32.836509705s, submitted: 6
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 12705792 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.898205+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 9158656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.898588+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 9125888 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.898874+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.899127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631561 data_alloc: 251658240 data_used: 41697280
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.899321+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.899536+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 10166272 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.899775+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x470c4e6/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141541376 unmapped: 10018816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.900006+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.900204+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697183 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.900365+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.900548+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.900727+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.900980+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.901170+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211829185s of 13.867918968s, submitted: 144
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.901396+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.901594+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.901802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.902012+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.902212+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.902436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.902633+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.902848+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.903013+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.903201+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.903387+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692287 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.903583+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437319680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437aaeb40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439164960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4371734a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.903755+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.983115196s of 13.000985146s, submitted: 2
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439964b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b63c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b01e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 25296896 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43967b0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398c2f00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.904006+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 25288704 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43ac4d680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438014d20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43806da40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.904232+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437aa0960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0b40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43912a5a0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437bdfc20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 25911296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398a9a40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439101e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.904412+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777183 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.904587+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.904767+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.905027+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.905230+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.905469+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777359 data_alloc: 251658240 data_used: 41992192
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.905683+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c4373303c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.907999+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.160284996s of 10.497438431s, submitted: 42
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.908234+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.908419+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5800 session 0x55c43a54e3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 25706496 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439c88c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.908722+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807465 data_alloc: 251658240 data_used: 45797376
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 19619840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.908971+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 17293312 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.909155+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 16916480 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.909485+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.909866+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.910325+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.910687+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.911196+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.911607+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.911977+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.912309+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.912545+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.912867+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.913123+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 16621568 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.913527+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.913811+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.914074+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.914401+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.914771+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.915099+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.915449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.915759+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.916097+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.916295+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.916526+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.395227432s of 26.412460327s, submitted: 3
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.916855+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.917254+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 16457728 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.917497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.917801+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 16449536 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.918019+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.918187+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.918452+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.918764+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.918987+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.919190+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 151429120 unmapped: 14827520 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.783483505s of 10.946872711s, submitted: 24
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.919358+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 14172160 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889331 data_alloc: 268435456 data_used: 52891648
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.919665+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152961024 unmapped: 13295616 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5df9000/0x0/0x4ffc00000, data 0x57a3519/0x5875000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.920518+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 13172736 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.920751+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.921087+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5de9000/0x0/0x4ffc00000, data 0x57b3519/0x5885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.921355+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1911755 data_alloc: 268435456 data_used: 53923840
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.921499+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4d2c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398583c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.921950+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 13500416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398c32c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.922255+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.922652+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.927709+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1743569 data_alloc: 251658240 data_used: 43810816
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.928098+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c43a54ef00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c43802b0e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.928329+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.789140701s of 12.240109444s, submitted: 93
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437c112c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.928770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.929199+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.930514+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.930997+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.933108+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.934980+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.935571+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.937506+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.938266+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.938448+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.938724+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.939101+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.939374+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.939739+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.940137+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.940618+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.940927+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.941355+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.941841+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.772724152s of 19.821142197s, submitted: 11
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.942008+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.942367+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.942775+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.943232+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625872 data_alloc: 251658240 data_used: 39800832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.943636+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.944072+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437aaef00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c4398a8000
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.944289+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4c780
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.944660+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.945189+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.945489+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.945747+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.946110+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.946440+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.946814+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.947181+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.947611+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.947831+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.948079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.948462+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.948839+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.949304+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.949664+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.950105+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.950448+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.951004+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.951256+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.951639+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.952067+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.952455+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.952871+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.953138+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.953610+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.954070+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.954489+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.955024+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.955360+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.955784+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.956173+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.956531+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.956743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.957177+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.957441+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.957699+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.958079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s
                                            Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.958442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.958796+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.959162+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.959556+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.959948+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.960352+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 22470656 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: mgrc ms_handle_reset ms_handle_reset con 0x55c437983800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:27:30 compute-0 ceph-osd[208828]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: get_auth_request con 0x55c4398b2000 auth_method 0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: mgrc handle_mgr_configure stats_period=5
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.960644+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.960941+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.961153+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.961365+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.961576+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.962080+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.962459+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.962865+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.963396+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.963835+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.964284+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.964654+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.964880+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.965124+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.965351+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.965684+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.966165+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.966583+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.967088+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.967335+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.967803+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.968281+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.968800+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.969198+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.969648+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.970204+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.970656+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.971105+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.971502+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.972013+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.972426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.973017+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.973326+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.973810+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.974117+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.974477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.974748+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.975070+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.975340+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.975752+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.976088+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.976436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.977064+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.977411+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.977750+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.978172+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.978386+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.978724+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.979049+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.979346+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.979686+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.267555237s of 100.389305115s, submitted: 23
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c43ac4de00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398a90e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4373292c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439e17680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437319e00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.980106+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.980412+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.980656+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.981055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681860 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.981288+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.981621+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.982052+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439c88c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c43914a3c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399eec00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.982228+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.982527+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684438 data_alloc: 251658240 data_used: 37978112
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.982789+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.983079+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.983455+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 26755072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.983703+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 26517504 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.983863+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 24928256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.984302+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.984477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.984697+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.985047+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.985439+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.985674+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.985955+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.986294+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.986665+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.986997+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.987188+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee400 session 0x55c439867c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.987582+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.987777+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.988205+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.988436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.988860+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.989373+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.989734+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.556104660s of 31.715848923s, submitted: 28
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.989934+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 24788992 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.990442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 24723456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.990939+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.991673+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.991959+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.992224+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.992614+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.993180+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.993582+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.994152+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.994530+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.994983+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.810025215s of 12.461395264s, submitted: 108
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791816 data_alloc: 251658240 data_used: 49373184
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.995213+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.995534+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159358976 unmapped: 11100160 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5eab000/0x0/0x4ffc00000, data 0x56ed4f9/0x57bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.995955+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159612928 unmapped: 10846208 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.996143+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.996550+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.997399+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.997736+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.998156+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.998629+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.999145+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.999477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.999798+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.857128143s of 12.298008919s, submitted: 144
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.000100+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.000353+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.000683+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.000910+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.001158+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.001378+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.001729+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.002148+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.002559+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.003096+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.003453+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.003645+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.004016+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159940608 unmapped: 10518528 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.004325+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.004616+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.005146+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.005372+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159956992 unmapped: 10502144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.005781+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.006149+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.006390+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.006717+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.007046+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.230848312s of 21.238258362s, submitted: 1
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.007231+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.007483+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.007944+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.008260+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.008602+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.008831+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.009104+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.009314+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.009497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.009678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.010007+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.010213+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.010625+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.011125+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.011326+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.011723+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.012162+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.680427551s of 17.691595078s, submitted: 2
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.012436+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.012752+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.013072+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.013427+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888652 data_alloc: 268435456 data_used: 51240960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.013678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.013920+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.014243+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.014648+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.014977+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888476 data_alloc: 268435456 data_used: 51240960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.426769+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.427190+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.427596+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.428041+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009906769s of 13.027759552s, submitted: 2
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.428374+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889356 data_alloc: 268435456 data_used: 51240960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.429082+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.429403+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.429574+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.429959+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.430235+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.430525+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.430968+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.431250+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.431481+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.431643+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.432098+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.432472+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.432788+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.433130+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.433482+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.433770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.434002+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.434414+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.434731+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.434989+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.435344+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.435756+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.845460892s of 22.864625931s, submitted: 4
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.436007+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.436198+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.436568+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.436992+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.437373+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.437725+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.438056+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.438388+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.438797+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.439726+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.440080+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.440423+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.441605+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.442047+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.442440+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.442735+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.443141+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.443456+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.443725+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.444108+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.444403+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.444623+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.444857+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.446684+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.446940+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.449438+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.450822+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.451491+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.453679+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.455574+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.459798+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.460757+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.461272+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.465280+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.465963+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.467371+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.469640+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.471378+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.474160+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.474569+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.478127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.478425+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.479407+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.479725+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.480284+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.483686+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.485282+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.485590+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.487863+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.489344+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.490299+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.493756+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.495226+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.496563+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.504544+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.506228+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.506450+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.508240+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.512328+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.513680+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.513974+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.517369+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.519981+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.522591+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.522829+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.525477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.527605+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.529785+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.530288+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.534193+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.535671+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.538522+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.540045+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.543743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.545766+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.547413+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.548804+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.550658+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.552256+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.552967+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.553451+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.554232+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.554607+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.555118+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.555440+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.555813+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.556244+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.556561+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.557089+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.557479+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.558005+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.558422+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.558939+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.559365+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.559606+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.559945+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.560282+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.560697+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.561103+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.561412+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.561789+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.562194+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.562592+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.563052+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.563398+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.563788+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.564457+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.565076+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.565878+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.566481+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.566880+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.567604+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.568128+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.568517+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.569069+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.569665+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.570090+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.570455+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.570799+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.571159+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.571583+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.571964+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.572316+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.572683+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.573112+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.573473+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.574050+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.574368+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.574794+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.575302+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.575667+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.576111+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.576547+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.577006+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.577515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.577852+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.578200+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.578549+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.579099+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.579679+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.580086+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.580555+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.581005+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.581532+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.582137+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.595287+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.595760+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.596297+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.596692+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.597053+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.597318+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.597728+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.598187+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.598548+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.599070+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.599423+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.599844+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.600324+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.600574+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.601125+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.601348+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.601665+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.602308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.602678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.603093+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.603329+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.603589+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.603977+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.604301+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.604574+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.604791+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158867456 unmapped: 11591680 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.605009+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.605203+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.605463+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.608252+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.608617+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.609172+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.609482+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.609866+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.610349+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.610635+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.610910+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.611335+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.611766+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.612154+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.612540+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.612985+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.613410+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.613820+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.614312+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.614734+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.615230+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.615649+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.615979+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.616446+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.616834+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.617212+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.617626+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.618091+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.618522+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.619055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.619432+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.619970+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.620432+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.620849+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.621250+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.621503+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.621837+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.622150+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.622419+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.622647+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.623094+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.623394+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.623717+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.624114+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.814193726s of 215.831085205s, submitted: 14
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159014912 unmapped: 11444224 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.624529+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.624843+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.625161+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.625467+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.625835+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1894956 data_alloc: 251658240 data_used: 51838976
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.626169+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.626516+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.626866+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.627274+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.627631+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.628121+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.628496+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.629032+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.629426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.629844+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.630374+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.630650+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.631238+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.631645+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.632143+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892660 data_alloc: 251658240 data_used: 51843072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.632539+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.632778+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.156604767s of 22.175872803s, submitted: 2
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.633098+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.633416+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.633842+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.634114+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.634494+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 11403264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.634738+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.635072+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.635382+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.635832+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.636127+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.636516+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.636802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.637119+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.637495+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.559044838s of 13.568158150s, submitted: 1
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.637999+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.638294+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.638472+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.638872+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.639357+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.639760+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.640141+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.640516+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.640989+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.641322+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.641586+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.641859+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.642093+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.642433+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.642719+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.334384918s of 15.360384941s, submitted: 14
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.643116+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.643519+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.643758+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.644043+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.644416+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.644625+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.644940+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.645237+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.645643+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.646196+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.646663+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.647208+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.647525+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.648084+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.648511+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.648857+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.649168+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.649470+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.650089+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.650415+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.651216+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.651591+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.652023+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.652497+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.653097+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.653518+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.653973+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.654468+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.654723+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.655124+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.655552+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.655850+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.656425+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.656870+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.657405+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.657761+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.658214+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.658642+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.658986+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.659690+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.660233+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.661030+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.661183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.661529+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.662034+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.662405+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.662829+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.663131+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.663515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.663854+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.664304+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.664680+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.665023+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.665371+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.666189+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.666562+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.667068+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.667518+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.668088+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.668410+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.668785+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.669007+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.669268+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.669614+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.670075+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.670375+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.670745+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.671174+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.671516+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.671846+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.672161+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.672546+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.673005+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.673441+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.673846+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.674623+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.675080+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.675323+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.675678+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.675984+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.676398+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.676639+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.676835+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.677218+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.677619+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.678072+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.678442+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.678814+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.679163+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.679563+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.680051+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.680426+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.680770+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.681200+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.681429+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.682133+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.682389+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.682662+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.683192+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.683403+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.683711+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.684156+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.684495+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.685045+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.685360+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.685653+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.686155+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.686501+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.686824+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.687118+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.687506+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.687734+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.687950+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.688331+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.688789+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.689140+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.689620+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.689861+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.690195+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.690575+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.690957+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.691342+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.691669+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.692057+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.692268+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.692658+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.693057+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.693352+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.693737+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.694303+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.694967+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.695221+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.695567+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.696055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.696449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.696813+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.697203+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.697637+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.698057+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.698259+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.698671+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.699032+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.699459+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.699737+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.700022+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
                                            Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.701035+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.701370+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.701590+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.702009+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.702265+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.702608+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.702849+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.703108+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.703360+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.703649+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.704084+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.704400+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.705402+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.705795+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 11214848 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.706215+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.706583+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.707044+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.708021+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.708832+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.709226+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.709767+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.710240+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.710586+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.711086+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.711430+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.711784+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.712443+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.712877+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.713340+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.713561+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.714009+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.714480+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.715129+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.715403+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.715822+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.716186+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.716537+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.716987+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.717416+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.717743+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.718097+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.718521+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.718952+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.719277+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.719651+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.720061+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.720479+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.720779+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.722421+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.722804+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.723008+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 196.116027832s of 196.124725342s, submitted: 1
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c43965fa40
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.723720+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c4398c23c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.724264+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439c75c20
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.724761+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.725151+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.725457+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.725975+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.726381+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.726815+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.727183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.727376+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.727719+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.728063+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.728430+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.728756+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321928978s of 13.440460205s, submitted: 22
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c438015680
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1722960 data_alloc: 234881024 data_used: 44167168
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.729039+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4380143c0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.729396+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.729715+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.730132+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.730374+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.730668+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.731122+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.731496+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.731749+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.732141+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.732449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.228912354s of 11.431051254s, submitted: 36
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142688256 unmapped: 27770880 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.733210+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 45506560 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 ms_handle_reset con 0x55c43989d400 session 0x55c439867860
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.733515+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 45465600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.734001+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x18d00a7/0x19a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 140 ms_handle_reset con 0x55c4398b2800 session 0x55c4373310e0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 62226432 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.735234+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262617 data_alloc: 218103808 data_used: 11313152
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.735661+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 ms_handle_reset con 0x55c4399ee800 session 0x55c437c10960
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.736016+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.736512+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.736963+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.737239+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259196 data_alloc: 218103808 data_used: 11313152
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.737720+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.738078+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.738433+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351562500s of 11.958790779s, submitted: 94
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.738985+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc5000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: get_auth_request con 0x55c437982000 auth_method 0
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.739347+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.739668+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 62062592 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.740523+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 62038016 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.740965+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.741389+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.742033+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.742767+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.743183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.743464+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.744183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.744485+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.745220+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.745745+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.746313+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.746837+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.747189+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.747597+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.748099+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.748449+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.749065+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.749748+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.750139+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.750564+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.751002+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.751417+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.751802+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.752149+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.752661+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.753053+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.753489+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.753737+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.754110+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.754492+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.754664+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.755188+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.755605+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.756129+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.756727+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.757183+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.757754+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.758166+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.758582+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.759107+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.759487+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.759973+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.760384+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.760693+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.761185+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.761529+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.762085+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.762460+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.762858+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.763281+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.763596+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.764040+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.764528+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.765238+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.766116+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.766695+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.767500+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.768026+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.768734+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.769151+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.769520+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.769782+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:39.770477+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.771020+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.771493+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.772055+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.772559+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.773077+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.773308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.773646+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.774172+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.774478+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.774864+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.775308+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.775692+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.776008+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.776321+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.776627+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.776981+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.777203+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 61841408 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.777457+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
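[annotation] The do_command pairs above are admin-socket queries being served by osd.2; each command is logged on entry and again with its result size. A sketch of issuing the same queries by hand, assuming `ceph daemon` can reach this OSD's admin socket (the exact command set varies by Ceph release, but these four are the ones the log shows being handled):

    $ ceph daemon osd.2 config show      # full running configuration
    $ ceph daemon osd.2 config diff      # settings changed from defaults
    $ ceph daemon osd.2 counter dump     # current perf-counter values
    $ ceph daemon osd.2 counter schema   # metadata for those counters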
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125493248 unmapped: 61751296 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.777652+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:27:30 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.777996+0000)
Dec 05 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 61849600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:30 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:27:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
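[annotation] The pgmap totals line up with the per-OSD heartbeats: each store_statfs reports a capacity of 0x4ffc00000 bytes, and the three OSDs visible in this log (0, 1 and 2 appear as each other's peers) account for the mgr's 60 GiB figure. A quick arithmetic check, assuming a POSIX shell:

    # Capacity per OSD from the heartbeat store_statfs field:
    $ printf '%d\n' 0x4ffc00000
    21470642176
    # ~20.0 GiB per OSD; three OSDs give the ~60 GiB pgmap reports.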
Dec 05 02:27:30 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 05 02:27:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000436730' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mon[192914]: from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/262505520' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2000436730' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec 05 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:27:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 05 02:27:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704603559' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:27:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 02:27:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203180749' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: pgmap v2359: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:32 compute-0 ceph-mon[192914]: from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2704603559' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1203180749' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15615 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 05 02:27:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133351809' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:27:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:32 compute-0 nova_compute[349548]: 2025-12-05 02:27:32.832 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15619 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 05 02:27:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408480943' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mon[192914]: from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mon[192914]: from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/133351809' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1408480943' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 02:27:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15623 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
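[annotation] The burst of audit entries above (orch ls, orch host ls, orch ps, orch status, orch upgrade status, balancer eval, balancer status, plus the mgr/mon metadata queries) reads like a status sweep by a support or monitoring script acting as client.admin; the log does not identify the caller beyond that. The equivalent commands, should any of them need to be re-run by hand:

    $ ceph orch ls
    $ ceph orch ps
    $ ceph orch upgrade status
    $ ceph balancer status --format json-pretty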
Dec 05 02:27:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 05 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909200176' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:34.289+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:27:34 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
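[annotation] The mgr reply above is self-describing: 'healthcheck history ls' requires the prometheus mgr module, which is not loaded on this cluster. Following the hint embedded in the message itself:

    # Enable the module named by the error, then retry the command.
    $ ceph mgr module enable prometheus
    $ ceph healthcheck history ls --format json-pretty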
Dec 05 02:27:34 compute-0 ceph-mon[192914]: from='client.15615 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mon[192914]: pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:34 compute-0 ceph-mon[192914]: from='client.15619 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2909200176' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 05 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139723308' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 05 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107936257' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 05 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867654179' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 05 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436156114' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.15623 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1139723308' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/107936257' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/867654179' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2436156114' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 05 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827734816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 02:27:35 compute-0 nova_compute[349548]: 2025-12-05 02:27:35.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:49.562611+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:50.563095+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:51.563425+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:52.563865+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:53.564324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:54.564864+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:55.565432+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:56.565737+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:57.566783+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:58.567143+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:59.567471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:00.567825+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:01.568073+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:02.568604+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:03.570000+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:04.571197+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:05.572285+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:06.573035+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af69000 session 0x564847d04780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:07.573258+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484804a000 session 0x564848005c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff7000 session 0x56484a71c5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847e8dc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847e8dc00 session 0x564848a7ad20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.385467529s of 48.404853821s, submitted: 2
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af76800 session 0x56484ab0e000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 10526720 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:08.573555+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847e8dc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847e8dc00 session 0x56484a8450e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:09.573875+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:10.574234+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565757 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:11.574666+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:12.574982+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:13.575371+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:14.575660+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:15.576053+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565757 data_alloc: 234881024 data_used: 26050560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:16.576440+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 10969088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:17.576668+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 8019968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:18.576862+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 7069696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:19.577062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 7069696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:20.577263+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7036928 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:21.577482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:22.577751+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:23.577969+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:24.578211+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:25.578549+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:26.579865+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:27.580308+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:28.580699+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:29.587768+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:30.588267+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:31.588521+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:32.588991+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:33.589204+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.026542664s of 26.152660370s, submitted: 20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:34.589519+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118521856 unmapped: 6946816 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:35.589832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:36.590156+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:37.590454+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:38.590848+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:39.591196+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:40.591585+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:41.591964+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:42.592297+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:43.592771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:44.593232+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:45.593489+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:46.594033+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:47.594265+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:48.594720+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:49.595029+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:50.595403+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.629758835s of 17.197685242s, submitted: 90
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 5693440 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:51.595759+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1612781 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5152768 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:52.596190+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 4620288 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:53.596528+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7253000/0x0/0x4ffc00000, data 0x434952e/0x4410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 3547136 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:54.597036+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:55.597258+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:56.597468+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1643619 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:57.597810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x43c052e/0x4487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:58.598120+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 3506176 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x43c052e/0x4487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:59.598397+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 3473408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:00.598851+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:01.599183+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:02.599689+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:03.600366+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:04.600783+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:05.601175+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:06.601704+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:07.602119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:08.602368+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:09.602700+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:10.602974+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:11.603161+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:12.603425+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 3203072 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:13.603739+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 3203072 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:14.604227+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:15.604469+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:16.604863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:17.605674+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:18.605876+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:19.606164+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:20.606411+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:21.606731+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:22.607253+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 3186688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:23.607453+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 3186688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.531749725s of 33.041004181s, submitted: 99
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:24.607803+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:25.608233+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:26.608603+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:27.609211+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:28.609583+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:29.610174+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:30.610522+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:31.610871+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:32.611494+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:33.612111+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:34.612471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:35.613011+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:36.613318+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:37.613788+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:38.614293+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:39.614625+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:40.614998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:41.615347+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:42.615739+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:43.616069+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:44.616549+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:45.617045+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:46.617266+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:47.617611+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:48.617949+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:49.618332+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:50.618771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:51.619181+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:52.619510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:53.619970+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:54.620244+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:55.620621+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:56.621024+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:57.621376+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:58.621819+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:59.622220+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:00.622598+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:01.622836+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:02.623156+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:03.623584+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:04.624125+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:05.624397+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:06.624669+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:07.625005+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:08.625393+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:09.625793+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:10.626143+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:11.626333+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:12.626531+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:13.626825+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:14.627312+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:15.627827+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:16.628112+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:17.628479+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 2015232 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:18.628772+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:19.629039+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:20.629324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:21.629777+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:22.630279+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:23.630486+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:24.630975+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:25.631295+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:26.631546+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:27.631956+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:28.632102+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:29.632426+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:30.632658+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:31.633015+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:32.633301+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:33.633580+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:34.634097+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:35.634338+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:36.634681+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:37.634988+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:38.635372+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:39.635779+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:40.636100+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:41.636364+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:42.636693+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:43.637119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:44.637514+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:45.638016+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:46.638248+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:47.638705+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:48.639165+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:49.639501+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:50.639749+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:51.640106+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:52.640322+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:53.640795+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:54.641090+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:55.641432+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:56.641824+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:57.642188+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:58.642627+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:59.643063+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:00.643505+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:01.643976+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:02.644279+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:03.644530+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:04.644812+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:05.645215+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:06.645608+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:07.646397+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:08.646623+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:09.647348+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:10.647844+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:11.648179+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:12.648558+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:13.648777+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:14.649161+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:15.649447+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:16.649794+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:17.650167+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:18.650590+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:19.650743+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:20.651027+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:21.651259+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:22.651611+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:23.651875+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:24.652245+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:25.652647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:26.653113+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:27.653502+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:28.653872+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:29.654240+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:30.654556+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:31.654857+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:32.655288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:33.656213+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:34.656483+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:35.656741+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:36.657029+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:37.657290+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:38.657653+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:39.658043+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:40.658267+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:41.658647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:42.658832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:43.659067+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:44.659299+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:45.659526+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:46.659872+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:47.660325+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:48.660679+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:49.660958+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:50.661496+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:51.661853+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:52.662158+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:53.662398+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:54.662787+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:55.663096+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:56.663393+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:57.663637+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 153.424575806s of 153.492935181s, submitted: 10
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645079 data_alloc: 251658240 data_used: 30408704
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff6000 session 0x56484acbf2c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848048800 session 0x56484af70000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af68800 session 0x56484899a780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:58.664017+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 5226496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af69000 session 0x564848e265a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets getting new tickets!
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:59.664501+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _finish_auth 0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:59.666104+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:00.664823+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:01.665131+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:02.665384+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:03.665785+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:04.666089+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:05.666373+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564849ec2400 session 0x56484a71da40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:06.666656+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff6c00 session 0x564848e272c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848045400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff7c00 session 0x564847d041e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff6c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:07.667168+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:08.667592+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:09.668026+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:10.668398+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:11.668663+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:12.669212+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:13.669702+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:14.670141+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:15.670459+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:16.670746+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:17.671230+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:18.671565+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:19.671823+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:20.672177+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:21.672631+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:22.673139+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:23.673395+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:24.673830+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:25.674174+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:26.674450+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:27.674816+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:28.675288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:29.675568+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:30.675816+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:31.676297+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:32.676770+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:33.677228+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:34.677593+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:35.677992+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:36.678267+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:37.678651+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:38.679184+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:39.679497+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:40.680018+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:41.680634+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:42.681141+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:43.681323+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:44.681633+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:45.682185+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:46.682568+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:47.682753+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:48.683128+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:49.683334+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:50.683590+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:51.683783+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:52.683969+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:53.684322+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:54.684701+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:55.685142+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:56.685409+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:57.685821+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:58.687653+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:59.688032+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:00.688403+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:01.688654+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:02.689106+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:03.689381+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:04.689810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:05.690213+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:06.690632+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:07.691244+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:08.691577+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:09.692159+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:10.692605+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:11.692863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:12.693186+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:13.693610+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:14.694100+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:15.694437+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:16.694868+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:17.695642+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:18.695984+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:19.696420+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:20.696798+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:21.697235+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:22.697607+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:23.698171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:24.698502+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:25.699102+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:26.699390+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848e20000 session 0x564849c07680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff6000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:27.700116+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:28.700545+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:29.700852+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:30.701488+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:31.701739+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:32.702110+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:33.702624+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:34.703018+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:35.703494+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:36.703771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:37.704157+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:38.704537+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:39.704995+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:40.705269+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:41.705518+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:42.705876+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:43.706216+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:44.706442+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:45.706987+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:46.707315+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:47.707625+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:48.708042+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:49.708326+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:50.708668+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:51.709123+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:52.709443+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:53.709699+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:54.710116+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:55.710368+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:56.710718+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848d3a400 session 0x564849c17680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 119.393753052s of 119.798889160s, submitted: 65
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:57.711199+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848d3bc00 session 0x56484acbf0e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484ac71000 session 0x56484ab36f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 5193728 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:58.711417+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229809 data_alloc: 218103808 data_used: 16130048
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848e20000 session 0x564848020960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bb7000/0x0/0x4ffc00000, data 0x19f449c/0x1ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:59.711736+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:00.711951+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:01.712242+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:02.712501+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:03.712730+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:04.713061+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:05.713430+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:06.713868+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:07.714279+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:08.714649+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:09.715172+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:10.715407+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:11.715809+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:12.716064+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:13.716406+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:14.716673+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:15.717203+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:16.717573+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:17.718072+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:18.718415+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:19.718781+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:20.719338+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:21.719989+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:22.720423+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:23.720747+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848048800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.754179001s of 26.179981232s, submitted: 49
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228951 data_alloc: 218103808 data_used: 16125952
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:24.721217+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bba000/0x0/0x4ffc00000, data 0x19f04ac/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:25.721565+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:26.721777+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:27.722133+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:28.722537+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287797 data_alloc: 218103808 data_used: 16134144
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: get_auth_request con 0x564848e75000 auth_method 0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848048800 session 0x5648480052c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:29.723088+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f93ba000/0x0/0x4ffc00000, data 0x21f04ac/0x22b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:30.723634+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:31.724114+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:32.724360+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:33.724690+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287797 data_alloc: 218103808 data_used: 16134144
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:34.725264+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x21f2029/0x22b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:35.725694+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:36.726095+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x56484aafe3c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x56484aaffe00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a5e9e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:37.726568+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x5648492f05a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 21405696 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x56484a7ff0e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:38.727193+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308437 data_alloc: 234881024 data_used: 22953984
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x5648481c41e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 21422080 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.666840553s of 15.728053093s, submitted: 4
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:39.727462+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x5648481c5a40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x564847f98b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a843680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x564847f9d4a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:40.727730+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x56484aa510e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x56484a8450e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x56484a845860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a845680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:41.728189+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:42.728530+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:43.728962+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355366 data_alloc: 234881024 data_used: 22953984
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 21430272 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:44.729316+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 21430272 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:45.729691+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 21454848 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:46.729946+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:47.730163+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:48.730510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:49.731007+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:50.731256+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:51.731482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:52.731973+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:53.732276+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:54.732614+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:55.733171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:56.733374+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:57.733541+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:58.733841+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:59.734119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:00.734350+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:01.734637+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x564848e26000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.050895691s of 22.181135178s, submitted: 21
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af76000 session 0x56484acbf680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:02.735049+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 24436736 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x564848020960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:03.735334+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313333 data_alloc: 234881024 data_used: 22953984
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:04.735656+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:05.735949+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93b7000/0x0/0x4ffc00000, data 0x21f2029/0x22b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 ms_handle_reset con 0x564848d3bc00 session 0x564849e341e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:06.736316+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:07.736636+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:08.736943+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242578 data_alloc: 218103808 data_used: 16142336
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:09.737126+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:10.737370+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:11.737670+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9bb4000/0x0/0x4ffc00000, data 0x19f3bea/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:12.737944+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9bb4000/0x0/0x4ffc00000, data 0x19f3bea/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.768804550s of 11.076835632s, submitted: 50
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:13.738193+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245552 data_alloc: 218103808 data_used: 16142336
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:14.738474+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb1000/0x0/0x4ffc00000, data 0x19f564d/0x1abc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:15.738684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:16.739111+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb1000/0x0/0x4ffc00000, data 0x19f564d/0x1abc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:17.739416+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:18.739834+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 29417472 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248268 data_alloc: 218103808 data_used: 16142336
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb0000/0x0/0x4ffc00000, data 0x19f567b/0x1abe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:19.740145+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 29417472 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:20.740475+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:21.741043+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f93b0000/0x0/0x4ffc00000, data 0x21f5680/0x22be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:22.741428+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.988888741s of 10.125345230s, submitted: 22
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 ms_handle_reset con 0x564848e20000 session 0x564849326d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:23.741863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307193 data_alloc: 218103808 data_used: 16150528
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:24.742414+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:25.742750+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f93ac000/0x0/0x4ffc00000, data 0x21f71fd/0x22c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:26.742975+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:27.743327+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:28.743573+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307193 data_alloc: 218103808 data_used: 16150528
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:29.744040+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:30.744295+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f93ac000/0x0/0x4ffc00000, data 0x21f71fd/0x22c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 37527552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 ms_handle_reset con 0x56484ac71000 session 0x564847ceed20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:31.744662+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:32.745117+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:33.745566+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255422 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:34.746710+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9baa000/0x0/0x4ffc00000, data 0x19f8d9b/0x1ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:35.746998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:36.747424+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:37.747779+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.831582069s of 15.007717133s, submitted: 31
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:38.748254+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:39.748609+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:40.749153+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:41.749525+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:42.750032+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:43.750412+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:44.750740+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:45.751297+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:46.751632+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:47.863591+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:48.864000+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:49.864303+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:50.864544+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:51.865093+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:52.865531+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:53.865983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:54.866241+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:55.866555+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:56.866807+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:57.867065+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:58.867324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:59.867579+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:00.867816+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:01.868121+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:02.868482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:03.869054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:04.869436+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:05.869780+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:06.870132+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:07.870492+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:08.870762+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:09.871036+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:10.871387+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:11.871840+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:12.872146+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:13.872541+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:14.873095+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:15.873500+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:16.873992+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:17.874376+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:18.874671+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:19.875144+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:20.875333+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:21.875564+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:22.875948+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:23.876250+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:24.876527+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:25.876807+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:26.877181+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:27.877785+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:28.878768+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:29.879438+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:30.880324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:31.880688+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:32.881157+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:33.881502+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:34.882026+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:35.882544+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:36.882820+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:37.883239+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:38.883610+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:39.884020+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:40.884434+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:41.884781+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:42.885001+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:43.885305+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:44.885682+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:45.886114+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:46.887310+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:47.887623+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:48.888119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:49.888404+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:50.888810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:51.889177+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:52.889621+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:53.890083+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:54.890671+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:55.891126+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:56.891510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:57.892027+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:58.892306+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.2 total, 600.0 interval
                                            Cumulative writes: 8925 writes, 35K keys, 8925 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 8925 writes, 2023 syncs, 4.41 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 851 writes, 2760 keys, 851 commit groups, 1.0 writes per commit group, ingest: 1.82 MB, 0.00 MB/s
                                            Interval WAL: 851 writes, 368 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:59.892647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:00.893028+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:01.893343+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:02.893606+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:03.893777+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:04.894131+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:05.894422+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:06.894792+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:07.895062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:08.895247+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:09.895585+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:10.896006+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:11.896389+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:12.896787+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:13.897048+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:14.897467+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:15.897731+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:16.898227+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:17.898556+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:18.898835+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:19.899246+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:20.899455+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:21.900097+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:22.900453+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:23.901229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:24.901592+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:25.901998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:26.902419+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:27.902831+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:28.903200+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:29.903406+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:30.903851+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:31.904171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:32.904523+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:33.904982+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:34.905218+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:35.905512+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:36.905939+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:37.906371+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:38.906619+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:39.907163+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:40.907553+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:41.908023+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:42.908431+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:43.908870+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:44.909524+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:45.910005+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:46.910450+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:47.910982+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:48.911283+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:49.911694+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:50.912111+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:51.912556+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:52.913073+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:53.913392+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:54.913877+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:55.914329+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:56.914709+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:57.915130+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:58.915520+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:59.915822+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:00.916229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:01.916554+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:02.917124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:03.918341+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:04.919145+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:05.919515+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:06.920066+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:07.920544+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:08.920843+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:09.921147+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:10.921471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:11.922012+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:12.922576+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:13.923186+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:14.923508+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:15.923952+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:16.924309+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:17.924740+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:18.925928+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:19.926384+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:20.926600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:21.927010+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:22.927362+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:23.927698+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484804a000 session 0x56484af3af00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.313781738s of 166.387649536s, submitted: 20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x564847ff7000 session 0x56484af3b860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:24.928116+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 42082304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152304 data_alloc: 218103808 data_used: 11788288
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:25.928499+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac71000 session 0x564848146d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:26.929029+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:27.929441+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:28.930139+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:29.930495+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149206 data_alloc: 218103808 data_used: 11771904
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:30.930812+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:31.934161+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.934615+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.935064+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.749129295s of 10.003149033s, submitted: 39
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.935482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 41992192 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.935962+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 41984000 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.936367+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.936732+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.937051+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.937405+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.937712+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.938140+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.938545+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.939010+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.765543938s of 10.422777176s, submitted: 87
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac72c00 session 0x56484af661e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484887c000 session 0x56484af674a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.939299+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 41943040 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054182 data_alloc: 218103808 data_used: 7081984
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.939616+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484abb5000 session 0x56484aeee3c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.940101+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.940850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.941277+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.941827+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.942232+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.942589+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.943115+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.943481+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.944083+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.944474+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.945000+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.945403+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.945697+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.946113+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.946529+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.947233+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.947643+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.948159+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.948655+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.949109+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.949495+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.950081+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.950482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.951030+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.951402+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.951678+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.952290+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.952736+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.953256+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.953660+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.954210+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.954633+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.955088+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.955460+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.956040+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.956477+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.957060+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.957501+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.957987+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.958235+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.958587+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.959103+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.959547+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.960100+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.960545+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.960866+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.961371+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.961874+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.962527+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.962954+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.963402+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.963788+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.964178+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.964555+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.965004+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.965288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.965679+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.966184+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.966827+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.967175+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.967548+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.968010+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.968382+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.968812+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.969151+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.969550+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.969734+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.970244+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.970714+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.971084+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.971389+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.971741+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.972132+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.972460+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.972647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.973163+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.973526+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.973796+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.974229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.974598+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.139320374s of 81.423446655s, submitted: 52
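The _kv_sync_thread utilization line above is effectively a busy/idle ratio: the kv sync thread slept for 81.139 s of an 81.423 s window while flushing 52 submitted batches. Worked out from the values in the line:

    # Busy time and per-batch cost of BlueStore's kv sync thread,
    # computed from the utilization line above.
    idle, window, submitted = 81.139320374, 81.423446655, 52

    busy = window - idle
    print(f"busy {busy:.3f} s of {window:.3f} s ({busy / window:.2%})")
    print(f"~{busy / submitted * 1000:.1f} ms per submitted batch")

That comes to about 0.35% utilization, roughly 5.5 ms per batch, consistent with the mostly idle cluster these tick lines describe.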
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051319 data_alloc: 218103808 data_used: 7057408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.975057+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 45916160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
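The handle_osd_map line records incremental osdmap catch-up: the OSD holds epoch 132, the source advertises [1,133], and only the missing range [133,133] is shipped; the same exchange repeats a few lines later for epoch 134. A minimal sketch of that range computation (illustrative only, using a hypothetical missing_epochs helper, not the actual Ceph code):

    # Illustrative only: which osdmap epochs does a daemon still need,
    # given what it has and what the source advertises?
    # Mirrors the line above: i have 132, src has [1,133] -> fetch [133,133].
    def missing_epochs(have: int, src_first: int, src_last: int) -> range:
        first_needed = max(have + 1, src_first)
        return range(first_needed, src_last + 1)

    print(list(missing_epochs(have=132, src_first=1, src_last=133)))  # [133]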
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.975444+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 45907968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa072000/0x0/0x4ffc00000, data 0x1531265/0x15fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.975840+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 45899776 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484aafe1e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.976350+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.976805+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.977124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.977500+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.978053+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.978539+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.978879+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.979316+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.979627+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.979865+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.980287+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.980668+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.981071+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.981487+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.981865+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.982465+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.983070+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.983471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.984034+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.984423+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.984750+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.985114+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.985869+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.986372+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.986810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.987482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.987875+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.988358+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.988717+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.989076+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.989818+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.990167+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.990590+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.991222+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.991604+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.992035+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.992365+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.992648+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.993150+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.993463+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.993838+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.994223+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.994629+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.995148+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.995471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.995841+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.996228+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.996632+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.997054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.997404+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.997695+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.998129+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.998552+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.998840+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.999008+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x5648493263c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484ac72000 session 0x564847f9da40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484a5e8f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.999225+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 45842432 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x56484a5e9860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.999571+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.769268036s of 59.916522980s, submitted: 11
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 42229760 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564848a7a5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198393 data_alloc: 218103808 data_used: 11743232
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.999955+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9bf9000/0x0/0x4ffc00000, data 0x19a4d92/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 42246144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848ec7c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a5e0d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeed680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.001144+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564847ceed20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x56484a845680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 41959424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeefa40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484aeee5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72c00 session 0x56484a842960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564848d3bc00 session 0x56484a5ec5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848005e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.001452+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a78fa40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 40763392 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.001625+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x5648493261e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8dc1000/0x0/0x4ffc00000, data 0x27d99c5/0x28ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.001816+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x564847d114a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72c00 session 0x564847d11c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x56484ab0e1e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0ef00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.002330+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370612 data_alloc: 218103808 data_used: 11743232
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x56484a845860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76000 session 0x56484a5e0d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564848e26b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484a9fe000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x5648474ef4a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.002604+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x5648474ee5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484aa26400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484aa26400 session 0x564847d10960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564847d10f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.002803+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e3c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0eb40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 40009728 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f883f000/0x0/0x4ffc00000, data 0x2d5c5d5/0x2e2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.003060+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564849db2000 session 0x56484a845a40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x56484a9fe5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.003342+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8813000/0x0/0x4ffc00000, data 0x2d86618/0x2e5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.003729+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378599 data_alloc: 218103808 data_used: 11751424
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 40017920 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.003959+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.355422020s of 12.096014023s, submitted: 115
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 38879232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.004133+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 34832384 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.004404+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.004696+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484aa50780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484813c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.005065+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1485893 data_alloc: 234881024 data_used: 26185728
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.005415+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484ab0f0e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484aeee1e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848d3a400 session 0x564849327c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564847f98b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.005701+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484a8450e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f87e6000/0x0/0x4ffc00000, data 0x2db208e/0x2e88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 33431552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.006113+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.006509+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.006721+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451549 data_alloc: 234881024 data_used: 24010752
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.006956+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 32473088 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.007178+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31965184 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.007380+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.007582+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.007842+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.008112+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.008303+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.008535+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.008823+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.009039+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.009229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.009435+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.009660+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.009848+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.010124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.010327+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.010542+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.010766+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.486093521s of 26.861923218s, submitted: 69
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x564849c06960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af77000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484813d860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76400 session 0x56484aa512c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564848ec74a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x56484a8421e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.011246+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.011652+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550538 data_alloc: 234881024 data_used: 28581888
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.011833+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.012016+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.012549+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 23494656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.013086+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 22577152 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.013350+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x5648481c5c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620314 data_alloc: 234881024 data_used: 29696000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af77000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 125485056 unmapped: 25165824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.013523+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 23896064 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.013756+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 23224320 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.014190+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 19136512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.014395+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.014584+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1686404 data_alloc: 251658240 data_used: 36605952
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.745903969s of 12.423884392s, submitted: 151
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484ab36960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477c00 session 0x56484a5ec1e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.014973+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564849e34f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.015352+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.015832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f80a7000/0x0/0x4ffc00000, data 0x34f3ff9/0x35c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134373376 unmapped: 16277504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.016062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 17661952 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.016403+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1658098 data_alloc: 234881024 data_used: 30609408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 17498112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x3fd5ff9/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,3])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.016646+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.016828+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.017285+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.017631+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.018026+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665698 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.018391+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.018832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.019379+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.019656+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.259255409s of 13.825000763s, submitted: 121
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.020033+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.020256+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.020672+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.021116+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.021414+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.021657+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.021979+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.022515+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.022983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.023319+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.023509+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.023861+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.511526108s of 12.544201851s, submitted: 4
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.024101+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.024475+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.024832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.025160+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1662094 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.025573+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.026007+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.026197+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.026525+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.026751+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661650 data_alloc: 234881024 data_used: 31031296
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.027201+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.027592+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 18022400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172998428s of 10.241044044s, submitted: 10
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.028153+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.028556+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.028967+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664114 data_alloc: 234881024 data_used: 31019008
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648481472c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564848021e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484aeed860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeecb40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.029299+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeede00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484a9b8000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484a9b9860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847cef860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648488f0780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.029716+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.030054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.030352+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.030639+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484af3d680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484a9b85a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1719347 data_alloc: 234881024 data_used: 31019008
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.030945+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.031327+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.484535217s of 10.727886200s, submitted: 38
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.031678+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.032261+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x564847f9d680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x564848143680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.032460+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564847f9cb40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468530 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.032835+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.033227+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484a91c000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x56484a91d2c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.033627+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564848143c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x5648481434a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.034299+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.034576+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473527 data_alloc: 234881024 data_used: 17731584
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.034846+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.035022+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922307968s of 10.159023285s, submitted: 48
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.035718+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x5648492f1860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847f983c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 27738112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8406000/0x0/0x4ffc00000, data 0x3194ff9/0x3268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.036103+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484aeec960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.036496+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.036873+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.037236+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.037596+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.037994+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.038445+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.038674+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.038931+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.039296+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.039718+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.040119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.040440+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.040715+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.041173+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.041392+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.041625+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.041950+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.042140+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.042363+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.042536+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.042822+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.043225+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.043627+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 29589504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.571750641s of 24.842250824s, submitted: 46
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.044255+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 29532160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.044615+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 29433856 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.045006+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 29229056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435561 data_alloc: 234881024 data_used: 19324928
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.045203+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.045535+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.046140+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.046530+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.046945+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.047239+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.047576+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.047809+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.048171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.048417+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.048788+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.048996+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.049401+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.049683+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484d2ca000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.050160+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.050593+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.756362915s of 18.791091919s, submitted: 4
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484a845680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.051124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.051494+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.052013+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.052379+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.052846+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.053194+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.053700+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.054083+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.620291+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x56484aa51860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x56484aeee5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff7000 session 0x56484af2c780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.620542+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x564848e265a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847cf8c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.284442902s of 10.303551674s, submitted: 2
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847cf8c00 session 0x564849e341e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.620983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847fcc400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847fcc400 session 0x56484a8443c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484887c000 session 0x56484ab37a40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab370e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.621202+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a5ec3c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 33398784 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564847d114a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847cef860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.621609+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.621986+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69400 session 0x56484a9fef00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356384 data_alloc: 218103808 data_used: 11759616
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9fe780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.622337+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a9fe000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564848e272c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x26d9b95/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.622620+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.622968+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.623278+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.623750+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364170 data_alloc: 218103808 data_used: 11759616
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.624080+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.624433+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.624759+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.625077+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848ec7860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70800 session 0x56484a842960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab0f860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484813dc20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737582207s of 13.344229698s, submitted: 87
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.625421+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac421e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484ab36f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 39936000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x56484a78ef00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9b8780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a8434a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535513 data_alloc: 234881024 data_used: 20418560
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.625865+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 36896768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.626287+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.626664+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564847d11860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.626876+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab36b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.627253+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571513 data_alloc: 234881024 data_used: 25505792
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484aa50b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aa51860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.632190+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.632464+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.632679+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 35463168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.632866+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33005568 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.633108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 27271168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663144 data_alloc: 251658240 data_used: 37560320
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.633561+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.633972+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.634216+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.634451+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 23625728 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.634861+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.635294+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.635697+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.636085+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.636471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.637038+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.637420+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.637789+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.638189+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.638439+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.638786+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.639154+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.639505+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.639877+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.640220+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.640440+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.640670+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.641015+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136028160 unmapped: 23552000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.552471161s of 32.833507538s, submitted: 33
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.641367+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 20283392 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.641742+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139304960 unmapped: 20275200 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.642086+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7387000/0x0/0x4ffc00000, data 0x4212bc8/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791954 data_alloc: 251658240 data_used: 38576128
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.642461+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.642711+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.643011+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 15753216 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.643291+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 15073280 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.643506+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 16146432 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865102 data_alloc: 251658240 data_used: 38723584
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.643675+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.643919+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.644274+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.644497+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.644810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865262 data_alloc: 251658240 data_used: 38727680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.645123+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.018227577s of 13.871615410s, submitted: 193
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.645408+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.645674+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.646176+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6672000/0x0/0x4ffc00000, data 0x4b17bc8/0x4bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.646400+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865686 data_alloc: 251658240 data_used: 38731776
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.646769+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.647151+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.647576+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.647868+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.648353+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1863118 data_alloc: 251658240 data_used: 38731776
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.648675+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.649048+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.649324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab0fc20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.649697+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484813cd20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484ab374a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848048800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848048800 session 0x56484a9fe780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.988618851s of 13.008173943s, submitted: 2
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x564847d114a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143294464 unmapped: 17899520 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a845e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.650037+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f5d46000/0x0/0x4ffc00000, data 0x5442bf1/0x5518000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143327232 unmapped: 17866752 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1934703 data_alloc: 251658240 data_used: 38731776
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.650393+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 14704640 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a844780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.650719+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab37a40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc53/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.651142+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.651561+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.652099+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2046908 data_alloc: 251658240 data_used: 38731776
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.652538+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc8c/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0400 session 0x56484aa51a40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.653072+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8425a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.653475+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 25378816 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a843680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.653758+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a843860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 25059328 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.653963+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.890540123s of 11.288828850s, submitted: 66
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144015360 unmapped: 25051136 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.654217+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2050670 data_alloc: 251658240 data_used: 38731776
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 24993792 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.654383+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 24813568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.654577+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 22355968 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.654773+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 15196160 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.654984+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 158777344 unmapped: 10289152 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.655131+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2184302 data_alloc: 268435456 data_used: 56541184
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.655407+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.655678+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.655943+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.656285+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.656510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.656727+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.656991+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.657209+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161628160 unmapped: 7438336 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.657450+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.657667+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.657864+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.658165+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.658407+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.658627+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.659007+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.659251+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.659562+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.659785+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.660090+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.660420+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.660600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.661021+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.661369+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.661786+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.662008+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.662436+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.662751+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.663004+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.663182+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.762050629s of 34.795322418s, submitted: 5
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165208064 unmapped: 3858432 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.663377+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2233138 data_alloc: 268435456 data_used: 58028032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a4e000/0x0/0x4ffc00000, data 0x6738c9c/0x6810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165330944 unmapped: 3735552 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.663580+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 6807552 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.663758+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165462016 unmapped: 6758400 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.663996+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.664204+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.664400+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2345136 data_alloc: 268435456 data_used: 58961920
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.664601+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.664986+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.665320+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a5e01e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484a9fe000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x5648489743c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.665570+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.665853+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2161675 data_alloc: 251658240 data_used: 49319936
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.666082+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.666645+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.678235054s of 13.497112274s, submitted: 173
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484aa505a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564848144d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.667001+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161587200 unmapped: 10633216 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a42000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484af7c3c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.667238+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.667600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.667997+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.668224+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.668557+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.669074+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.669834+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.670324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.670664+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.671128+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.671442+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.671790+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.671998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.672260+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.672571+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.672993+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.673223+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.673572+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.674124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.674518+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.674713+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.953754425s of 21.209007263s, submitted: 50
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.675052+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1883210 data_alloc: 251658240 data_used: 36667392
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.675470+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.675863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.676304+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1000 session 0x564847f98d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6169000/0x0/0x4ffc00000, data 0x501ec2a/0x50f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.676487+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ac43e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.676988+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.677346+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.679232+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.679472+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.679716+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.680056+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.680341+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.680647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.681150+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.681495+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.682231+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.682605+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.682970+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.683449+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.683681+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.684064+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.684349+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.684806+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.685101+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.685490+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.686575+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.687059+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.687365+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.687679+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.688151+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.688571+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.688874+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.689356+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.689666+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.690132+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.690391+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2982 syncs, 3.80 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2397 writes, 9102 keys, 2397 commit groups, 1.0 writes per commit group, ingest: 9.64 MB, 0.02 MB/s
                                            Interval WAL: 2397 writes, 959 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
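[annotation] The interval half of the dump can be cross-checked against its own summary figures; a two-line sanity check of the 600-second window above:

# Interval figures copied from the DB Stats dump above.
writes, syncs, ingest_mb, interval_s = 2397, 959, 9.64, 600.0
print(f"{writes/syncs:.2f} writes/sync, {ingest_mb/interval_s:.2f} MB/s ingest")
# -> 2.50 writes/sync, 0.02 MB/s ingest — matching the dump's own summary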
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.690789+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.691092+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.691500+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.691809+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.692213+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.692422+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.692774+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69000 session 0x564849e34000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: mgrc ms_handle_reset ms_handle_reset con 0x56484885e000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:27:35 compute-0 ceph-osd[207795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: get_auth_request con 0x56484a7f1000 auth_method 0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: mgrc handle_mgr_configure stats_period=5
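[annotation] The five mgrc lines above are a routine manager-session bounce: the connection to the active ceph-mgr at 192.168.122.100:6800 is reset, a new session is negotiated, and the mgr pushes its reporting config (stats_period=5, i.e. the OSD reports stats every 5 seconds). When triaging a log like this one, counting such events is often enough to distinguish background noise from a flapping daemon; a sketch, with the file path and match strings as illustrative assumptions:

from collections import Counter

# Tally reset/reconnect events in a saved journal export; the path is
# a placeholder and the match strings are copied from the lines above.
events = Counter()
with open("compute-0-ceph-osd.log") as fh:
    for line in fh:
        if "mgrc reconnect Starting new session" in line:
            events["mgr_reconnect"] += 1
        elif "ms_handle_reset" in line:
            events["ms_handle_reset"] += 1
print(events)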
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.693110+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847e8dc00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848045400 session 0x564848144960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6c00 session 0x564847cee1e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848045400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.693649+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.694020+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.694343+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.694641+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.695071+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.695333+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.695714+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.695932+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.696384+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 05 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632763281' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
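[annotation] The mon's audit line embeds the dispatched command as a JSON array, which makes it easy to pull apart programmatically; a sketch keyed to the cmd=[...]: dispatch shape visible above:

import json, re

# Audit message body copied from the ceph-mon log line above.
line = ("log_channel(audit) log [DBG] : from='client.? "
        "192.168.122.100:0/2632763281' entity='client.admin' "
        'cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch')

cmd = json.loads(re.search(r"cmd=(\[.*\]):", line).group(1))
print(cmd[0]["prefix"])   # -> mgr metadata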
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.699302+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.699531+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.699944+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.700334+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.700717+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.701051+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.701330+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.701536+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.701731+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.702036+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.702302+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.702628+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.703021+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.703341+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.703684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.703914+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.704108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.704442+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.704663+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.705079+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.705510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.706186+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.706484+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.706825+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.707238+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.707570+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.707878+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.708580+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.708863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.709297+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.709684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.710071+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.710346+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.710838+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.711207+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.711571+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.711788+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.712133+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.712368+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.712656+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.712959+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.713158+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.713478+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.713956+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.714299+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.714683+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac425a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484ac423c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x5648488f1e00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a842d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.254913330s of 99.396072388s, submitted: 32
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8423c0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a842f00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 40951808 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x564848a7a5a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.714920+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848005c20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1400 session 0x56484a91d680
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.715188+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.715583+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612692 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.716068+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.716496+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aeed0e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.716824+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484aeede00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.717259+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484aeec960
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484aeecb40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.717561+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614530 data_alloc: 234881024 data_used: 21114880
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484d2ca000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.717769+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.717984+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.718243+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.719407+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.719594+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614262 data_alloc: 234881024 data_used: 21233664
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.723052+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.723239+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.723446+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.723647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.723838+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.724195+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.724532+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.724940+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.725265+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.726324+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.726658+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.726923+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6000 session 0x5648481421e0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.727363+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.727579+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.727829+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.728098+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.728474+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.728858+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.729240+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.585376740s of 31.791507721s, submitted: 41
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.729430+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138772480 unmapped: 40796160 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.729863+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 40689664 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.730378+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138911744 unmapped: 40656896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.730782+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.731106+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.731302+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.731645+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.731973+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.732194+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.732595+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.732836+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.733281+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734733582s of 12.462025642s, submitted: 110
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.733680+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.734043+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f71fb000/0x0/0x4ffc00000, data 0x3f83c4a/0x405b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.734554+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 40353792 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.734787+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.735171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.735487+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.735703+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.735986+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.736410+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.736824+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.737184+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.737416+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 39272448 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:58.737792+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.947365761s of 12.310205460s, submitted: 67
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.738021+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.738193+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.738389+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.738683+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.738870+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.739108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.739499+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.739831+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.740262+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.740511+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.740673+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.741018+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.741392+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.741670+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.741930+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.742105+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.742384+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.742733+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.743161+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.743553+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.743982+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.747040+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.747345+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.748274+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.748603+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.748823+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.749098+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.749409+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.749600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.749781+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.749996+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.750165+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.750362+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.750564+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.750754+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.751157+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.751483+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.751751+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.752145+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.973342896s of 39.005554199s, submitted: 6
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.752448+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.752830+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704558 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.753279+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.753838+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.754265+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.754685+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.754961+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.755430+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.755947+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.756355+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.756706+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.757115+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.757466+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.757702+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.757962+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.758213+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.758552+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.758932+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.759129+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.759431+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.759633+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.475843430s of 21.493835449s, submitted: 3
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.760246+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.760447+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.760771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.761228+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.761640+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.761956+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.762419+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.762780+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.763108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.763296+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.763637+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.763914+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.764171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.764461+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.764714+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.765047+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.765328+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.765619+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.765850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.766041+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.766225+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.766414+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.766590+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.766791+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.767063+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.767279+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.767684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.768108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.768301+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.768572+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.689928055s of 29.696563721s, submitted: 1
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.768983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.769418+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.769695+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.770111+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.770602+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.771062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.771558+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.771950+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.772211+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.772493+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.773036+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.773237+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.773572+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.773732+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.774097+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.774600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.774843+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.775024+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.775229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.775647+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.776017+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.776270+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.776657+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.777055+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.777259+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.777642+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.777975+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.778353+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.778744+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.778995+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.779288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.779634+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.780051+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.780404+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.780760+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.781129+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.781606+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.781972+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.782363+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.782735+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.783083+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.783372+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.783745+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.784054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.784385+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.784773+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.785441+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.785849+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.786186+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.786358+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.786874+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.787350+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.787678+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.788078+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.788377+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.788684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.789112+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.789850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.790737+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.791290+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.798088+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.798646+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.799967+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.800581+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.801228+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.801493+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.802128+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.803243+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.803509+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.804044+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.804717+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.805136+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.805767+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.806026+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.806387+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.806640+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.807377+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.807741+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.808094+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.808286+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.808639+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.808999+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.809318+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.809503+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.809714+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.809938+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.810175+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.810375+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.810599+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.810804+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.811052+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.811254+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.811771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.812019+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.812240+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.812456+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.812688+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.812938+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.813161+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.813341+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.813560+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.813770+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.814045+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.814250+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.814440+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.814655+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.814928+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.815118+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.816151+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.817789+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.819522+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.821214+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.822835+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.824050+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.824311+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.824722+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.825021+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.825352+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.825744+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.826178+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.826711+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.827153+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.827618+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.828076+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.828629+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.829224+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.829703+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.830110+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.830531+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.831011+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.831345+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.831747+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.832128+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.832566+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.832980+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.833350+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.833711+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.834095+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.834444+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.834864+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.835174+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.835513+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.835987+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.836333+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.836673+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.837036+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.837508+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.837786+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.838196+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.838545+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.838874+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.839182+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.839455+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.839699+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.839983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.840362+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 39018496 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.840605+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.841044+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.841405+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.841752+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.841952+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.842368+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.842774+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.843256+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.843554+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.844014+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.844355+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.844604+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.844983+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.845223+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.845424+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.845740+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.846022+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.846300+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.846666+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.846954+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.847322+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.847623+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.848160+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.848510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.848837+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.849300+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.849754+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.850112+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.850471+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.850820+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.851326+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.851735+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.852149+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.852470+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.852861+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.853247+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.853653+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.854023+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.854397+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.854773+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.855314+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.855569+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.856009+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.856369+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.378692627s of 200.388336182s, submitted: 1
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.856746+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.857144+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.857622+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704586 data_alloc: 234881024 data_used: 25264128
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.858133+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.858505+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.858852+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.859342+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.859681+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.860220+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.860629+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.861181+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.861530+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.861811+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.862164+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.862485+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.862871+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.863497+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.863823+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705590 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.864288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.864694+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.865175+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.865535+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.224842072s of 22.257825851s, submitted: 4
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.865846+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.866253+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.866603+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.866832+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140951552 unmapped: 38617088 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.867142+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.867319+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.867557+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.867794+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.868157+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.868482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.868846+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.869270+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.869652+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.870167+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.544095993s of 13.567526817s, submitted: 2
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.870652+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.871198+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.871721+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.872328+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.872738+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.873105+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.873308+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.873719+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.874112+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.874370+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.874753+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.875215+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.875494+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.875973+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.876362+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.876751+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.877173+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.877568+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.878004+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.878410+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.878806+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.879159+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.879550+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.880120+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.880536+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.881023+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.881422+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.881837+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.882239+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.882614+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.883119+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.883560+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.884144+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.884524+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.884987+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.885220+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.885679+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.886171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.886510+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.887008+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.887404+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.887824+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.888203+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.888482+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.888850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.889229+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.889673+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.891361+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.893318+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.894035+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.894561+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.895108+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.896219+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.896586+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.897152+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.897794+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.898255+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.898606+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.899043+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.899385+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.899807+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.899998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.900208+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.900459+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.900805+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.901109+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.901356+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.901713+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.902123+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.902560+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.903058+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.903318+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.903657+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.904170+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.904539+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.905048+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.905352+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.905761+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.906238+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.906575+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.907054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.907423+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.907810+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.908171+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.908615+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.909120+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.909464+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.909799+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.910160+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.910470+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.910857+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.911184+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.911455+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.911803+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.912191+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.912550+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.912760+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.913075+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.913464+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.913771+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.914288+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.914684+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.915005+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.915343+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.915987+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.916369+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.916682+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.917062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.917380+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.917578+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.918006+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.918235+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.918599+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.918850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.919110+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.919458+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.919690+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.920172+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.920505+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.920797+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.921294+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.921648+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.922046+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.922410+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.922755+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.923045+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.923347+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.923685+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.924107+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.924418+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.924773+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.925149+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.925557+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.926015+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.926338+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.926709+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.927074+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.927467+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.927794+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.928179+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.928615+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.929056+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.929455+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.929845+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.930286+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.930734+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.931216+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.931655+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.932073+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.932500+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.933090+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.933418+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.933801+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.934237+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3184 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 439 writes, 1119 keys, 439 commit groups, 1.0 writes per commit group, ingest: 1.04 MB, 0.00 MB/s
                                            Interval WAL: 439 writes, 202 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.934596+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.935165+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.935569+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.936074+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.936412+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.936793+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.937131+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.937454+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.937861+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.938246+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.938654+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.939214+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.939558+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.940062+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.940220+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.940541+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.941081+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.941435+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.941782+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.942099+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.942476+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.943168+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.943525+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.944385+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.944699+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.945385+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.945971+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.946636+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.947232+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.947574+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.948103+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.948538+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.948968+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.949356+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.949699+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.950086+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.950582+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.951055+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.951315+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.951827+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.952283+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.952675+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.953079+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.953427+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.953787+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.954142+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.954458+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.954748+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.954985+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.955554+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.955738+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.956124+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.956363+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.956570+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.956864+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.957268+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.958168+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 211.454177856s of 211.471817017s, submitted: 2
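
The _kv_sync_thread line above is a duty-cycle report: idle for 211.454177856 s of a 211.471817017 s window with only 2 transactions submitted means the kv sync thread was busy for roughly 17.6 ms, about 0.008% of the interval, so this OSD is essentially idle. The later reports in this section tell the same story at slightly higher load (idle 13.05 s of 13.41 s with 59 submits is ~2.6% busy, during the osdmap catch-up further down). The arithmetic as a one-liner:

    def kv_sync_busy_pct(idle_s, window_s):
        """Busy share of a _kv_sync_thread utilization window, in percent."""
        return 100.0 * (window_s - idle_s) / window_s

    print(kv_sync_busy_pct(211.454177856, 211.471817017))  # ~0.0083
    print(kv_sync_busy_pct(13.053515434, 13.407748222))    # ~2.6
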
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69800 session 0x56484a9b85a0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847f9cb40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.958405+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x564849e35860
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86b5000/0x0/0x4ffc00000, data 0x261bbe8/0x26f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.958787+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.959187+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.959583+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.960011+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.960334+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.960629+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86df000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.961123+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.961505+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.961861+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86df000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.962268+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.962544+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.962826+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.963115+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053515434s of 13.407748222s, submitted: 59
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1800 session 0x56484aeec000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484ab36b40
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b98000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,1])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.963451+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484ab0e000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.963790+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.964251+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315764 data_alloc: 218103808 data_used: 11771904
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.964586+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.965054+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.965490+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9619000/0x0/0x4ffc00000, data 0x19a9b33/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.965998+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.966417+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.966815+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315764 data_alloc: 218103808 data_used: 11771904
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.967227+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9619000/0x0/0x4ffc00000, data 0x19a9b33/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.967777+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.968190+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.907560349s of 12.213282585s, submitted: 45
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 139 ms_handle_reset con 0x56484a7f1800 session 0x564847f9c780
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.968639+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69800
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.969068+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352751 data_alloc: 218103808 data_used: 11780096
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 140 ms_handle_reset con 0x56484af69800 session 0x564847f98d20
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.969439+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 48660480 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 ms_handle_reset con 0x56484af69c00 session 0x5648488f1e00
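
Buried in the tick noise above is an osdmap catch-up: osd.1 renews its monitor subscription (_renew_subs, then _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0) and walks from epoch 138 to 141 across three handle_osd_map messages ([139,139], then [139,140], then [141,141]), with ms_handle_reset firing on monitor connections around each step, consistent with sessions being re-established across the updates (an inference from adjacency, not stated by the log). A sketch that follows the epoch progression and flags any gap:

    import re

    MAP_RE = re.compile(
        r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]"
    )

    def follow_epochs(lines):
        """Track the OSD's map epoch across handle_osd_map messages."""
        have = None
        for line in lines:
            m = MAP_RE.search(line)
            if not m:
                continue
            _lo, last, before, src_lo, src_hi = map(int, m.groups())
            if have is not None and before != have:
                print(f"gap: last saw epoch {have}, log now reports {before}")
            have = last  # after applying the batch the OSD holds `last`
            print(f"epoch {before} -> {have} (monitor has [{src_lo},{src_hi}])")

    follow_epochs([
        "osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]",
        "osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]",
        "osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]",
    ])
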
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.969850+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.970254+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.970672+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.971164+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329473 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.971555+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.972021+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.972351+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.972719+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.973107+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329473 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 ms_handle_reset con 0x56484ac70c00 session 0x564848ec6000
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.952308655s of 12.327050209s, submitted: 54
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.973513+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131997696 unmapped: 47570944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.974023+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 47505408 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.974295+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 47505408 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.974574+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.975085+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.975472+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.975862+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.976343+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.976768+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.977129+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.977498+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.978070+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.978436+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.978803+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.979222+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.979563+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.979994+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.980346+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.980725+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.981194+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 05 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513712806' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
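
These two ceph-mon lines are the monitor's command path for an admin query: handle_command receives mon_command({"prefix": "osd crush show-tunables"}) from client.admin at 192.168.122.100, and the audit channel records the dispatch. The cmd=[...] payload in audit entries is plain JSON, so they can be mined directly; a sketch built against the entry above:

    import json
    import re

    AUDIT_RE = re.compile(r"from='([^']*)' entity='([^']*)' cmd=(\[.*\]): (\w+)")

    def parse_audit(line):
        m = AUDIT_RE.search(line)
        if not m:
            return None
        src, entity, cmd_json, outcome = m.groups()
        return {"from": src, "entity": entity,
                "cmd": json.loads(cmd_json), "outcome": outcome}

    sample = ("log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513712806'"
              " entity='client.admin' cmd=[{\"prefix\": \"osd crush show-tunables\"}]: dispatch")
    print(parse_audit(sample)["cmd"][0]["prefix"])  # osd crush show-tunables
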
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.981562+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.982155+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.982538+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.983018+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.983326+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.983546+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.983722+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.984163+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.984483+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.984848+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.985158+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.985524+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.985875+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.986181+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.986581+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.987053+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.987426+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.987839+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.988200+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.988543+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.988966+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.989429+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.989820+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.990190+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.990600+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.991052+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.991459+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.991827+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.992266+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.992652+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.993100+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.993526+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.994013+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.994321+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.994678+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.995096+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.995534+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.996033+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.996439+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.997013+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.997433+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.997875+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.998455+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.999030+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.999428+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.000003+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.000375+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.000746+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.001251+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.001596+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.002033+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.002527+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.002796+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.003157+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.003426+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.003809+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.004026+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.004377+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.004728+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.004991+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.005289+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.005483+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.005769+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.006163+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.006337+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:00.006526+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:01.006709+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:02.007126+0000)
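Every monclient tick in this run is journal-stamped 02:27:35, yet the rotating-key expiry it reports advances by exactly one second per tick: the ticks fired once per second and were flushed to the journal in one burst (consistent with the imjournal rate-limiting notice further down). The real cadence can be recovered from the payload; a sketch using two expiry strings copied from consecutive ticks above:

    from datetime import datetime

    # Expiries from two adjacent _check_auth_rotating lines above.
    a = "2025-12-05T02:27:01.006709+0000"
    b = "2025-12-05T02:27:02.007126+0000"

    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    delta = datetime.strptime(b, fmt) - datetime.strptime(a, fmt)
    print(delta.total_seconds())   # ~1.0 -> one monclient tick per second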
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 47489024 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}'
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:03.007435+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 47513600 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
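The do_command entries show something polling osd.1's admin socket for 'config diff', 'config show', 'counter dump', 'counter schema' and 'log dump'. The same requests can be issued by hand with the ceph CLI's daemon mode; a minimal subprocess wrapper (daemon name and default socket discovery as in a stock deployment — adjust to taste):

    import json
    import subprocess

    def osd_admin(daemon: str, *cmd: str):
        """Run an admin-socket command via 'ceph daemon' and parse the JSON reply."""
        out = subprocess.check_output(["ceph", "daemon", daemon, *cmd])
        return json.loads(out)

    # The same commands the log shows being serviced:
    schema   = osd_admin("osd.1", "counter", "schema")
    counters = osd_admin("osd.1", "counter", "dump")
    config   = osd_admin("osd.1", "config", "show")
    print(len(config), "config options,", len(counters), "counter sections")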
Dec 05 02:27:35 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:04.007610+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 47685632 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:27:35 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:05.007814+0000)
Dec 05 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 47325184 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:35 compute-0 ceph-osd[207795]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:27:36 compute-0 rsyslogd[188644]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
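The rsyslog message above means imjournal's per-source rate limiter tripped: it allows a burst of messages per interval and silently drops the excess until the window rolls over (stock defaults are on the order of 20000 messages per 600 seconds — treat those exact numbers as an assumption and check ratelimit.interval / ratelimit.burst in your imjournal configuration). A toy model of that window-style limiter:

    class WindowRateLimiter:
        """Interval/burst limiter in the style of imjournal's ratelimit.*"""
        def __init__(self, interval_s: float, burst: int):
            self.interval, self.burst = interval_s, burst
            self.window_start, self.count = float("-inf"), 0

        def allow(self, now: float) -> bool:
            if now - self.window_start >= self.interval:
                self.window_start, self.count = now, 0   # new window
            self.count += 1
            return self.count <= self.burst

    rl = WindowRateLimiter(interval_s=600, burst=20000)
    dropped = sum(not rl.allow(t * 0.001) for t in range(30000))
    print(f"dropped {dropped} of 30000 burst messages")   # -> dropped 10000

A ceph-osd at debug verbosity, flushing a backlog of once-per-second ticks in one burst, emits more than enough to exhaust such a budget, which is the failure mode logged here.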
Dec 05 02:27:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 05 02:27:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837677437' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:36 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:27:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 05 02:27:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480691537' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 02:27:36 compute-0 crontab[473254]: (root) LIST (root)
Dec 05 02:27:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/827734816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2632763281' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/513712806' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1837677437' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1480691537' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 05 02:27:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273559648' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 05 02:27:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728821913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 02:27:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
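The pgmap line is the mgr's cluster-wide digest (PG states plus raw usage). A regex sketch pulling the figures out of the exact format above — note that a busier cluster lists several comma-separated state buckets, which this simple pattern does not cover:

    import re

    line = ("pgmap v2362: 321 pgs: 321 active+clean; "
            "57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: "
        r"(?P<in_state>\d+) (?P<state>\S+); "
        r"(?P<data>\d+ \S+) data, (?P<used>\d+ \S+) used, "
        r"(?P<avail>\d+ \S+) / (?P<total>\d+ \S+) avail", line)
    print(m.groupdict())
    # {'ver': '2362', 'pgs': '321', 'in_state': '321',
    #  'state': 'active+clean', 'data': '57 MiB', 'used': '279 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}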
Dec 05 02:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 05 02:27:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040492597' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 05 02:27:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309626779' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4273559648' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3728821913' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: pgmap v2362: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3040492597' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2309626779' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 05 02:27:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043439777' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 02:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 05 02:27:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600017680' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
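Each handle_command / audit-dispatch pair above is a client submitting a JSON mon command — the same wire call the ceph CLI makes. With the python3-rados bindings the identical request can be sent directly; a sketch, assuming a readable /etc/ceph/ceph.conf and a client.admin keyring, and noting the reply layout (a "nodes" list) is what current releases emit:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.admin") as cluster:
        # Mirrors the 'osd crush tree' command dispatched in the log.
        cmd = {"prefix": "osd crush tree", "show_shadow": True,
               "format": "json"}
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(errs)
        print(len(json.loads(out).get("nodes", [])), "crush nodes")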
Dec 05 02:27:37 compute-0 nova_compute[349548]: 2025-12-05 02:27:37.837 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:37 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15663 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:38 compute-0 sudo[473509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:38 compute-0 sudo[473509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:38 compute-0 sudo[473509]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:38 compute-0 sudo[473556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:27:38 compute-0 sudo[473556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:38 compute-0 sudo[473556]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1043439777' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1600017680' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mon[192914]: from='client.15663 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mon[192914]: from='client.15665 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15667 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:38 compute-0 sudo[473600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:38 compute-0 sudo[473600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:38 compute-0 sudo[473600]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:38 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15669 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:38 compute-0 sudo[473630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:27:38 compute-0 sudo[473630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:38 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:38 compute-0 sudo[473630]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:27:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:27:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:27:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d94c3e4-a8fe-4e13-b787-276a50524397 does not exist
Dec 05 02:27:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3819ab94-53aa-4a17-988a-5e84ef09c771 does not exist
Dec 05 02:27:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5bbb89e7-8998-4cee-90aa-7d08e34acc1d does not exist
Dec 05 02:27:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:39 compute-0 sudo[473765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:39 compute-0 sudo[473765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:39 compute-0 sudo[473765]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:39 compute-0 podman[473811]: 2025-12-05 02:27:39.262611951 +0000 UTC m=+0.125549394 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec 05 02:27:39 compute-0 podman[473809]: 2025-12-05 02:27:39.272079925 +0000 UTC m=+0.135612295 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:27:39 compute-0 podman[473808]: 2025-12-05 02:27:39.270949893 +0000 UTC m=+0.132579128 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 05 02:27:39 compute-0 sudo[473841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:27:39 compute-0 sudo[473841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:39 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15677 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:39 compute-0 sudo[473841]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:39 compute-0 podman[473810]: 2025-12-05 02:27:39.305817664 +0000 UTC m=+0.168385086 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
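The four podman health_status events above are the periodic healthchecks wired into the edpm containers (each config_data blob carries its 'healthcheck' test and mount). The recorded state can be read back with podman's inspect template; a subprocess sketch against the real CLI:

    import subprocess

    def health(container: str) -> str:
        """Return podman's recorded health state for a container."""
        out = subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container])
        return out.decode().strip()

    for name in ("openstack_network_exporter", "node_exporter",
                 "multipathd", "ovn_controller"):
        print(f"{name}: {health(name)}")   # the log shows all four healthy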
Dec 05 02:27:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315727408' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 02:27:39 compute-0 sudo[473919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:39 compute-0 sudo[473919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:39 compute-0 sudo[473919]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='client.15667 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='client.15669 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: pgmap v2363: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='client.15673 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='client.15677 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1315727408' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 02:27:39 compute-0 sudo[473950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:27:39 compute-0 sudo[473950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
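The sudo line above is cephadm driving ceph-volume inside the Ceph container image: everything after the bare '--' is handed to ceph-volume itself ('lvm batch --no-auto' over three pre-created LVs, '--yes --no-systemd' because cephadm manages the units), with the drive-group name passed via CEPH_VOLUME_OSDSPEC_AFFINITY. A small sketch that takes the recorded argv apart to show that layering:

    import shlex

    # Command string copied from the sudo log entry above.
    argv = shlex.split(
        "cephadm --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group"
        " --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459f"
        "a19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume"
        " --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - --"
        " lvm batch --no-auto /dev/ceph_vg0/ceph_lv0"
        " /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd")

    sep = argv.index("--")                  # first bare '--' token
    outer, inner = argv[:sep], argv[sep + 1:]
    devices = [a for a in inner if a.startswith("/dev/")]
    print("ceph-volume subcommand:", " ".join(inner[:3]))  # lvm batch --no-auto
    print("OSD devices:", devices)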
Dec 05 02:27:39 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 05 02:27:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595884862' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 02:27:39 compute-0 podman[474085]: 2025-12-05 02:27:39.956675567 +0000 UTC m=+0.057940442 container create c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:27:40 compute-0 systemd[1]: Started libpod-conmon-c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd.scope.
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:39.926834371 +0000 UTC m=+0.028099286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:40.072083135 +0000 UTC m=+0.173348020 container init c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:40.090662234 +0000 UTC m=+0.191927099 container start c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:40.094953669 +0000 UTC m=+0.196218624 container attach c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 05 02:27:40 compute-0 sad_chatelet[474120]: 167 167
Dec 05 02:27:40 compute-0 systemd[1]: libpod-c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd.scope: Deactivated successfully.
Dec 05 02:27:40 compute-0 conmon[474120]: conmon c541b6c3cdd23c3cfe28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd.scope/container/memory.events
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:40.098326287 +0000 UTC m=+0.199591152 container died c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 02:27:40 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15683 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-de6a8341ef1031ef9e592a3b5dc60ae33c4484f9ef6e0b1ac81e9ce88a88e298-merged.mount: Deactivated successfully.
Dec 05 02:27:40 compute-0 podman[474085]: 2025-12-05 02:27:40.1497948 +0000 UTC m=+0.251059665 container remove c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:27:40 compute-0 systemd[1]: libpod-conmon-c541b6c3cdd23c3cfe289085c36b1a24f599f058047603df3e82dffab52f05dd.scope: Deactivated successfully.
Dec 05 02:27:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 05 02:27:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270653386' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 2269184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:45.217638+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113019 data_alloc: 218103808 data_used: 7467008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:46.217840+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:47.218065+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:48.218272+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:49.218506+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:50.218710+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113019 data_alloc: 218103808 data_used: 7467008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:51.218940+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:52.219146+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:53.219336+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:54.219521+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 2260992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:55.219768+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113019 data_alloc: 218103808 data_used: 7467008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:56.220515+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:57.221724+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:58.223322+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:54:59.224982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2236416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:00.225915+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113019 data_alloc: 218103808 data_used: 7467008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2236416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:01.226872+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2236416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:02.228386+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2236416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:03.229344+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 2236416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:04.232672+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:05.234346+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113019 data_alloc: 218103808 data_used: 7467008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:06.235599+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbd73000/0x0/0x4ffc00000, data 0xde489a/0xeab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 2244608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:07.236992+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.267864227s of 48.277412415s, submitted: 1
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 11739136 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e6e6f800 session 0x5630e64c6f00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e9024960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:08.237520+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8867000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867000 session 0x5630e617a000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8867000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867000 session 0x5630e86da3c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e6e6f800 session 0x5630e9025860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e72b72c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f2000 session 0x5630e8cb81e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f3400 session 0x5630e8cb8d20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 11665408 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:09.237811+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f3400 session 0x5630e64e8780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e6e6f800 session 0x5630e79e2780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e8493860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8867000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867000 session 0x5630e64563c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6e6000/0x0/0x4ffc00000, data 0x146f90c/0x1538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 11665408 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:10.238184+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168066 data_alloc: 218103808 data_used: 7475200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 11665408 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:11.238600+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 11665408 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:12.238817+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f2000 session 0x5630e700c1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6e6000/0x0/0x4ffc00000, data 0x146f90c/0x1538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f2000 session 0x5630e62b4960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 11673600 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:13.239230+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e96fef00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8867000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867000 session 0x5630e96ff4a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 11173888 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:14.239717+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dccc00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 11091968 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:15.240074+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174104 data_alloc: 218103808 data_used: 7475200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 11091968 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:16.240388+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 11091968 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:17.240689+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 10960896 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:18.241016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89735168 unmapped: 7774208 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:19.241498+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 4677632 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:20.241783+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223224 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 4644864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:21.242193+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 4644864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:22.242544+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 4644864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:23.242759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 4644864 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:24.242993+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:25.243301+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223224 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:26.244266+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:27.244586+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:28.244803+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:29.245133+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:30.245509+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223224 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:31.245718+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92872704 unmapped: 4636672 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:32.245954+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 4628480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:33.246163+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92880896 unmapped: 4628480 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:34.246392+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.114095688s of 26.401132584s, submitted: 47
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 4595712 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:35.246621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93020160 unmapped: 4489216 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223048 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:36.246964+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 4399104 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:37.247157+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:38.247522+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:39.247950+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:40.248362+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223048 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:41.248646+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:42.248850+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:43.249302+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93126656 unmapped: 4382720 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:44.249705+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 4374528 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.587363243s of 10.130939484s, submitted: 106
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:45.249968+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223224 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:46.250202+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:47.250506+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:48.250786+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:49.251053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:50.251282+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93175808 unmapped: 4333568 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fb6a9000/0x0/0x4ffc00000, data 0x14ab91c/0x1575000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223224 data_alloc: 234881024 data_used: 14254080
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:51.251597+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 93192192 unmapped: 4317184 heap: 97509376 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:52.251784+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5300224 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:53.252008+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 5300224 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:54.252437+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98295808 unmapped: 5079040 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:55.252738+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98295808 unmapped: 5079040 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:56.253186+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98361344 unmapped: 5013504 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:57.253446+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98361344 unmapped: 5013504 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:58.253811+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98361344 unmapped: 5013504 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:55:59.254184+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98361344 unmapped: 5013504 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:00.254419+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 4980736 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:01.254739+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98394112 unmapped: 4980736 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:02.255059+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 4947968 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:03.255754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98435072 unmapped: 4939776 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:04.256082+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98435072 unmapped: 4939776 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:05.256334+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98435072 unmapped: 4939776 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:06.256539+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 4931584 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:07.256873+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98443264 unmapped: 4931584 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:08.257174+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 4898816 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:09.257427+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 4898816 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:10.257709+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 4898816 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:11.258230+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 4898816 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:12.258589+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:13.259083+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:14.259330+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:15.259988+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:16.260283+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:17.260601+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:18.261028+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:19.261463+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:20.261673+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 4882432 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:21.262069+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 4882432 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:22.262268+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 4882432 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:23.262707+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 4882432 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:24.262996+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.671836853s of 39.983684540s, submitted: 50
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:25.263397+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:26.263792+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:27.264191+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:28.264534+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:29.265108+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:30.265350+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:31.266035+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:32.266264+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:33.266644+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:34.267122+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:35.267483+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 4874240 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:36.268202+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:37.268388+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:38.268839+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:39.269266+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:40.269661+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:41.270151+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:42.270352+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:43.270591+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 4866048 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:44.271003+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:45.271356+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:46.271747+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:47.272110+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:48.272505+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:49.273024+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:50.273580+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:51.273954+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:52.274330+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:53.274697+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:54.275122+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:55.275468+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:56.275966+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:57.276217+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:58.276450+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 4857856 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:56:59.276748+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 4849664 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:00.276969+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 4849664 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:01.277325+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 4849664 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:02.277642+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 4849664 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:03.278006+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 4849664 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:04.278392+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:05.278820+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:06.279165+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:07.279525+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:08.279875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:09.280461+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 4841472 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:10.280737+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:11.281109+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:12.281491+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:13.281877+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:14.289169+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:15.289415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:16.289615+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:17.290040+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:18.290297+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:19.290723+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:20.290991+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:21.291379+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:22.291707+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:23.291989+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98541568 unmapped: 4833280 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:24.292294+0000)
Dec 05 02:27:40 compute-0 podman[474172]: 2025-12-05 02:27:40.379243497 +0000 UTC m=+0.077877411 container create e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 4825088 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:25.292703+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 4825088 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:26.292961+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 4825088 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:27.293214+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 4825088 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:28.293623+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 4825088 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:29.293999+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:30.294254+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:31.294417+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:32.294830+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:33.295053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:34.295445+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:35.295835+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:36.296234+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:37.296497+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:38.296856+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:39.297306+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:40.297664+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:41.298126+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:42.298455+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:43.298766+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 4816896 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:44.299071+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:45.299313+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:46.299697+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:47.300108+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:48.300354+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:49.300732+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98566144 unmapped: 4808704 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:50.301075+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:51.301501+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:52.301850+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:53.302297+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:54.302649+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:55.303053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:56.303362+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:57.303688+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:58.304009+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:57:59.304381+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:00.304762+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:01.304985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:02.305181+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 4800512 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:03.305454+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 4784128 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:04.305820+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:05.306027+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:06.306360+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:07.306573+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:08.306976+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:09.307384+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:10.307651+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:11.307986+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:12.308321+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:13.308656+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:14.309034+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:15.309224+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:16.309601+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:17.310191+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:18.310536+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:19.310866+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:20.311155+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:21.311615+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:22.313261+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:23.313688+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:24.314147+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:25.314429+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:26.314689+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:27.315129+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:28.315640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:29.316214+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:30.316540+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:31.317005+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:32.317246+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:33.317640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:34.317961+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:35.318188+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:36.318537+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:37.318872+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:38.319169+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:39.319526+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:40.319720+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:41.320110+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:42.320538+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:43.320785+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:44.321170+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:45.321460+0000)
Dec 05 02:27:40 compute-0 ceph-mon[192914]: from='client.15680 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3595884862' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-mon[192914]: from='client.15683 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/270653386' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:46.321711+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 4775936 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:47.322065+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 4907008 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:48.322308+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 4907008 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:49.322759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 4907008 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:50.323093+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 4907008 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:51.323397+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98467840 unmapped: 4907008 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:52.323753+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98476032 unmapped: 4898816 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets getting new tickets!
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:53.324323+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _finish_auth 0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:53.326330+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:54.324702+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:55.325028+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:56.325366+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98484224 unmapped: 4890624 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:57.325676+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280362 data_alloc: 234881024 data_used: 14458880
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4f9f3a000/0x0/0x4ffc00000, data 0x1a7a91c/0x1b44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98492416 unmapped: 4882432 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e9033400 session 0x5630e8a2c960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 153.529281616s of 153.565139771s, submitted: 1
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e9033c00 session 0x5630e700d860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8dcc400 session 0x5630e8c4a780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:58.325925+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98238464 unmapped: 5136384 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:58:59.326361+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e8cb94a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa203000/0x0/0x4ffc00000, data 0x17b290c/0x187b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:00.326857+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:01.327018+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:02.327425+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:03.327727+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:04.328051+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:05.328529+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:06.328826+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867800 session 0x5630e900c1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8867000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:07.329179+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:08.329621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:09.330146+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:10.330518+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:11.330753+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:12.331123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:13.331637+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:14.332220+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:15.332622+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:16.332833+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:17.333248+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:18.333765+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:19.334204+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:20.334586+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:21.334833+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:22.335212+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:23.335658+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:24.336067+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:25.336405+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:26.336782+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:27.337098+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:28.337440+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:29.337871+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:30.338367+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:31.338683+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:32.339093+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:33.339480+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97861632 unmapped: 5513216 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:34.339788+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:35.340101+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:36.340457+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:37.340848+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:38.341169+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:39.341594+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:40.342112+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:41.342365+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:42.342806+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:43.343188+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:44.343573+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:45.344041+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:46.344254+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:47.344530+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:48.344868+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:49.345268+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:50.345707+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:51.345977+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:52.346254+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:53.346548+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:54.346830+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:55.347212+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:56.347599+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:57.347846+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:58.348036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T01:59:59.348392+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:00.348780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:01.349023+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:02.349296+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:03.349604+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:04.350055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:05.350272+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:06.350523+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:07.351004+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:08.351407+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:09.351976+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:10.352273+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:11.352544+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:12.352978+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:13.353376+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:14.353967+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:15.354379+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:16.354765+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:17.355128+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:18.355686+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:19.356223+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:20.356680+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:21.357148+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:22.357439+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
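The paired rocksdb lines repeat on every tuning pass, apparently once per cache that commit_cache_size touches, and the two ratios look like exact simple fractions. A quick arithmetic check, which assumes nothing about how BlueStore actually derives them:

    from fractions import Fraction

    # Recover the nearest small fraction for each ratio printed above.
    for r in (0.285714, 0.0555556):
        print(r, "~=", Fraction(r).limit_denominator(100))
    # 0.285714 ~= 2/7, 0.0555556 ~= 1/18

So the high-priority pool ratio is 2/7 in one case and 1/18 in the other; the absolute shard sizes follow in the _resize_shards line.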
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:23.357880+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:24.358327+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:25.358606+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:26.358855+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8867c00 session 0x5630e8c87c20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9033400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:27.359234+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:28.359679+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:29.360040+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:30.360471+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:31.360754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:32.361179+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:33.361551+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:34.362038+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:35.362418+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:36.362756+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:37.363034+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:38.363433+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:39.363996+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:40.364370+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:41.364875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:42.365395+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:43.365745+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:44.366146+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:45.366588+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:46.367187+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:47.367420+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:48.367793+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:49.368323+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:50.368813+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:51.369188+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:52.374051+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238352 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:53.374395+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:54.374828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:55.375408+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa23f000/0x0/0x4ffc00000, data 0x17768e9/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:56.375860+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:57.376265+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.443283081s of 119.677101135s, submitted: 37
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f2c00 session 0x5630e700d680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f3000 session 0x5630e700de00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f3800 session 0x5630e6b532c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97869824 unmapped: 5505024 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237676 data_alloc: 234881024 data_used: 14229504
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:58.376519+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e837f800 session 0x5630e848eb40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:00:59.376936+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:00.377314+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:01.377639+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:02.377875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:03.378295+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:04.378569+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:05.378781+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:06.379174+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:07.379490+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:08.379975+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:09.380383+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:10.380720+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:11.381153+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:12.381591+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:13.382053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:14.382624+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:15.383042+0000)
Dec 05 02:27:40 compute-0 podman[474172]: 2025-12-05 02:27:40.332701657 +0000 UTC m=+0.031335601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:16.383421+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:17.383683+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:18.383923+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:19.384319+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:20.385039+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:21.385564+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:22.386342+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:23.386618+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:24.387071+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:25.387442+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:26.388158+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:27.388564+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:28.389142+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174704 data_alloc: 218103808 data_used: 10719232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fa285000/0x0/0x4ffc00000, data 0x13c77f3/0x148b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcc400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e8dcc400 session 0x5630e73f3680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.114091873s of 31.488634109s, submitted: 68
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 ms_handle_reset con 0x5630e98f2c00 session 0x5630e8a2cf00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 126 handle_osd_map epochs [127,127], i have 127, src has [1,127]
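The two handle_osd_map messages just above show a map update landing: the first arrives while the OSD still holds epoch 126, the second confirms it now holds 127, and the heartbeat lines further down switch from "osd.0 126" to "osd.0 127" accordingly. A sketch that extracts exactly that from such a line, assuming only the format printed here:

    import re

    def parse_handle_osd_map(line):
        # Pull the delivered epoch range and the local epoch out of a
        # "handle_osd_map epochs [a,b], i have c" message.
        m = re.search(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)", line)
        first, last, have = map(int, m.groups())
        return first, last, have

    for line in (
        "osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]",
        "osd.0 126 handle_osd_map epochs [127,127], i have 127, src has [1,127]",
    ):
        first, last, have = parse_handle_osd_map(line)
        print(f"have {have}, offered up to {last}:",
              "advances local map" if last > have else "already current")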
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:29.389805+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96772096 unmapped: 6602752 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:30.390182+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:31.390428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:32.390855+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:33.391297+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178878 data_alloc: 218103808 data_used: 10727424
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:34.391787+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa281000/0x0/0x4ffc00000, data 0x13c9370/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:35.392206+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:36.392549+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:37.393167+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa281000/0x0/0x4ffc00000, data 0x13c9370/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:38.393515+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178878 data_alloc: 218103808 data_used: 10727424
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa281000/0x0/0x4ffc00000, data 0x13c9370/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96780288 unmapped: 6594560 heap: 103374848 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:39.393731+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359286308s of 10.367539406s, submitted: 1
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f3000 session 0x5630ea880780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f2000 session 0x5630e8c4b2c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e8b23c00 session 0x5630e79e2780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e7464c00 session 0x5630e8c87a40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e7464c00 session 0x5630e9025e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97517568 unmapped: 12673024 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:40.394202+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e837f800 session 0x5630e72b6000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97533952 unmapped: 12656640 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:41.394422+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e8b23c00 session 0x5630e64561e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97533952 unmapped: 12656640 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:42.394645+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f3000 session 0x5630e86da3c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa14b000/0x0/0x4ffc00000, data 0x186e370/0x1933000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f2000 session 0x5630e86db2c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97165312 unmapped: 13025280 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:43.395088+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f2000 session 0x5630e86d1680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227026 data_alloc: 218103808 data_used: 10727424
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96821248 unmapped: 13369344 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:44.395447+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96837632 unmapped: 13352960 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:45.395963+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 13320192 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:46.396332+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:47.396547+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96854016 unmapped: 13336576 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:48.396980+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261210 data_alloc: 234881024 data_used: 15409152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:49.397444+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:50.397859+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:51.398317+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:52.398706+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:53.398949+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261210 data_alloc: 234881024 data_used: 15409152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:54.399348+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:55.399736+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:56.400035+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:57.400458+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:58.400735+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261210 data_alloc: 234881024 data_used: 15409152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:01:59.401002+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa126000/0x0/0x4ffc00000, data 0x1892380/0x1958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:00.401342+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:01.401684+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 11108352 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.015104294s of 22.262731552s, submitted: 34
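[Annotation] _kv_sync_thread is BlueStore's commit thread reporting its duty cycle over the last sampling window: idle 22.015 s of 22.263 s with 34 transactions submitted, i.e. busy barely 1% of the time.

```python
# Duty cycle computed from the _kv_sync_thread line above.
idle, window, submitted = 22.015104294, 22.262731552, 34
busy = window - idle
print(f"busy {busy:.3f} s ({100 * busy / window:.2f}% of window), "
      f"{submitted / window:.2f} txns/s")
# -> busy 0.248 s (1.11% of window), 1.53 txns/s
```

The later samples below (53 commits in 12.8 s, 11 in 16.6 s) stay in the same near-idle regime.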
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e7464c00 session 0x5630e8c4a1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e837f800 session 0x5630e5a6bc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e8b23c00 session 0x5630e96fe000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:02.402299+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12394496 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 ms_handle_reset con 0x5630e98f3000 session 0x5630e96fe780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:03.402540+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12386304 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186057 data_alloc: 218103808 data_used: 10727424
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:04.403079+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fa5f0000/0x0/0x4ffc00000, data 0x13c9370/0x148e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12386304 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:05.403470+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97681408 unmapped: 12509184 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
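[Annotation] The _renew_subs / _send_mon_message pair is osd.0 renewing its osdmap subscription with mon.compute-0 over the msgr2 endpoint (v2:192.168.122.100:3300), and handle_osd_map is the reply being applied: "epochs [128,128], i have 127, src has [1,128]" says the monitor shipped exactly the one map the OSD was missing, moving it from epoch 127 to 128. The same exchange repeats below for epochs 129 through 132. A toy parser for the bookkeeping (the regex and wording are ours):

```python
import re

MAP_RE = re.compile(
    r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]"
)

def map_progress(line: str) -> str:
    first, last, have, _, src_hi = map(int, MAP_RE.search(line).groups())
    return (f"had epoch {have}, received {last - first + 1} map(s) "
            f"[{first},{last}], mon is at {src_hi}")

print(map_progress("osd.0 127 handle_osd_map epochs [128,128], "
                   "i have 127, src has [1,128]"))
# -> had epoch 127, received 1 map(s) [128,128], mon is at 128
```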
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:06.403791+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 128 ms_handle_reset con 0x5630e98f3000 session 0x5630e90efa40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:07.404158+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:08.404358+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191677 data_alloc: 218103808 data_used: 10735616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:09.404628+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fa5ec000/0x0/0x4ffc00000, data 0x13caf41/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:10.405102+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:11.405339+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:12.405585+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:13.405836+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191677 data_alloc: 218103808 data_used: 10735616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:14.406086+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.459855080s of 12.779479027s, submitted: 53
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fa5e9000/0x0/0x4ffc00000, data 0x13cc9a4/0x1494000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:15.406333+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:16.406526+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:17.407013+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:18.407432+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194651 data_alloc: 218103808 data_used: 10735616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:19.407821+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fa5e9000/0x0/0x4ffc00000, data 0x13cc9a4/0x1494000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:20.408173+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:21.408498+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:22.408776+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 130 ms_handle_reset con 0x5630e7464c00 session 0x5630e8cb92c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:23.409146+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197625 data_alloc: 218103808 data_used: 10735616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:24.409292+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fa5e6000/0x0/0x4ffc00000, data 0x13ce521/0x1497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:25.409613+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:26.410031+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:27.410409+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:28.410700+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197625 data_alloc: 218103808 data_used: 10735616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:29.411088+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fa5e6000/0x0/0x4ffc00000, data 0x13ce521/0x1497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:30.411381+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 12566528 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.520847321s of 16.555543900s, submitted: 11
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:31.411632+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa5e6000/0x0/0x4ffc00000, data 0x13ce521/0x1497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 ms_handle_reset con 0x5630e837f800 session 0x5630e73825a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:32.412093+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:33.412479+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa5e3000/0x0/0x4ffc00000, data 0x13d00f2/0x149a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201079 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:34.412808+0000)
Dec 05 02:27:40 compute-0 systemd[1]: Started libpod-conmon-e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3.scope.
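[Annotation] A single non-Ceph event lands inside the burst: systemd starting a libpod-conmon-<id>.scope unit, i.e. podman spawning a conmon supervisor for a new container. The 64-hex-digit container ID is embedded in the unit name and can be pulled out for cross-referencing with podman ps / podman inspect (the extraction helper is ours; the unit-name layout is standard podman):

```python
import re

unit = ("libpod-conmon-"
        "e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3.scope")
m = re.fullmatch(r"libpod-conmon-([0-9a-f]{64})\.scope", unit)
print(m.group(1)[:12])  # 12-char short ID: e39f87d5b77b
```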
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:35.413037+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa5e3000/0x0/0x4ffc00000, data 0x13d00f2/0x149a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:36.413238+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fa5e3000/0x0/0x4ffc00000, data 0x13d00f2/0x149a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:37.413591+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:38.414057+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201079 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:39.414490+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:40.414748+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:41.415224+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:42.415635+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:43.416006+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:44.416213+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:45.416509+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:46.416876+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:47.417651+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:48.418125+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:49.418517+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:50.418725+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:51.419304+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:52.419731+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:53.420163+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:54.420553+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:55.420981+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:56.421330+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:57.421696+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:58.421945+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:02:59.422277+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:00.422672+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:01.423129+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:02.423485+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:03.423985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:04.424376+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:05.424715+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:06.425119+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:07.425510+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:08.425985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:09.426493+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:10.427451+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:11.427847+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:12.428205+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:13.428611+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:14.429094+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:15.429350+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:16.429840+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:17.430158+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:18.430408+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:19.430833+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:20.431078+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:21.431416+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:22.431777+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:23.432080+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:24.432310+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:25.432589+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:26.433037+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:27.433613+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:28.434008+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:29.435140+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:30.436008+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:31.436303+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:32.436991+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:33.437465+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:34.438319+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:35.439160+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:36.439725+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:37.440314+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:38.440630+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:39.441162+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:40.442247+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:41.442795+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:42.443300+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:43.443590+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:44.444020+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:45.444339+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:46.444733+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:47.445118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:48.445382+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 12550144 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:49.445804+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:50.446195+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:51.446647+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:52.447131+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3000.1 total, 600.0 interval
                                            Cumulative writes: 7369 writes, 28K keys, 7369 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                            Cumulative WAL: 7369 writes, 1658 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 796 writes, 1996 keys, 796 commit groups, 1.0 writes per commit group, ingest: 1.23 MB, 0.00 MB/s
                                            Interval WAL: 796 writes, 362 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:53.447415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:54.447663+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:55.448159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:56.448602+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:57.448823+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:58.449280+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:03:59.449695+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:00.450206+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:01.450527+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:02.450879+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:03.451186+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:04.451573+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:05.451802+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:06.452263+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:07.452684+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:08.453085+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:09.453523+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:10.453780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:11.454329+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:12.454647+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:13.454984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:14.455387+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:15.455616+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:16.456076+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:17.456416+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:18.456767+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:19.457175+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:20.457555+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:21.457995+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:22.458276+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:23.458608+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:24.459071+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:25.459463+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:26.460059+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:27.460451+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:28.460880+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:29.461415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:30.461797+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:31.462036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:32.462430+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:33.462779+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:34.463200+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:35.463645+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:36.464082+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:37.464269+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:38.464708+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:39.465183+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:40.465581+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:41.466147+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:42.466529+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:43.467042+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:44.467415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:45.468083+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:46.468411+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:47.468818+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:48.469276+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:49.469802+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 12541952 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:50.470005+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12533760 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:51.470398+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12533760 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:52.470837+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12533760 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:53.471162+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 12533760 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:54.471537+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:55.472379+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:56.472672+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:57.473061+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:58.473415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:04:59.474358+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:00.474589+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:01.475016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:02.475401+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:03.475700+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:04.476180+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:05.477008+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:06.477298+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:07.477972+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:08.478302+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:09.478711+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:10.479087+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:11.479555+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:12.480139+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:13.480335+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:14.480669+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:15.481060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:16.481355+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:17.481608+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:18.482070+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:19.482462+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204053 data_alloc: 218103808 data_used: 10747904
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:20.482830+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e0000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:21.483246+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:22.483726+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:23.484149+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 12525568 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 173.062698364s of 173.233779907s, submitted: 25
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e98f3400 session 0x5630e86dba40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e8dccc00 session 0x5630e8cc0f00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e8dcd000 session 0x5630e8cc14a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:24.484510+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97673216 unmapped: 12517376 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fa5e1000/0x0/0x4ffc00000, data 0x13d1b55/0x149d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202981 data_alloc: 218103808 data_used: 10743808
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:25.485028+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e98f2000 session 0x5630e9036780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:26.485509+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:27.485956+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:28.486349+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:29.486750+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050933 data_alloc: 218103808 data_used: 3756032
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:30.487133+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:31.487677+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.488055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.488423+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.488754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.792471886s of 10.155592918s, submitted: 55
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92446720 unmapped: 17743872 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051005 data_alloc: 218103808 data_used: 3756032
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.489150+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92479488 unmapped: 17711104 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.490078+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.490474+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.491107+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.491590+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fae6a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050933 data_alloc: 218103808 data_used: 3756032
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.492096+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.492555+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.493164+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.493426+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.493802+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.582604408s of 10.326370239s, submitted: 106
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9032000 session 0x5630e6761e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9032c00 session 0x5630e8cb83c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9033000 session 0x5630e64c7e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fae6a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050353 data_alloc: 218103808 data_used: 3756032
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.494060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 20561920 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e8dcd000 session 0x5630e814b860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.494438+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.494767+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.495047+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.495386+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.495783+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.496211+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.496607+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.497029+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.497399+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.497804+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.498176+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.498518+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.498871+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.499298+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.499673+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.500558+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.501010+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.501444+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.501963+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.502323+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.502696+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.503139+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.503554+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.504179+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.504560+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.505378+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.505610+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.505875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.506175+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.506428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.506786+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.507183+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.507601+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.508016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.508430+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.508755+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.509168+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.509464+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.509780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.510189+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.510584+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.511051+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.511418+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.511982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.512398+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.512824+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.513057+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.513462+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.513872+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.514286+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.514751+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.515130+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.515495+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.516052+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.516468+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.516685+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.517155+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.517389+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.517784+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.518016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.518387+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.518753+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.519128+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.519541+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.519873+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.520365+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.520556+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.520977+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.521299+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.521666+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.522018+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.522252+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.522614+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.523070+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.523431+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.523725+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.524077+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.524438+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.524815+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.525200+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 81.124771118s of 81.432746887s, submitted: 56
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.525452+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 28803072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 132 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 133 ms_handle_reset con 0x5630e9032000 session 0x5630e86d0b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.525766+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 28770304 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.526179+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89841664 unmapped: 28745728 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 ms_handle_reset con 0x5630ea005000 session 0x5630e8c4be00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.526574+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89882624 unmapped: 28704768 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.527065+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.527413+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.527720+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.528188+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.528533+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.528980+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.529298+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.529869+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.530341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.530668+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.531195+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.531523+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.532133+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.532502+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.532837+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.533246+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.533608+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.534154+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.534614+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.535341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.535718+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.536115+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.536449+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.536806+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.537055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.537498+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.537875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.538395+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.538764+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.539206+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.539566+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.540112+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.540507+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.541020+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.541384+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.541754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.542092+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.542437+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.542832+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.543309+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.543660+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.543874+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.544305+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.544556+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.544982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.545292+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.545688+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.546136+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.546536+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.546869+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.547357+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.547780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.548171+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.548353+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.548677+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.549067+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.437492371s of 59.699771881s, submitted: 34
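[editor's note] The _kv_sync_thread line gives idle time over a reporting window plus a submitted-transaction count, so the thread's busy fraction is one line of arithmetic (values copied from the line above):

    idle, window, submitted = 59.437492371, 59.699771881, 34
    busy = 1 - idle / window
    print(f"busy {busy:.2%}, ~{submitted / window:.2f} txns/s")
    # busy 0.44%, ~0.57 txns/s -- an essentially idle kv sync thread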
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 ms_handle_reset con 0x5630e8b23c00 session 0x5630e84970e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 22036480 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061116 data_alloc: 218103808 data_used: 7094272
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
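[editor's note] handle_osd_map reports three things: the epoch range carried by the incoming message ([134,135]), the OSD's current epoch (134), and the range the sender can serve ([1,135]); the OSD then applies the maps it is missing, here just 135. A toy illustration of that catch-up decision, not Ceph's actual code:

    def epochs_to_apply(have, msg_first, msg_last):
        """Toy catch-up rule: apply every map newer than what we have,
        in order, up to the newest epoch in the message."""
        return list(range(max(have + 1, msg_first), msg_last + 1))

    print(epochs_to_apply(134, 134, 135))  # [135]
    print(epochs_to_apply(135, 136, 136))  # [136] -- matches the later log line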
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e837f800 session 0x5630e64c6000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.549408+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 22036480 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:07.549642+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 135 heartbeat osd_stat(store_statfs(0x4facd7000/0x0/0x4ffc00000, data 0x8c8d80/0x996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e8b23c00 session 0x5630e849c960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 21331968 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.549873+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e8dcd000 session 0x5630e64e94a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97353728 unmapped: 21233664 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.550241+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
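[editor's note] _renew_subs pushes the client's subscription set to the monitor, and the target is written in Ceph's entity-address form: a v2 (msgr2) marker, IP, port 3300 (the msgr2 default), and a trailing nonce. A small parser for that layout, as inferred from the line above:

    def parse_entity_addr(addr):
        """Split 'v2:IP:PORT/NONCE' into its parts (layout assumed from
        the log line above)."""
        proto, rest = addr.split(":", 1)
        hostport, nonce = rest.rsplit("/", 1)
        host, port = hostport.rsplit(":", 1)
        return proto, host, int(port), int(nonce)

    print(parse_entity_addr("v2:192.168.122.100:3300/0"))
    # ('v2', '192.168.122.100', 3300, 0)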
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e90ef860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 21389312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9cb5000/0x0/0x4ffc00000, data 0x18ead80/0x19b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.550452+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18ec951/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 21372928 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e848f4a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e7464c00 session 0x5630e848e960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198548 data_alloc: 218103808 data_used: 7102464
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e679bc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.550795+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8dcd000 session 0x5630e679b0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e74bb4a0
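[editor's note] Through this stretch the same few connection pointers (0x5630e8b23c00, 0x5630e8dcd000, 0x5630e9032000, ...) cycle through handle_auth_request challenges and ms_handle_reset callbacks with a fresh session pointer each time, i.e. short-lived peer connections being re-established rather than one connection failing. A sketch that tallies resets per connection pointer from captured lines, to make that churn visible:

    import re
    from collections import Counter

    RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

    def reset_counts(lines):
        """Count ms_handle_reset events per connection pointer."""
        return Counter(m.group(1) for l in lines if (m := RESET_RE.search(l)))

    sample = [
        "osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e679bc20",
        "osd.0 136 ms_handle_reset con 0x5630e8dcd000 session 0x5630e679b0e0",
        "osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e73805a0",
    ]
    print(reset_counts(sample))  # Counter({'0x5630e8b23c00': 2, '0x5630e8dcd000': 1})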
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99090432 unmapped: 19496960 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e8a2d0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.551006+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e98f3400 session 0x5630e62b5860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e73805a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99090432 unmapped: 19496960 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.551462+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8dcd000 session 0x5630e8d38d20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e8492780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99098624 unmapped: 19488768 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.553522+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 19832832 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e64e65a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.554000+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 19824640 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278747 data_alloc: 218103808 data_used: 7106560
Dec 05 02:27:40 compute-0 nova_compute[349548]: 2025-12-05 02:27:40.482 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.554216+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9299000/0x0/0x4ffc00000, data 0x23049d6/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 19824640 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.554619+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.525561333s of 12.098001480s, submitted: 91
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 19734528 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.554862+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 10297344 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.556302+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 9019392 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.556653+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4ad20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 9019392 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401641 data_alloc: 234881024 data_used: 23805952
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.557077+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9295000/0x0/0x4ffc00000, data 0x2306439/0x23d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d38b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 8986624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.557438+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e8cc12c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 8986624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e6b52960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.557614+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e848fe00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9032000 session 0x5630e62b5680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 14827520 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.557858+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.558207+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226068 data_alloc: 234881024 data_used: 11317248
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.558426+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.558998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 14778368 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.559250+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 14753792 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.559657+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.560090+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.560361+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.560581+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.561130+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.561502+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.561868+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.562123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.562385+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.564222+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.566120+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.568264+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.569063+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.571343+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.573494+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.574450+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e672cb40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e8497e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e73f2f00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e6760780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.943056107s of 27.035001755s, submitted: 27
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916000 session 0x5630e64e63c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e900dc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e8d392c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.576035+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e8c4c780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e79ead20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.576372+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356478 data_alloc: 234881024 data_used: 21700608
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.576626+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.576850+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f97d2000/0x0/0x4ffc00000, data 0x1dc8459/0x1e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.577259+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916400 session 0x5630ea57d2c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 11485184 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.577525+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e79e3e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 11444224 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.577770+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380426 data_alloc: 234881024 data_used: 21864448
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e700de00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e6b52d20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 11599872 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.577962+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x20b7459/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 11558912 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.578191+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20bc48c/0x2192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94d2000/0x0/0x4ffc00000, data 0x20c648c/0x219c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 11255808 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.578419+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 9879552 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.578819+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 7479296 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.579039+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1433605 data_alloc: 251658240 data_used: 27631616
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 6709248 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e73832c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.579422+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916800 session 0x5630e6456f00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.604548454s of 12.920574188s, submitted: 49
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94d2000/0x0/0x4ffc00000, data 0x20c648c/0x219c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 10223616 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.579725+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c86000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 10215424 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.580073+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 10207232 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.580295+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 6463488 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.580465+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399972 data_alloc: 234881024 data_used: 21270528
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f93b0000/0x0/0x4ffc00000, data 0x21e5449/0x22b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 6299648 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.580839+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.581129+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.581482+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92eb000/0x0/0x4ffc00000, data 0x22a2449/0x2375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.581861+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92eb000/0x0/0x4ffc00000, data 0x22a2449/0x2375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.582608+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424694 data_alloc: 234881024 data_used: 21520384
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.583072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.583355+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 6815744 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.178137779s of 10.709356308s, submitted: 124
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.583608+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.583948+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.584275+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415802 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.584568+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.585061+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.585308+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.585640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.586072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415802 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.586423+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 podman[474172]: 2025-12-05 02:27:40.489244078 +0000 UTC m=+0.187878012 container init e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.586673+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.586984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.587381+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.986150742s of 12.017604828s, submitted: 4
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.587655+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415670 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.588029+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.588378+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.588716+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 7766016 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.589064+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.589503+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415978 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.589809+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.590221+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.590748+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.591115+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.591454+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415978 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.591683+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.592160+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.032866478s of 13.056352615s, submitted: 3
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.592533+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.593027+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.593392+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416246 data_alloc: 234881024 data_used: 21524480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.593814+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e64c7860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917400 session 0x5630e86dbe00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917c00 session 0x5630e64e7c20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 7766016 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9033c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9033c00 session 0x5630e73f2780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d0000/0x0/0x4ffc00000, data 0x22ca449/0x239d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8cc0000
Dec 05 02:27:40 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.594056+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e8a2d0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.594537+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.595213+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.596138+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505841 data_alloc: 234881024 data_used: 21512192
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9032000 session 0x5630e86dbc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630ea57cb40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.596351+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 23126016 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917400 session 0x5630e6b53c20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.596635+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 23126016 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.596856+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 23117824 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.861010551s of 11.164711952s, submitted: 59
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.597124+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 23117824 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3000 session 0x5630e64c6b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f2000 session 0x5630e79e3860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.597428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 25010176 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413893 data_alloc: 234881024 data_used: 17195008
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e8c86780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.597792+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 25534464 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.598128+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 25526272 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630e90ef680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.598563+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917c00 session 0x5630e90ef860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 25526272 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.598945+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f2000 session 0x5630e90ee780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3000 session 0x5630e90efe00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.599131+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414291 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.599328+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8f2f000/0x0/0x4ffc00000, data 0x266b498/0x273f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.599687+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25468928 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.600036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25468928 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630e6776780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e90ee1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9033000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.266552925s of 10.460735321s, submitted: 37
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.600244+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9033000 session 0x5630e90eed20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.600560+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.601036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.601406+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.602030+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.602287+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.602739+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.603235+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.603635+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.604093+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.604480+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.605090+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.605501+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.605933+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.609404+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.609639+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.609853+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.610077+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.610349+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.610722+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.611060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.611424+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.611789+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.612223+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.612632+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.385606766s of 24.469984055s, submitted: 18
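
The "_kv_sync_thread utilization" line samples how long BlueStore's RocksDB sync thread sat idle over the window and how much work it submitted (assuming "submitted" counts transaction batches, which is what the name suggests). For the line above that works out to well under one percent busy:

    # Busy fraction and per-batch cost for the _kv_sync_thread line above.
    idle, window, submitted = 24.385606766, 24.469984055, 18
    busy = window - idle
    print(f"busy {busy * 1000:.1f} ms of {window:.1f} s "
          f"({busy / window:.2%}), "
          f"~{busy / submitted * 1000:.1f} ms per submitted batch")

About 84.4 ms of work in a 24.5 s window (0.34% busy), roughly 4.7 ms per batch: apart from heartbeats and the map churn further down, the OSD is essentially idle here.
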
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.612978+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.613386+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351520 data_alloc: 234881024 data_used: 18911232
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.613694+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.614183+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.614519+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.614968+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.615238+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351680 data_alloc: 234881024 data_used: 18915328
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.615562+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.617254+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.617568+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.617983+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.618228+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351680 data_alloc: 234881024 data_used: 18915328
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.618588+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.619016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.619379+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.619666+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.620092+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.105096817s of 17.149562836s, submitted: 19
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350429 data_alloc: 234881024 data_used: 18915328
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.620432+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.620855+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e84923c0
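
The "handle_osd_map" pair above is a map catch-up in miniature: the peer offered epochs [138,138] while the OSD reported "i have 137"; after applying the new map the second line reports "i have 138", the mon subscriptions are renewed, and sessions bound to the old epoch start getting reset. A toy version of the bookkeeping (generic logic for illustration, not Ceph's implementation):

    # Toy model of the "[first,last], i have N" catch-up decision above.
    def epochs_to_apply(have: int, first: int, last: int) -> range:
        """Epochs applicable in order from an incoming [first, last] batch."""
        if last <= have:         # batch carries nothing new
            return range(0)
        if first > have + 1:     # gap: older maps would have to be fetched
            return range(0)
        return range(have + 1, last + 1)

    print(list(epochs_to_apply(137, 138, 138)))  # [138], as in the log
    print(list(epochs_to_apply(138, 138, 138)))  # [], already current
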
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.621351+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.621729+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.622159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356379 data_alloc: 234881024 data_used: 18923520
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.622560+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.623376+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.623835+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.624205+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.624600+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356379 data_alloc: 234881024 data_used: 18923520
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.624924+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f3000 session 0x5630e8a2dc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917000 session 0x5630e8a2c5a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ea005000 session 0x5630e90ef4a0
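
In bursts like the one above, each "handle_auth_request added challenge on 0x..." is followed by an "ms_handle_reset con 0x..." for the same connection address: peers are reconnecting after the epoch change, each new connection is issued an auth challenge, and the superseded session on that connection is torn down. When reading a saved copy of this journal, the pairing is easy to confirm by grouping events per connection address ("osd.log" below is a placeholder path, not a file named in this log):

    import re
    from collections import defaultdict

    # Group challenge/reset events by connection address to see the pairing.
    events = defaultdict(list)
    pat = re.compile(r"(added challenge on|ms_handle_reset con) (0x[0-9a-f]+)")
    with open("osd.log") as f:              # placeholder: saved journal text
        for line in f:
            if (m := pat.search(line)):
                events[m.group(2)].append(m.group(1))

    for con, evs in sorted(events.items()):
        print(con, "->", evs)
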
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.625264+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e62b43c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e90ee5a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.625555+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e73f3680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.139289856s of 13.231978416s, submitted: 12
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e900c000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e700da40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112369664 unmapped: 25731072 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.625763+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f3000 session 0x5630e6760960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4cd20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e672d4a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e8496960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e679b860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917000 session 0x5630e64e63c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.626109+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146508 data_alloc: 218103808 data_used: 7118848
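
Note how the reset burst coincides with the caches draining: between the "_resize_shards" sample before the burst and the one above, data_used falls from 18,923,520 to 7,118,848 bytes and data_alloc from 224 MiB to 208 MiB, while "tune_memory" shows mapped heap dropping from 112,844,800 to 105,979,904 bytes. The deltas, for reference:

    # Deltas between the _resize_shards samples around the reset burst.
    before = dict(data_alloc=234881024, data_used=18923520, meta_used=1356379)
    after  = dict(data_alloc=218103808, data_used=7118848,  meta_used=1146508)
    for key in before:
        delta = after[key] - before[key]
        print(f"{key:10s} {delta / 2**20:+8.2f} MiB")

Roughly 16 MiB of data-cache allocation and 11 MiB of cached data released around the time the old sessions were reset.
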
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.626455+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e6777860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e617a3c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.626835+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e62a0960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e8497680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.627045+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.627400+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.627762+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146508 data_alloc: 218103808 data_used: 7118848
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.628215+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.628588+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.629113+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.629939+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 32104448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.630256+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8cf4400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.733372688s of 12.021333694s, submitted: 56
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 30875648 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.630815+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8cf4400 session 0x5630e64e7c20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185235 data_alloc: 218103808 data_used: 7671808
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa98a000/0x0/0x4ffc00000, data 0xc0efb0/0xce4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4cf00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa98a000/0x0/0x4ffc00000, data 0xc0efb0/0xce4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.631204+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.631597+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.632002+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e8c4c5a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.632354+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa780000/0x0/0x4ffc00000, data 0xe18fe9/0xeee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e679b860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.632703+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185571 data_alloc: 218103808 data_used: 7684096
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e679bc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 31326208 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e679be00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.633081+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.633431+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.633717+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 31285248 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.634084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 31285248 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.634315+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192468 data_alloc: 218103808 data_used: 8331264
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.634511+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.634780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.635033+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.635277+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.635589+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.635982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.636390+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.636722+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.636985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.637227+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.637550+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.637959+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.638279+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.638987+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.639407+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.639797+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.640317+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.640517+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.640821+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.641193+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.641441+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.641739+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.723632812s of 32.954315186s, submitted: 40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.642022+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30097408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.643492+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30097408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.644143+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232924 data_alloc: 234881024 data_used: 10022912
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.644533+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa57d000/0x0/0x4ffc00000, data 0x101801b/0x10ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.645298+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.645595+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 30564352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.645766+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 26361856 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.646044+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 26312704 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304974 data_alloc: 234881024 data_used: 10981376
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.646237+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ca9000/0x0/0x4ffc00000, data 0x18ee01b/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.646545+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.646953+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.647252+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ca9000/0x0/0x4ffc00000, data 0x18ee01b/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.647388+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312330 data_alloc: 234881024 data_used: 10944512
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.647734+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.991629601s of 13.776041031s, submitted: 141
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.648046+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.648394+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.648779+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c8c000/0x0/0x4ffc00000, data 0x190b01b/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.649021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 podman[474172]: 2025-12-05 02:27:40.509388453 +0000 UTC m=+0.208022367 container start e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313406 data_alloc: 234881024 data_used: 11014144
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.649331+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.649784+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c8c000/0x0/0x4ffc00000, data 0x190b01b/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.650231+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.650634+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.650950+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314082 data_alloc: 234881024 data_used: 11014144
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x191c01b/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.651271+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.651689+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.652110+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.975809097s of 12.017213821s, submitted: 6
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x191c01b/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.652825+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8cc0780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.653060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 24625152 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346354 data_alloc: 234881024 data_used: 11014144
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.653436+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e86daf00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e900d0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e90363c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 24625152 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e67612c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99f6000/0x0/0x4ffc00000, data 0x1ba101b/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e8c4c1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e6b53e00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e6b52960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.653763+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e900c3c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e73805a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.654110+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.654531+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.654787+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401728 data_alloc: 234881024 data_used: 11014144
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.655101+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e700c5a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.655535+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e7383c20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.656028+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e64e7680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.383359909s of 10.606524467s, submitted: 36
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.656236+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e9024b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e8cc0d20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.656497+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 24043520 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8a2c780
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401728 data_alloc: 234881024 data_used: 11014144
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.656719+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 24576000 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e97cac00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e97cac00 session 0x5630e79ea1e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9456000/0x0/0x4ffc00000, data 0x213e05e/0x2218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1,0,0,2])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9229400 session 0x5630e64e6b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.656952+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcdc00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 24420352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.657168+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 24420352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.657439+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 23248896 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.657693+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438654 data_alloc: 234881024 data_used: 15085568
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.658078+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.658444+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.658865+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.659348+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 21774336 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.659587+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 21774336 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450014 data_alloc: 234881024 data_used: 16699392
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.660000+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.337281227s of 12.433724403s, submitted: 17
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.660371+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.660704+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.661084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.661442+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.661653+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.662064+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.662348+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.662671+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.663032+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.663312+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.663516+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.663933+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 21700608 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.664197+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 21700608 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.664417+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.664659+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.664917+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.665081+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.665251+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.665456+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.665746+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.665951+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.666267+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.666485+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 21676032 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.666818+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.989109039s of 24.041637421s, submitted: 13
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 19341312 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493354 data_alloc: 234881024 data_used: 16728064
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.667016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 118226944 unmapped: 19873792 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.667736+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8308000/0x0/0x4ffc00000, data 0x327e05e/0x3358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,4,3,2,2])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 15941632 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f79a4000/0x0/0x4ffc00000, data 0x3be005e/0x3cba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.667940+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 16220160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.668194+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 16531456 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.668352+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 16498688 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1691062 data_alloc: 234881024 data_used: 18505728
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.668699+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3ca105e/0x3d7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 16498688 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.669112+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3ca105e/0x3d7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 16457728 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.669443+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 16531456 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.670055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e7301860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 18153472 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917400 session 0x5630e64e65a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.670223+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8090000/0x0/0x4ffc00000, data 0x285d05e/0x2937000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518456 data_alloc: 234881024 data_used: 16740352
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.670538+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.671165+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.671540+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.288977623s of 13.481291771s, submitted: 293
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e79eb4a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e90372c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.671970+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 21291008 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630ea57dc20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.672207+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f996f000/0x0/0x4ffc00000, data 0x1c1bffc/0x1cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 podman[474172]: 2025-12-05 02:27:40.513980146 +0000 UTC m=+0.212614080 container attach e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391140 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.672528+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.674746+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.675154+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.675501+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x1c1bfca/0x1cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.675994+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391140 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x1c1bfca/0x1cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.676621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.677408+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.678043+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.678326+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.678564+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391184 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.679155+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.679501+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.679785+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.680206+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.680578+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391184 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.681057+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.681481+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.681965+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.682326+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.682699+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.034488678s of 21.216693878s, submitted: 34
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391680 data_alloc: 234881024 data_used: 13504512
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 21258240 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.682858+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.683223+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.683541+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.683998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8dcd800 session 0x5630e8cc0960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8dcdc00 session 0x5630e8c4b0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 21184512 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.684211+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d381e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.684423+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.685055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.685382+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.685828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.686160+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.686536+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.687027+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.687314+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.687651+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.688084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.688486+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.688997+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.689171+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.689397+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.689633+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.689863+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.690214+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.690440+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.690665+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.690971+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.691212+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.691496+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.691706+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.692067+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.692340+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9676 writes, 36K keys, 9676 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9676 writes, 2587 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2307 writes, 8716 keys, 2307 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
                                            Interval WAL: 2307 writes, 929 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.692844+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.693182+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.693514+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.693979+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.694313+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.694548+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.694750+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: mgrc ms_handle_reset ms_handle_reset con 0x5630e8b22800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:27:40 compute-0 ceph-osd[206647]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: get_auth_request con 0x5630e9917400 auth_method 0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: mgrc handle_mgr_configure stats_period=5
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.695038+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.695421+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.695645+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.696095+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.696381+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.696735+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.697759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8867000 session 0x5630e64c7a40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.698184+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.698550+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.698973+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.699164+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.699511+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.699826+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.700043+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.700229+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.700482+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.700828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.701054+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.701383+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.701777+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.702026+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.702425+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.702947+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.703381+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.703754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.704204+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.704567+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.705089+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.705384+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.705718+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.706116+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.706453+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.706827+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 23904256 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.707117+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.707786+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.708239+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.708621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.709037+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.709353+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.709766+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.710386+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.710775+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.710994+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.711495+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.711759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.712123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.712526+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.712716+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.712961+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.713282+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.713735+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.714006+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.714247+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.714558+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.714874+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.715341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.715711+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.716083+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.716436+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.716814+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.717174+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 23937024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.717559+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.126289368s of 99.387001038s, submitted: 48
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 21585920 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e849cd20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.717768+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.717992+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa2bc000/0x0/0x4ffc00000, data 0x12ddf87/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261787 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.718290+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.718604+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.718841+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e849c960
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.719141+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9229400 session 0x5630e64e8d20
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.719494+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ebdfa000 session 0x5630e9025860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d38b40
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264741 data_alloc: 218103808 data_used: 7757824
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.719796+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa2bc000/0x0/0x4ffc00000, data 0x12ddf87/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.720045+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.720337+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.720541+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.720811+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292901 data_alloc: 234881024 data_used: 11685888
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.721014+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.721232+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.721507+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.721664+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.721912+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.722198+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.722503+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.722729+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.722984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.723222+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.723445+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.723780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.724021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9033400 session 0x5630ea57d0e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.724370+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.724596+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.724864+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.725152+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.725521+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.725992+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.726329+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.751306534s of 31.864397049s, submitted: 10
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304101 data_alloc: 234881024 data_used: 13115392
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 26566656 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.726510+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 26542080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.726856+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 26484736 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.727270+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.727616+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.728026+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304581 data_alloc: 234881024 data_used: 13127680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.728289+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.728615+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.729012+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.729247+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.729700+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304581 data_alloc: 234881024 data_used: 13127680
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.730019+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.730426+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.707974434s of 12.498046875s, submitted: 132
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 25247744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.730940+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25567232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.731260+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25567232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.731983+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.732246+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa018000/0x0/0x4ffc00000, data 0x1580f87/0x1655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.732653+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.733037+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.733253+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.733506+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.733985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.734382+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.734735+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.735053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:58.735429+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.735764+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.736058+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.736278+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.736517+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.736713+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.736998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.737348+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.737702+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.738172+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.738458+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.738715+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.739147+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.739577+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.739926+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.740184+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.740419+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.740795+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.741130+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.741539+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.742043+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.742522+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.742793+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.743219+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.743443+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.743813+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.744461+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.745011+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.745333+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.745537+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.746733+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.749040+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.749693+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.749933+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.750229+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.750497+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.750690+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.751121+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.751482+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.751709+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.752087+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.752499+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341847 data_alloc: 234881024 data_used: 13492224
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.752732+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.753099+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.753325+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.702072144s of 56.839138031s, submitted: 25
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.753617+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.754056+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340915 data_alloc: 234881024 data_used: 13492224
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.754414+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.754806+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.755036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.755389+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.755840+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340915 data_alloc: 234881024 data_used: 13492224
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.756271+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.756466+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.756734+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.757100+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.757458+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341075 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.757790+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.758195+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.758632+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.759129+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.855074883s of 15.871548653s, submitted: 2
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.759556+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.759781+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.760012+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.760326+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.760640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.761178+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.761557+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.761973+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.762404+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.762740+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.763140+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.763524+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.764007+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.764389+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.313895226s of 14.336582184s, submitted: 2
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.764682+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.765051+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.765645+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.766116+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.766564+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.766843+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.767311+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.767592+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.768028+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.768271+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.768852+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.769281+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.769713+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.770194+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.770671+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.771180+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.771606+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.772104+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.772596+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.773203+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.773642+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.774065+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.774377+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.774584+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.774799+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.775078+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.775430+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.775676+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.775977+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.776281+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.776493+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.776729+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.777071+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.777421+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.777872+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.778251+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.778556+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.778844+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.779267+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.779803+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.780021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.780288+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.780606+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.780942+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.781128+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.781302+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.781643+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.782082+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.782442+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.782715+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.782984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.783207+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.783441+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.783711+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.784164+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.784500+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.784706+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.784989+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.785214+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.785587+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.786044+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.786449+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.787131+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.787508+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.787975+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.788371+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.788749+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.789076+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.789428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.789747+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.790076+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.790290+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.790529+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.790966+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.791297+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.791809+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.792313+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.793082+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.793291+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.793564+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.793812+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.794054+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.794325+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.794632+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.796080+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.796604+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.797126+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.797562+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.797998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.798292+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.798626+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.798875+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.799401+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.799755+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.800066+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.800361+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.800954+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.801330+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.801774+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.802135+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.802333+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.802692+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.802916+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.803113+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.803368+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.803566+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.804011+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.804307+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.805079+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.805345+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.805573+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.806015+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.806418+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.806680+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.807290+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.807508+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.807829+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.808119+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.808474+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.808822+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.808977+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.809257+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.809489+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.809803+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.810151+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.810623+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.811060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.811500+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.811877+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.812359+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.812680+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.813049+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.813224+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.813611+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.814061+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.814496+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.814950+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.815334+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.815852+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.816232+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.816688+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.817099+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.817435+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.817853+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.818308+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.818673+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.819133+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.819481+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.819736+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.820080+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.820428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.820755+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.821080+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.821466+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.821862+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.822375+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.822972+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.823396+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.823769+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.824119+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.824346+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.824714+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.825046+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.825395+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.825651+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.826023+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.826354+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.826552+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.826761+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.827091+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.827532+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.827739+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.828048+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.828335+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.828624+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.828856+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.829159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.829507+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.829759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.830123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.830463+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.830808+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.831142+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.831547+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.831975+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.832378+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.832804+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.833208+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.833636+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.834033+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.834495+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.834754+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.835038+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.835419+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.835621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.835998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.836303+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.836575+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.836813+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.837150+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.837486+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.837794+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.838047+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.838275+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.838676+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.838982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.839291+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.839752+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.840021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.840348+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.840698+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.840950+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.841272+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.841666+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.841986+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.842369+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.842628+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.843005+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.843393+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.843711+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.844123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.844447+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.844844+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.845192+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.845563+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.845880+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.846358+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.846789+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.847148+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.847547+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.847772+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.848304+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.848514+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.848983+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.849329+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.849748+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.850143+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.850506+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.850866+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.851149+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.851520+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.851871+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.852265+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.853084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.853321+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.853603+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.854221+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.854647+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.854964+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.855205+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.855591+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.856072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 251.589416504s of 251.610870361s, submitted: 3
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.856393+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.856707+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.856982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.857341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.857661+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.858033+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.858362+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.858724+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.859106+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.859385+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.859759+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.860159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.860397+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.860828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.861272+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.861649+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.862029+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.862410+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.862748+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.863321+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.863696+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.864118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.864640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.865072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.865463+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.865997+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.866405+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.866998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.867361+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.867765+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.868189+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.868620+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.869058+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.869415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.869985+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.870392+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.870684+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.871191+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.871567+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24035328 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.872041+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.872475+0000)
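
At this scale the burst is easier to audit once folded: the same few-line cycle repeats for minutes, and the only values that move are the expiry stamp and the occasional mapped/unmapped step (the drop from 117219328 to 116948992 just above is presumably the allocator returning pages, not a leak). A sort | uniq -c style fold with the volatile numbers masked out (the masking rules are mine) reduces the roughly 500 lines of this section to about six distinct messages with repeat counts:

    import re
    from collections import Counter

    def normalize(line):
        """Grouping key: strip the syslog prefix, then mask hex and
        numeric fields so lines differing only in counters compare
        equal."""
        msg = line.split("]: ", 1)[-1]
        msg = re.sub(r"0x[0-9a-f]+", "HEX", msg)
        return re.sub(r"\d[\d.:T+-]*", "N", msg)

    def summarize(lines):
        """Count occurrences per normalized message, like sort | uniq -c."""
        return Counter(normalize(l) for l in lines).most_common()
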
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.873582+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.873817+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.874181+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.874653+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.875116+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.875553+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.876237+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.876803+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.877144+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.877627+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.878490+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.878800+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.879241+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.879667+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.880084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.880593+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.881055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.881388+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.881736+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.882129+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.882360+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.882679+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.882930+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.883232+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.883548+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.883828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.884013+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.884287+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.884611+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.885273+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.885610+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.886187+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.886546+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.887072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.887468+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.887981+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.888457+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.888800+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.889870+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.890251+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.890463+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.890842+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.891167+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.891592+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.892146+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.892533+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 24256512 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.893053+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 24256512 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.893422+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.893802+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.894118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.894430+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.894783+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.895160+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.895768+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.896151+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.896590+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.897064+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.897358+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.897712+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.898177+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.898662+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.899036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.899394+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.899734+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.900084+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.900441+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.900792+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.901122+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.901500+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.902081+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24231936 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.902483+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.902848+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.903238+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.903602+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.904048+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.904420+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.904685+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.905140+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.905468+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.905827+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.906180+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.906566+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.907005+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.907407+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.907806+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.908226+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.908575+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.908998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.909618+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.909993+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.910224+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.911245+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.912465+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.912970+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.913454+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.914226+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.914556+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.914998+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.915409+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.915780+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.916159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.917021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.917349+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.918149+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.918640+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.919044+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.919510+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2749 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 337 writes, 730 keys, 337 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                            Interval WAL: 337 writes, 162 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.920036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.920442+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.920829+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.921252+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.921585+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.922117+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.922511+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.922845+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.923237+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.923569+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.924038+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.924369+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.924801+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.925443+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.925840+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.926198+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.926549+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.926993+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.927529+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.927865+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.928341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.928826+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.929161+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.929525+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.929982+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.930481+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.931014+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.931376+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.931806+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.932162+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.932533+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.932732+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.933055+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.933443+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.933984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.934343+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.935143+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.935492+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.936603+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.937149+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.938083+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.938491+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.939118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.939448+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.939828+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.940328+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.940746+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.941141+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.941547+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.942072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.942397+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.942839+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.943222+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.943604+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.944003+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.944299+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.944714+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.945089+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.945301+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.945758+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.946199+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.946473+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.946724+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ea005000 session 0x5630e90eef00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.946967+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 211.850372314s of 211.858688354s, submitted: 1
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ebdfa000 session 0x5630e8d390e0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.947379+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.947683+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.948060+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.948527+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.949428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.949822+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.950176+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.950420+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.950807+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.951196+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.951679+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.952112+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.952681+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e73825a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8cc0000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.471276283s of 13.519330025s, submitted: 11
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.953298+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e7381860
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x8f7f77/0x9cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.953662+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.954152+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183069 data_alloc: 218103808 data_used: 7172096
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.954460+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.954827+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.955247+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.955735+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.956145+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183069 data_alloc: 218103808 data_used: 7172096
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.956570+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.957134+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.957528+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.972305298s of 11.028164864s, submitted: 11
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.958064+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 139 ms_handle_reset con 0x5630e9032400 session 0x5630e8d38f00
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf54/0x9a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28663808 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.958472+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240857 data_alloc: 218103808 data_used: 7180288
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.958764+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 37044224 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 139 ms_handle_reset con 0x5630ea005000 session 0x5630e900d2c0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.959172+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 37036032 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 141 ms_handle_reset con 0x5630ebdfa000 session 0x5630e8c4c5a0
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.959529+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.959984+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.960346+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193709 data_alloc: 218103808 data_used: 7180288
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.960785+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.961328+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.961721+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.962173+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.962662+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.521178246s of 11.814188957s, submitted: 43
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196011 data_alloc: 218103808 data_used: 7180288
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.963073+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.963503+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.963847+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 37584896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.964251+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 37527552 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.964729+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195131 data_alloc: 218103808 data_used: 7180288
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.965262+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.965762+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.966211+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.966650+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.967106+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195131 data_alloc: 218103808 data_used: 7180288
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.967483+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.967988+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.968427+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.968774+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.969188+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.969524+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.970062+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.970438+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.970825+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.971292+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.971688+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.972118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.972621+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.973159+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.973494+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.973799+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.974016+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.974197+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.974572+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.975015+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.975415+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.975595+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.976021+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.976384+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.976676+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.977152+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.977455+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.977852+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.978243+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.978659+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.979174+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.979541+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.980040+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.980390+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.980744+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.981250+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.981694+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.982123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.982545+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.983030+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.983417+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.983768+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.984118+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.984599+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.985036+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.985371+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.985979+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.986449+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.986950+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.987338+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.987817+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.988262+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.988668+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.989190+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.989665+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.990169+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:39.990658+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.991089+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.991466+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.991817+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.992228+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.992629+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.993072+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.993428+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.993809+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.994066+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.994456+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.994767+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.995228+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.995706+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.996069+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.996251+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.996603+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.996824+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.997101+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.997312+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.998123+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:00.998341+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:01.998662+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:02.998853+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:03.999095+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:04.999281+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:05.999472+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:06.999960+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
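
These paired do_command entries ('config diff', 'config show', 'counter dump', 'counter schema') record commands arriving on the OSD's admin socket, with the second line of each pair noting the reply size the handler saw. The same queries can be issued by hand through the ceph CLI; a sketch, with the daemon name assumed to be this host's osd.0:

    # asok_query.py -- replaying one of the admin-socket commands recorded
    # above; "osd.0" and local socket access are assumptions.
    import json, subprocess

    raw = subprocess.check_output(["ceph", "daemon", "osd.0", "config", "diff"])
    diff = json.loads(raw)
    print(json.dumps(diff, indent=2)[:400])  # trimmed for display
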
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112279552 unmapped: 37371904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}'
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:08.000271+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:27:40 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 37429248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:09.000444+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:27:40 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:27:40 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:27:40 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 37085184 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:27:40 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:10.000679+0000)
Dec 05 02:27:40 compute-0 ceph-osd[206647]: do_command 'log dump' '{prefix=log dump}'
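
A 'log dump' command asks the daemon to flush its in-memory debug log, which is consistent with the shape of everything above: hundreds of entries share the 02:27:40 journal timestamp while the timestamps embedded in the messages climb steadily from 02:25:53 to 02:27:09, i.e. the journal stamped the whole buffered backlog at flush time. A quick check of the initial gap (offset normalized to +00:00 for fromisoformat):

    # log_skew.py -- gap between the journal timestamp and the earliest
    # embedded timestamp in the flushed run above.
    from datetime import datetime, timezone

    journal = datetime(2025, 12, 5, 2, 27, 40, tzinfo=timezone.utc)
    embedded = datetime.fromisoformat("2025-12-05T02:25:53.971688+00:00")
    print(journal - embedded)   # ~1 min 46 s of backlog at the start
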
Dec 05 02:27:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 05 02:27:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329746944' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 02:27:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 02:27:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 02:27:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 02:27:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='client.15687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2329746944' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 02:27:41 compute-0 ceph-mon[192914]: pgmap v2364: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 02:27:41 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 02:27:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 05 02:27:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750428465' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
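
The monitor's audit channel logs each command at dispatch (and, for admin-socket commands, again at completion). The "osd tree" query being dispatched here is the ordinary CLI call, reproducible from any node holding a client keyring; a sketch assuming client.admin credentials are in place:

    # osd_tree.py -- the same "osd tree" query the audit channel records,
    # issued through the CLI.
    import json, subprocess

    tree = json.loads(subprocess.check_output(
        ["ceph", "osd", "tree", "--format", "json"]))
    for node in tree["nodes"]:
        print(node["id"], node["type"], node["name"], node.get("status", ""))
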
Dec 05 02:27:41 compute-0 loving_grothendieck[474191]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:27:41 compute-0 loving_grothendieck[474191]: --> relative data size: 1.0
Dec 05 02:27:41 compute-0 loving_grothendieck[474191]: --> All data devices are unavailable
Dec 05 02:27:41 compute-0 systemd[1]: libpod-e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3.scope: Deactivated successfully.
Dec 05 02:27:41 compute-0 systemd[1]: libpod-e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3.scope: Consumed 1.065s CPU time.
Dec 05 02:27:41 compute-0 podman[474172]: 2025-12-05 02:27:41.656136942 +0000 UTC m=+1.354770856 container died e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 05 02:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8898a5418aae4cbda98cdc08571a1c43cfe0c51f636989afb233d7b9d9aef747-merged.mount: Deactivated successfully.
Dec 05 02:27:41 compute-0 podman[474172]: 2025-12-05 02:27:41.756403991 +0000 UTC m=+1.455037905 container remove e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_grothendieck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:27:41 compute-0 systemd[1]: libpod-conmon-e39f87d5b77bab670aa984da1c4501725284790005ee33a601bd79da23e4b9c3.scope: Deactivated successfully.
Dec 05 02:27:41 compute-0 sudo[473950]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:41 compute-0 sudo[474383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:41 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15701 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:41 compute-0 sudo[474383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:41 compute-0 sudo[474383]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:42 compute-0 sudo[474412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:27:42 compute-0 sudo[474412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:42 compute-0 sudo[474412]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:42 compute-0 sudo[474449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:42 compute-0 sudo[474449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:42 compute-0 sudo[474449]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:42 compute-0 sudo[474502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:27:42 compute-0 sudo[474502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
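
The sudo line above shows cephadm wrapping a ceph-volume call in a short-lived container to enumerate LVM-backed OSDs; the command's JSON output maps OSD ids to their logical volumes. A consumer sketch (the key names follow ceph-volume's JSON layout as an assumption; verify against your release):

    # lvm_list.py -- consuming `ceph-volume lvm list --format json`, the
    # call cephadm wraps above; run where ceph-volume is installed.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(osd_id, dev.get("type"), dev.get("path"))
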
Dec 05 02:27:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 05 02:27:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385813320' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 02:27:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/750428465' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 02:27:42 compute-0 ceph-mon[192914]: from='client.15701 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1385813320' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 02:27:42 compute-0 podman[474614]: 2025-12-05 02:27:42.713159388 +0000 UTC m=+0.067916991 container create 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:27:42 compute-0 systemd[1]: Started libpod-conmon-337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a.scope.
Dec 05 02:27:42 compute-0 podman[474614]: 2025-12-05 02:27:42.681030156 +0000 UTC m=+0.035787779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:42 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:42 compute-0 podman[474614]: 2025-12-05 02:27:42.829152504 +0000 UTC m=+0.183910127 container init 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:27:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:42 compute-0 nova_compute[349548]: 2025-12-05 02:27:42.836 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:42 compute-0 podman[474614]: 2025-12-05 02:27:42.845072235 +0000 UTC m=+0.199829828 container start 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:27:42 compute-0 sleepy_mclean[474632]: 167 167
Dec 05 02:27:42 compute-0 systemd[1]: libpod-337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a.scope: Deactivated successfully.
Dec 05 02:27:42 compute-0 podman[474614]: 2025-12-05 02:27:42.855013884 +0000 UTC m=+0.209771467 container attach 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:27:42 compute-0 conmon[474632]: conmon 337c5d27d13bedd6951c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a.scope/container/memory.events
Dec 05 02:27:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 05 02:27:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801890838' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 02:27:42 compute-0 podman[474639]: 2025-12-05 02:27:42.92244632 +0000 UTC m=+0.041408392 container died 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-32448b7b983019629ff33b3ca0ccb0970d4df2013002484f4bc2c6642791a3fd-merged.mount: Deactivated successfully.
Dec 05 02:27:42 compute-0 podman[474639]: 2025-12-05 02:27:42.97827728 +0000 UTC m=+0.097239332 container remove 337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_mclean, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:27:42 compute-0 systemd[1]: libpod-conmon-337c5d27d13bedd6951c1fb78e8b90d1fa9f04b7c2c9fb91d1aa91f15dc6f57a.scope: Deactivated successfully.
Dec 05 02:27:43 compute-0 podman[474688]: 2025-12-05 02:27:43.242152056 +0000 UTC m=+0.083157714 container create ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:27:43 compute-0 podman[474688]: 2025-12-05 02:27:43.205274506 +0000 UTC m=+0.046280124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:43 compute-0 systemd[1]: Started libpod-conmon-ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb.scope.
Dec 05 02:27:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 05 02:27:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432210231' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 02:27:43 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d68504a5c7374d879e312ef695e4875a111a9945800d3540ee81ccc154c9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d68504a5c7374d879e312ef695e4875a111a9945800d3540ee81ccc154c9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d68504a5c7374d879e312ef695e4875a111a9945800d3540ee81ccc154c9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670d68504a5c7374d879e312ef695e4875a111a9945800d3540ee81ccc154c9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:43 compute-0 podman[474688]: 2025-12-05 02:27:43.416781772 +0000 UTC m=+0.257787430 container init ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:27:43 compute-0 ceph-mon[192914]: pgmap v2365: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3801890838' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 02:27:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1432210231' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 02:27:43 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 02:27:43 compute-0 podman[474688]: 2025-12-05 02:27:43.449572713 +0000 UTC m=+0.290578341 container start ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 05 02:27:43 compute-0 podman[474688]: 2025-12-05 02:27:43.458679148 +0000 UTC m=+0.299684776 container attach ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:27:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:43 compute-0 systemd[1]: Started Hostname Service.
Dec 05 02:27:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 05 02:27:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857028700' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]: {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     "0": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "devices": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "/dev/loop3"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             ],
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_name": "ceph_lv0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_size": "21470642176",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "name": "ceph_lv0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "tags": {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_name": "ceph",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.crush_device_class": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.encrypted": "0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_id": "0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.vdo": "0"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             },
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "vg_name": "ceph_vg0"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         }
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     ],
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     "1": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "devices": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "/dev/loop4"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             ],
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_name": "ceph_lv1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_size": "21470642176",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "name": "ceph_lv1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "tags": {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_name": "ceph",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.crush_device_class": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.encrypted": "0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_id": "1",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.vdo": "0"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             },
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "vg_name": "ceph_vg1"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         }
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     ],
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     "2": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "devices": [
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "/dev/loop5"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             ],
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_name": "ceph_lv2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_size": "21470642176",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "name": "ceph_lv2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "tags": {
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.cluster_name": "ceph",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.crush_device_class": "",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.encrypted": "0",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osd_id": "2",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:                 "ceph.vdo": "0"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             },
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "type": "block",
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:             "vg_name": "ceph_vg2"
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:         }
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]:     ]
Dec 05 02:27:44 compute-0 laughing_meninsky[474715]: }
Dec 05 02:27:44 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15711 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:44 compute-0 systemd[1]: libpod-ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb.scope: Deactivated successfully.
Dec 05 02:27:44 compute-0 podman[474688]: 2025-12-05 02:27:44.245039752 +0000 UTC m=+1.086045370 container died ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-670d68504a5c7374d879e312ef695e4875a111a9945800d3540ee81ccc154c9b-merged.mount: Deactivated successfully.
Dec 05 02:27:44 compute-0 podman[474688]: 2025-12-05 02:27:44.330746618 +0000 UTC m=+1.171752236 container remove ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:27:44 compute-0 systemd[1]: libpod-conmon-ea6344e474b93d1900ac8b7282e8c989aeaa14bab07b8aa82ba7a709d70e8fdb.scope: Deactivated successfully.
Dec 05 02:27:44 compute-0 sudo[474502]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1857028700' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 02:27:44 compute-0 ceph-mon[192914]: from='client.15711 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:44 compute-0 sudo[474842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:44 compute-0 sudo[474842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:44 compute-0 sudo[474842]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:44 compute-0 sudo[474891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:27:44 compute-0 sudo[474891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:44 compute-0 sudo[474891]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:44 compute-0 sudo[474918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:44 compute-0 sudo[474918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 05 02:27:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2279240648' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 02:27:44 compute-0 sudo[474918]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:44 compute-0 sudo[474946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:27:44 compute-0 sudo[474946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 05 02:27:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2122990425' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.252283774 +0000 UTC m=+0.051445683 container create 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:27:45 compute-0 systemd[1]: Started libpod-conmon-74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18.scope.
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.229453042 +0000 UTC m=+0.028615041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.341022739 +0000 UTC m=+0.140184668 container init 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.353445829 +0000 UTC m=+0.152607738 container start 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:27:45 compute-0 youthful_nobel[475108]: 167 167
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.361427911 +0000 UTC m=+0.160589850 container attach 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:27:45 compute-0 systemd[1]: libpod-74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18.scope: Deactivated successfully.
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.363649756 +0000 UTC m=+0.162811665 container died 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2745ea566f8e9d6234ac128bb037520487d714aa0b90482d56ef6f5ea9f9428-merged.mount: Deactivated successfully.
Dec 05 02:27:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:27:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626765808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:27:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:27:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626765808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:27:45 compute-0 podman[475085]: 2025-12-05 02:27:45.423481521 +0000 UTC m=+0.222643450 container remove 74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:27:45 compute-0 systemd[1]: libpod-conmon-74d5b9627276c00613ac24d770136e2af20c09e09df408ba4ecc057b8ca99e18.scope: Deactivated successfully.
Dec 05 02:27:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2279240648' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 02:27:45 compute-0 ceph-mon[192914]: pgmap v2366: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2122990425' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 02:27:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/626765808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:27:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/626765808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:27:45 compute-0 nova_compute[349548]: 2025-12-05 02:27:45.486 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:45 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15717 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:45 compute-0 podman[475180]: 2025-12-05 02:27:45.629701053 +0000 UTC m=+0.060224298 container create 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 05 02:27:45 compute-0 systemd[1]: Started libpod-conmon-12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb.scope.
Dec 05 02:27:45 compute-0 podman[475180]: 2025-12-05 02:27:45.605543222 +0000 UTC m=+0.036066487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:27:45 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162f6421394e06e76653d983abe21353fc390ba44b4668cb9fd0f956d858a547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162f6421394e06e76653d983abe21353fc390ba44b4668cb9fd0f956d858a547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162f6421394e06e76653d983abe21353fc390ba44b4668cb9fd0f956d858a547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162f6421394e06e76653d983abe21353fc390ba44b4668cb9fd0f956d858a547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:27:45 compute-0 podman[475180]: 2025-12-05 02:27:45.734631998 +0000 UTC m=+0.165155283 container init 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:27:45 compute-0 podman[475180]: 2025-12-05 02:27:45.746537723 +0000 UTC m=+0.177060968 container start 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:27:45 compute-0 podman[475180]: 2025-12-05 02:27:45.751049024 +0000 UTC m=+0.181572309 container attach 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:27:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 05 02:27:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82176333' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15725 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:46 compute-0 ceph-mon[192914]: from='client.15717 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/82176333' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:27:46 compute-0 elegant_allen[475199]: {
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_id": 0,
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "type": "bluestore"
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     },
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_id": 1,
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "type": "bluestore"
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     },
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_id": 2,
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:27:46 compute-0 elegant_allen[475199]:         "type": "bluestore"
Dec 05 02:27:46 compute-0 elegant_allen[475199]:     }
Dec 05 02:27:46 compute-0 elegant_allen[475199]: }
Dec 05 02:27:46 compute-0 systemd[1]: libpod-12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb.scope: Deactivated successfully.
Dec 05 02:27:46 compute-0 systemd[1]: libpod-12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb.scope: Consumed 1.005s CPU time.
Dec 05 02:27:46 compute-0 podman[475180]: 2025-12-05 02:27:46.780323466 +0000 UTC m=+1.210846731 container died 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15727 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-162f6421394e06e76653d983abe21353fc390ba44b4668cb9fd0f956d858a547-merged.mount: Deactivated successfully.
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:46 compute-0 podman[475180]: 2025-12-05 02:27:46.874503228 +0000 UTC m=+1.305026473 container remove 12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_allen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:27:46 compute-0 systemd[1]: libpod-conmon-12da94d4e17ef36949f2eaad0da2fc39e9ef18b1bd87fc23320ea05d83e1b3eb.scope: Deactivated successfully.
Dec 05 02:27:46 compute-0 sudo[474946]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:27:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:27:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c457fbad-6200-453b-9aa3-3ddfe9ca6029 does not exist
Dec 05 02:27:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0cf35ed-32a2-406b-820a-410387462b19 does not exist
Dec 05 02:27:47 compute-0 sudo[475431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:27:47 compute-0 sudo[475431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:47 compute-0 sudo[475431]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:47 compute-0 sudo[475479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:27:47 compute-0 sudo[475479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:27:47 compute-0 sudo[475479]: pam_unix(sudo:session): session closed for user root
Dec 05 02:27:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 05 02:27:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4024151621' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 02:27:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 05 02:27:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/966002622' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 02:27:47 compute-0 nova_compute[349548]: 2025-12-05 02:27:47.839 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='client.15725 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='client.15727 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mon[192914]: pgmap v2367: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4024151621' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/966002622' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15733 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15735 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:27:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 05 02:27:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580577288' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 02:27:49 compute-0 ceph-mon[192914]: from='client.15733 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1580577288' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 02:27:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 05 02:27:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680300619' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 02:27:49 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15741 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:50 compute-0 ceph-mon[192914]: from='client.15735 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:50 compute-0 ceph-mon[192914]: pgmap v2368: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3680300619' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 02:27:50 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15743 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:50 compute-0 nova_compute[349548]: 2025-12-05 02:27:50.492 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 05 02:27:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859856654' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:27:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:51 compute-0 nova_compute[349548]: 2025-12-05 02:27:51.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:51 compute-0 nova_compute[349548]: 2025-12-05 02:27:51.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:27:51 compute-0 nova_compute[349548]: 2025-12-05 02:27:51.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:27:51 compute-0 nova_compute[349548]: 2025-12-05 02:27:51.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:27:51 compute-0 ceph-mon[192914]: from='client.15741 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:51 compute-0 ceph-mon[192914]: from='client.15743 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:27:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3859856654' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:27:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 05 02:27:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973112124' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 02:27:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 05 02:27:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744810020' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:52 compute-0 ceph-mon[192914]: pgmap v2369: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1973112124' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 02:27:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1744810020' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:52 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15751 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:52 compute-0 podman[476194]: 2025-12-05 02:27:52.718418972 +0000 UTC m=+0.115834510 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:27:52 compute-0 podman[476191]: 2025-12-05 02:27:52.725503908 +0000 UTC m=+0.130232959 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 05 02:27:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 05 02:27:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257265558' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:27:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:52 compute-0 nova_compute[349548]: 2025-12-05 02:27:52.840 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:53 compute-0 ceph-mon[192914]: from='client.15751 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:53 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2257265558' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:27:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 05 02:27:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508745667' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 02:27:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 05 02:27:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4127698388' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 nova_compute[349548]: 2025-12-05 02:27:54.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:54 compute-0 nova_compute[349548]: 2025-12-05 02:27:54.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:27:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec 05 02:27:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2328014263' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 ceph-mon[192914]: pgmap v2370: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2508745667' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4127698388' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2328014263' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 ovs-appctl[476733]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 02:27:54 compute-0 ovs-appctl[476737]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 02:27:54 compute-0 ovs-appctl[476742]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 05 02:27:54 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15761 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Dec 05 02:27:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640341707' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 02:27:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1640341707' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 05 02:27:55 compute-0 nova_compute[349548]: 2025-12-05 02:27:55.495 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Dec 05 02:27:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/425062436' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:55 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15767 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:27:56.230 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:27:56.231 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:27:56.231 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:27:56 compute-0 ceph-mon[192914]: from='client.15761 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:56 compute-0 ceph-mon[192914]: pgmap v2371: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/425062436' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Dec 05 02:27:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019496411' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:56 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15771 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.099 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.100 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:27:57 compute-0 ceph-mon[192914]: from='client.15767 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3019496411' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 ceph-mon[192914]: pgmap v2372: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:57 compute-0 ceph-mon[192914]: from='client.15771 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15773 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:27:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880591417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.604 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:27:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Dec 05 02:27:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149218261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.841 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.913 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.915 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3829MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.916 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:27:57 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.916 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.999 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:57.999 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.017 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:27:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Dec 05 02:27:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091475944' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 02:27:58 compute-0 ceph-mon[192914]: from='client.15773 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/880591417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:27:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/149218261' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 05 02:27:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1091475944' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 05 02:27:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:27:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:27:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633524408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.517 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.523 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.542 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.544 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:27:58 compute-0 nova_compute[349548]: 2025-12-05 02:27:58.544 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:27:58 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15783 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:58 compute-0 podman[477524]: 2025-12-05 02:27:58.684993316 +0000 UTC m=+0.096351667 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec 05 02:27:58 compute-0 podman[477525]: 2025-12-05 02:27:58.70476669 +0000 UTC m=+0.106642805 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 05 02:27:58 compute-0 podman[477523]: 2025-12-05 02:27:58.716123879 +0000 UTC m=+0.121643280 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:27:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15785 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:27:59 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:27:59 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3633524408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:27:59 compute-0 ceph-mon[192914]: from='client.15783 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:59 compute-0 ceph-mon[192914]: pgmap v2373: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:27:59 compute-0 ceph-mon[192914]: from='client.15785 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:27:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 05 02:27:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185837423' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:27:59 compute-0 nova_compute[349548]: 2025-12-05 02:27:59.544 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:59 compute-0 nova_compute[349548]: 2025-12-05 02:27:59.545 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:27:59 compute-0 podman[158197]: time="2025-12-05T02:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec 05 02:27:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Dec 05 02:27:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154767681' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 nova_compute[349548]: 2025-12-05 02:28:00.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:00 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15791 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/185837423' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3154767681' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 ceph-mon[192914]: from='client.15791 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 nova_compute[349548]: 2025-12-05 02:28:00.500 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:00 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15793 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:28:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:01 compute-0 nova_compute[349548]: 2025-12-05 02:28:01.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 05 02:28:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771574563' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 02:28:01 compute-0 ceph-mon[192914]: from='client.15793 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:28:01 compute-0 ceph-mon[192914]: pgmap v2374: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/771574563' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 05 02:28:01 compute-0 openstack_network_exporter[366555]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:28:01 compute-0 openstack_network_exporter[366555]: ERROR   02:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:28:01 compute-0 openstack_network_exporter[366555]: ERROR   02:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:28:01 compute-0 openstack_network_exporter[366555]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:28:01 compute-0 openstack_network_exporter[366555]: ERROR   02:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:28:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Dec 05 02:28:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309279398' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 02:28:02 compute-0 nova_compute[349548]: 2025-12-05 02:28:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2309279398' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 05 02:28:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:02 compute-0 nova_compute[349548]: 2025-12-05 02:28:02.843 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:03 compute-0 nova_compute[349548]: 2025-12-05 02:28:03.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:03 compute-0 ceph-mon[192914]: pgmap v2375: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:04 compute-0 nova_compute[349548]: 2025-12-05 02:28:04.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:05 compute-0 nova_compute[349548]: 2025-12-05 02:28:05.504 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:05 compute-0 ceph-mon[192914]: pgmap v2376: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:07 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 02:28:07 compute-0 nova_compute[349548]: 2025-12-05 02:28:07.846 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:07 compute-0 ceph-mon[192914]: pgmap v2377: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:09 compute-0 podman[478843]: 2025-12-05 02:28:09.433753611 +0000 UTC m=+0.128202230 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 05 02:28:09 compute-0 podman[478844]: 2025-12-05 02:28:09.441857476 +0000 UTC m=+0.108685454 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 05 02:28:09 compute-0 podman[478851]: 2025-12-05 02:28:09.475468691 +0000 UTC m=+0.139273841 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:28:09 compute-0 podman[478854]: 2025-12-05 02:28:09.512774244 +0000 UTC m=+0.158999764 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 05 02:28:09 compute-0 ceph-mon[192914]: pgmap v2378: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:10 compute-0 nova_compute[349548]: 2025-12-05 02:28:10.507 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:11 compute-0 ceph-mon[192914]: pgmap v2379: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:12 compute-0 systemd[1]: Starting Time & Date Service...
Dec 05 02:28:12 compute-0 systemd[1]: Started Time & Date Service.
Dec 05 02:28:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:12 compute-0 nova_compute[349548]: 2025-12-05 02:28:12.848 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:14 compute-0 ceph-mon[192914]: pgmap v2380: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:15 compute-0 ceph-mon[192914]: pgmap v2381: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:15 compute-0 nova_compute[349548]: 2025-12-05 02:28:15.513 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:28:16
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log']
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:17 compute-0 nova_compute[349548]: 2025-12-05 02:28:17.852 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:17 compute-0 ceph-mon[192914]: pgmap v2382: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:28:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:20 compute-0 ceph-mon[192914]: pgmap v2383: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:20 compute-0 nova_compute[349548]: 2025-12-05 02:28:20.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:22 compute-0 ceph-mon[192914]: pgmap v2384: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:22 compute-0 nova_compute[349548]: 2025-12-05 02:28:22.853 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:23 compute-0 podman[479005]: 2025-12-05 02:28:23.723076715 +0000 UTC m=+0.129935380 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 05 02:28:23 compute-0 podman[479006]: 2025-12-05 02:28:23.770632915 +0000 UTC m=+0.172368542 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:28:24 compute-0 ceph-mon[192914]: pgmap v2385: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:25 compute-0 nova_compute[349548]: 2025-12-05 02:28:25.521 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:26 compute-0 ceph-mon[192914]: pgmap v2386: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:28:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
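The pg_autoscaler targets above are mutually consistent with pg_target = usage_ratio x bias x K, with K = 300 for this cluster before quantization to a power of two with a per-pool floor. The log does not state where 300 comes from; a plausible reading is 3 OSDs x mon_target_pg_per_osd = 100, which is an assumption here. A sketch reproducing the logged numbers:

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0009191400908380543, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    K = 300  # assumption: 3 OSDs x mon_target_pg_per_osd=100
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * K)
    # .mgr   -> 0.0021557249951...  (log: 0.0021557249951162337)
    # images -> 0.2757420272514...  (log: 0.2757420272514163)
    # meta   -> 0.0006104707950...  (log: 0.0006104707950771635)

This also explains why cephfs.cephfs.meta quantizes to 16 while staying at 32: its raw target is far below both, and the autoscaler only reports, not shrinks, until thresholds are crossed.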
Dec 05 02:28:27 compute-0 nova_compute[349548]: 2025-12-05 02:28:27.858 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:28 compute-0 ceph-mon[192914]: pgmap v2387: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:29 compute-0 podman[479046]: 2025-12-05 02:28:29.323985059 +0000 UTC m=+0.114290167 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 05 02:28:29 compute-0 ceph-mon[192914]: pgmap v2388: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:29 compute-0 podman[479048]: 2025-12-05 02:28:29.359658014 +0000 UTC m=+0.140032293 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 02:28:29 compute-0 podman[479047]: 2025-12-05 02:28:29.371959461 +0000 UTC m=+0.170703003 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, release=1214.1726694543, distribution-scope=public, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 02:28:29 compute-0 podman[158197]: time="2025-12-05T02:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8214 "" "Go-http-client/1.1"
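The two GET lines above are podman's REST API being scraped over its unix socket; the podman_exporter config earlier in this section mounts /run/podman/podman.sock and sets CONTAINER_HOST to it. A minimal stdlib-only sketch of the same call, assuming the podman API service is listening on that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers))  # container count, cf. the 200 responses above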
Dec 05 02:28:30 compute-0 nova_compute[349548]: 2025-12-05 02:28:30.526 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:31 compute-0 openstack_network_exporter[366555]: ERROR   02:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:28:31 compute-0 openstack_network_exporter[366555]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:28:31 compute-0 openstack_network_exporter[366555]: ERROR   02:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:28:31 compute-0 openstack_network_exporter[366555]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:28:31 compute-0 openstack_network_exporter[366555]: ERROR   02:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
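The exporter errors above mean its ovs-appctl-style calls found no control sockets for ovsdb-server or ovn-northd; on a compute node that runs ovn-controller but not ovn-northd, the northd failures are expected. A quick check of which sockets actually exist; the glob patterns are the conventional OVS/OVN run directories and are an assumption, not paths taken from this log:

    import glob

    # Conventional locations for OVS/OVN daemon control sockets (assumption).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")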
Dec 05 02:28:31 compute-0 ceph-mon[192914]: pgmap v2389: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:32 compute-0 nova_compute[349548]: 2025-12-05 02:28:32.860 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:33 compute-0 ceph-mon[192914]: pgmap v2390: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:35 compute-0 nova_compute[349548]: 2025-12-05 02:28:35.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:35 compute-0 ceph-mon[192914]: pgmap v2391: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:37 compute-0 nova_compute[349548]: 2025-12-05 02:28:37.863 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:38 compute-0 ceph-mon[192914]: pgmap v2392: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.329 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.362 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.363 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.364 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:28:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:28:38.373 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
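The cycle above shows the ceilometer compute agent idling: every pollster is registered, the local_instances discovery returns an empty list, so each pollster is skipped and then marked finished without emitting samples. A minimal sketch for summarizing one such cycle from journal text on stdin (assumes the exact "Skip pollster ..." and "Finished processing pollster [...]" message formats seen above; the script name is illustrative):

    #!/usr/bin/env python3
    # poll_summary.py -- condense a ceilometer polling cycle from journal text.
    import re
    import sys

    skipped, finished = [], []
    for line in sys.stdin:
        m = re.search(r"Skip pollster (\S+?),", line)
        if m:
            skipped.append(m.group(1))
            continue
        m = re.search(r"Finished processing pollster \[([^\]]+)\]", line)
        if m:
            finished.append(m.group(1))

    print(f"skipped ({len(skipped)}): {', '.join(skipped)}")
    print(f"finished ({len(finished)}): {', '.join(finished)}")

Fed with journalctl -t ceilometer_agent_compute output, this collapses the several dozen DEBUG lines per cycle into two summary lines.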
Dec 05 02:28:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
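The pgmap summaries from ceph-mgr and ceph-mon recur every couple of seconds with the same shape. A hedged regex for splitting one such line into fields (the sample string is copied verbatim from the entries above):

    #!/usr/bin/env python3
    # Parse a ceph pgmap summary line into named fields.
    import re

    LINE = ("pgmap v2393: 321 pgs: 321 active+clean; "
            "57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail")

    m = re.match(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
        LINE,
    )
    print(m.groupdict() if m else "no match")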
Dec 05 02:28:39 compute-0 podman[479100]: 2025-12-05 02:28:39.705640384 +0000 UTC m=+0.106822251 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec 05 02:28:39 compute-0 podman[479097]: 2025-12-05 02:28:39.712272036 +0000 UTC m=+0.116770749 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:28:39 compute-0 podman[479098]: 2025-12-05 02:28:39.719170926 +0000 UTC m=+0.119446386 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:28:39 compute-0 podman[479099]: 2025-12-05 02:28:39.768211839 +0000 UTC m=+0.161789475 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
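The four health_status events above are podman's periodic healthchecks, all reporting healthy with a zero failing streak. The same checks can be re-run on demand with podman's healthcheck subcommand; a sketch, assuming the podman CLI on this host and the container names exactly as logged:

    #!/usr/bin/env python3
    # Re-run the podman healthchecks for the containers logged above.
    import subprocess

    NAMES = ["openstack_network_exporter", "multipathd",
             "node_exporter", "ovn_controller"]

    for name in NAMES:
        # "podman healthcheck run" exits 0 when the check passes.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(f"{name}: {'healthy' if rc == 0 else 'unhealthy'}")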
Dec 05 02:28:40 compute-0 ceph-mon[192914]: pgmap v2393: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:40 compute-0 nova_compute[349548]: 2025-12-05 02:28:40.534 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:42 compute-0 ceph-mon[192914]: pgmap v2394: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:42 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 05 02:28:42 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 05 02:28:42 compute-0 nova_compute[349548]: 2025-12-05 02:28:42.864 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:43 compute-0 sudo[470812]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:43 compute-0 sshd-session[470811]: Received disconnect from 192.168.122.10 port 55824:11: disconnected by user
Dec 05 02:28:43 compute-0 sshd-session[470811]: Disconnected from user zuul 192.168.122.10 port 55824
Dec 05 02:28:43 compute-0 sshd-session[470808]: pam_unix(sshd:session): session closed for user zuul
Dec 05 02:28:43 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Dec 05 02:28:43 compute-0 systemd[1]: session-64.scope: Consumed 3min 15.931s CPU time, 1.0G memory peak, read 578.2M from disk, written 111.6M to disk.
Dec 05 02:28:43 compute-0 systemd-logind[792]: Session 64 logged out. Waiting for processes to exit.
Dec 05 02:28:43 compute-0 systemd-logind[792]: Removed session 64.
Dec 05 02:28:43 compute-0 sshd-session[479184]: Accepted publickey for zuul from 192.168.122.10 port 58340 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 02:28:43 compute-0 systemd-logind[792]: New session 65 of user zuul.
Dec 05 02:28:43 compute-0 systemd[1]: Started Session 65 of User zuul.
Dec 05 02:28:43 compute-0 sshd-session[479184]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:28:43 compute-0 sudo[479188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-05-igvrqeu.tar.xz
Dec 05 02:28:43 compute-0 sudo[479188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:28:44 compute-0 sudo[479188]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:44 compute-0 sshd-session[479187]: Received disconnect from 192.168.122.10 port 58340:11: disconnected by user
Dec 05 02:28:44 compute-0 sshd-session[479187]: Disconnected from user zuul 192.168.122.10 port 58340
Dec 05 02:28:44 compute-0 sshd-session[479184]: pam_unix(sshd:session): session closed for user zuul
Dec 05 02:28:44 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Dec 05 02:28:44 compute-0 systemd-logind[792]: Session 65 logged out. Waiting for processes to exit.
Dec 05 02:28:44 compute-0 systemd-logind[792]: Removed session 65.
Dec 05 02:28:44 compute-0 ceph-mon[192914]: pgmap v2395: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:44 compute-0 sshd-session[479213]: Accepted publickey for zuul from 192.168.122.10 port 58354 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 02:28:44 compute-0 systemd-logind[792]: New session 66 of user zuul.
Dec 05 02:28:44 compute-0 systemd[1]: Started Session 66 of User zuul.
Dec 05 02:28:44 compute-0 sshd-session[479213]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:28:44 compute-0 sudo[479217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 05 02:28:44 compute-0 sudo[479217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 05 02:28:44 compute-0 sudo[479217]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:44 compute-0 sshd-session[479216]: Received disconnect from 192.168.122.10 port 58354:11: disconnected by user
Dec 05 02:28:44 compute-0 sshd-session[479216]: Disconnected from user zuul 192.168.122.10 port 58354
Dec 05 02:28:44 compute-0 sshd-session[479213]: pam_unix(sshd:session): session closed for user zuul
Dec 05 02:28:44 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Dec 05 02:28:44 compute-0 systemd-logind[792]: Session 66 logged out. Waiting for processes to exit.
Dec 05 02:28:44 compute-0 systemd-logind[792]: Removed session 66.
Dec 05 02:28:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:28:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396410785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:28:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:28:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3396410785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
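These audit entries record client.openstack polling pool capacity with "df" and "osd pool get-quota" against the volumes pool. The same two queries can be issued from the ceph CLI; a sketch, assuming a reachable cluster and a usable keyring for whichever client runs it:

    #!/usr/bin/env python3
    # Repeat the two capacity queries seen in the audit log.
    import json
    import subprocess

    def ceph(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = ceph("df")
    quota = ceph("osd", "pool", "get-quota", "volumes")
    print("total bytes used:", df["stats"]["total_used_bytes"])
    print("volumes quota:", quota)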
Dec 05 02:28:45 compute-0 nova_compute[349548]: 2025-12-05 02:28:45.539 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:46 compute-0 ceph-mon[192914]: pgmap v2396: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3396410785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:28:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3396410785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:28:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:47 compute-0 sudo[479242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:47 compute-0 sudo[479242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:47 compute-0 sudo[479242]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:47 compute-0 sudo[479267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:28:47 compute-0 sudo[479267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:47 compute-0 sudo[479267]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:47 compute-0 sudo[479292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:47 compute-0 sudo[479292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:47 compute-0 sudo[479292]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:47 compute-0 sudo[479317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 05 02:28:47 compute-0 sudo[479317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:47 compute-0 nova_compute[349548]: 2025-12-05 02:28:47.867 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:48 compute-0 sudo[479317]: pam_unix(sudo:session): session closed for user root
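The repeating sudo quartet above (/bin/true, which python3, /bin/true, then a long cephadm invocation) is the cephadm mgr module probing this host over ssh: the no-op commands verify passwordless sudo and locate python3 before the real check-host call. A sketch that reconstructs those sequences from sudo journal records on stdin (keyed to the "ceph-admin : ... COMMAND=..." format above):

    #!/usr/bin/env python3
    # List cephadm's sudo command sequence from journal text on stdin.
    import re
    import sys

    pattern = re.compile(r"sudo\[(\d+)\]:\s+ceph-admin :.*COMMAND=(.+)$")
    for line in sys.stdin:
        m = pattern.search(line)
        if m:
            pid, cmd = m.groups()
            print(f"{pid}  {cmd}")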
Dec 05 02:28:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:28:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:28:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:48 compute-0 ceph-mon[192914]: pgmap v2397: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:48 compute-0 sudo[479362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:48 compute-0 sudo[479362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:48 compute-0 sudo[479362]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:48 compute-0 sudo[479387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:28:48 compute-0 sudo[479387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:48 compute-0 sudo[479387]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:48 compute-0 sudo[479412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:48 compute-0 sudo[479412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:48 compute-0 sudo[479412]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:48 compute-0 sudo[479437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:28:48 compute-0 sudo[479437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:49 compute-0 sudo[479437]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fcd96100-8b35-4325-8bf8-d95bef74f870 does not exist
Dec 05 02:28:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b342e88b-32a3-482b-bac8-9999bb2ca83b does not exist
Dec 05 02:28:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6e5d4fe6-6f90-4cab-83a0-b6cf808d34f2 does not exist
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:28:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:28:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:28:49 compute-0 sudo[479494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:49 compute-0 sudo[479494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:49 compute-0 sudo[479494]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:49 compute-0 sudo[479519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:28:49 compute-0 sudo[479519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:49 compute-0 sudo[479519]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:49 compute-0 sudo[479544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:49 compute-0 sudo[479544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:49 compute-0 sudo[479544]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:50 compute-0 sudo[479569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:28:50 compute-0 sudo[479569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:50 compute-0 ceph-mon[192914]: pgmap v2398: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:28:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:28:50 compute-0 nova_compute[349548]: 2025-12-05 02:28:50.544 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.613412862 +0000 UTC m=+0.090245610 container create 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.578734606 +0000 UTC m=+0.055567404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:50 compute-0 systemd[1]: Started libpod-conmon-0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8.scope.
Dec 05 02:28:50 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.752517377 +0000 UTC m=+0.229350135 container init 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.764735192 +0000 UTC m=+0.241567940 container start 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.771604891 +0000 UTC m=+0.248437639 container attach 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:28:50 compute-0 determined_lamarr[479648]: 167 167
Dec 05 02:28:50 compute-0 systemd[1]: libpod-0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8.scope: Deactivated successfully.
Dec 05 02:28:50 compute-0 conmon[479648]: conmon 0764612f75e32e705c57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8.scope/container/memory.events
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.780393506 +0000 UTC m=+0.257226284 container died 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3acd741c19419a67e3e476088fafdb05de293abf64ef90d86704e2bbf71e431d-merged.mount: Deactivated successfully.
Dec 05 02:28:50 compute-0 podman[479632]: 2025-12-05 02:28:50.865418953 +0000 UTC m=+0.342251701 container remove 0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lamarr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:28:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:50 compute-0 systemd[1]: libpod-conmon-0764612f75e32e705c57d4b79ef7921659bbd17e61313717920616a5ab4e20d8.scope: Deactivated successfully.
Dec 05 02:28:51 compute-0 nova_compute[349548]: 2025-12-05 02:28:51.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:51 compute-0 nova_compute[349548]: 2025-12-05 02:28:51.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:28:51 compute-0 nova_compute[349548]: 2025-12-05 02:28:51.070 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:28:51 compute-0 nova_compute[349548]: 2025-12-05 02:28:51.093 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
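nova-compute's periodic _heal_instance_info_cache likewise finds nothing to do, consistent with the empty ceilometer discovery earlier: no instances are scheduled on this host. One way to confirm that from the API side, as a sketch (assumes admin credentials in the environment and python-openstackclient installed; the hostname is taken from the log):

    #!/usr/bin/env python3
    # Cross-check that no instances are placed on compute-0.
    import subprocess

    out = subprocess.run(
        ["openstack", "server", "list", "--all-projects",
         "--host", "compute-0", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip() or "[]")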
Dec 05 02:28:51 compute-0 podman[479671]: 2025-12-05 02:28:51.156464637 +0000 UTC m=+0.101738023 container create 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:28:51 compute-0 podman[479671]: 2025-12-05 02:28:51.118532916 +0000 UTC m=+0.063806362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:51 compute-0 systemd[1]: Started libpod-conmon-8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f.scope.
Dec 05 02:28:51 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:51 compute-0 podman[479671]: 2025-12-05 02:28:51.35130203 +0000 UTC m=+0.296575456 container init 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 05 02:28:51 compute-0 podman[479671]: 2025-12-05 02:28:51.370086155 +0000 UTC m=+0.315359551 container start 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:28:51 compute-0 podman[479671]: 2025-12-05 02:28:51.378655013 +0000 UTC m=+0.323928479 container attach 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:28:52 compute-0 ceph-mon[192914]: pgmap v2399: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:52 compute-0 eager_brahmagupta[479686]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:28:52 compute-0 eager_brahmagupta[479686]: --> relative data size: 1.0
Dec 05 02:28:52 compute-0 eager_brahmagupta[479686]: --> All data devices are unavailable
Dec 05 02:28:52 compute-0 systemd[1]: libpod-8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f.scope: Deactivated successfully.
Dec 05 02:28:52 compute-0 systemd[1]: libpod-8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f.scope: Consumed 1.289s CPU time.
Dec 05 02:28:52 compute-0 podman[479671]: 2025-12-05 02:28:52.713119559 +0000 UTC m=+1.658392945 container died 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8258b372c9057d7a51ec16f386602086690245a292934fca5c06dbf2bf3c7cb1-merged.mount: Deactivated successfully.
Dec 05 02:28:52 compute-0 podman[479671]: 2025-12-05 02:28:52.803352527 +0000 UTC m=+1.748625923 container remove 8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 02:28:52 compute-0 systemd[1]: libpod-conmon-8aafae5fe869215c1dab247c3a831964520576ac21a3305e730483bd5dc50a6f.scope: Deactivated successfully.
Dec 05 02:28:52 compute-0 sudo[479569]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:52 compute-0 nova_compute[349548]: 2025-12-05 02:28:52.869 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:53 compute-0 sudo[479726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:53 compute-0 sudo[479726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:53 compute-0 sudo[479726]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:53 compute-0 sudo[479751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:28:53 compute-0 sudo[479751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:53 compute-0 sudo[479751]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:53 compute-0 sudo[479776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:53 compute-0 sudo[479776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:53 compute-0 sudo[479776]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:53 compute-0 sudo[479801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:28:53 compute-0 sudo[479801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.488529) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733488639, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1126, "num_deletes": 250, "total_data_size": 1442453, "memory_usage": 1473048, "flush_reason": "Manual Compaction"}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733502506, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 930351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48614, "largest_seqno": 49739, "table_properties": {"data_size": 925683, "index_size": 2000, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13314, "raw_average_key_size": 21, "raw_value_size": 915311, "raw_average_value_size": 1488, "num_data_blocks": 89, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901648, "oldest_key_time": 1764901648, "file_creation_time": 1764901733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 14072 microseconds, and 8111 cpu microseconds.
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.502612) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 930351 bytes OK
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.502640) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.505247) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.505273) EVENT_LOG_v1 {"time_micros": 1764901733505264, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.505299) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1436937, prev total WAL file size 1436937, number of live WAL files 2.
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.506822) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(908KB)], [116(9099KB)]
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733506867, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10248132, "oldest_snapshot_seqno": -1}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 6392 keys, 7434754 bytes, temperature: kUnknown
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733577574, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 7434754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7396359, "index_size": 21325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 166991, "raw_average_key_size": 26, "raw_value_size": 7285190, "raw_average_value_size": 1139, "num_data_blocks": 844, "num_entries": 6392, "num_filter_entries": 6392, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.577861) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 7434754 bytes
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.580690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.9 rd, 105.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.9 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(19.0) write-amplify(8.0) OK, records in: 6867, records dropped: 475 output_compression: NoCompression
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.580723) EVENT_LOG_v1 {"time_micros": 1764901733580709, "job": 70, "event": "compaction_finished", "compaction_time_micros": 70722, "compaction_time_cpu_micros": 40822, "output_level": 6, "num_output_files": 1, "total_output_size": 7434754, "num_input_records": 6867, "num_output_records": 6392, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733581238, "job": 70, "event": "table_file_deletion", "file_number": 118}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901733584792, "job": 70, "event": "table_file_deletion", "file_number": 116}
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.506563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.585104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.585111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.585114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.585117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:28:53.585120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:28:54 compute-0 podman[479865]: 2025-12-05 02:28:54.121265612 +0000 UTC m=+0.099060105 container create edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 02:28:54 compute-0 podman[479865]: 2025-12-05 02:28:54.086586846 +0000 UTC m=+0.064381399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:54 compute-0 systemd[1]: Started libpod-conmon-edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369.scope.
Dec 05 02:28:54 compute-0 ceph-mon[192914]: pgmap v2400: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:54 compute-0 podman[479865]: 2025-12-05 02:28:54.275987281 +0000 UTC m=+0.253781834 container init edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:28:54 compute-0 podman[479865]: 2025-12-05 02:28:54.296798355 +0000 UTC m=+0.274592848 container start edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:28:54 compute-0 eager_mayer[479891]: 167 167
Dec 05 02:28:54 compute-0 podman[479865]: 2025-12-05 02:28:54.303405836 +0000 UTC m=+0.281200339 container attach edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:28:54 compute-0 systemd[1]: libpod-edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369.scope: Deactivated successfully.
Dec 05 02:28:54 compute-0 conmon[479891]: conmon edd8250f481b41589f60 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369.scope/container/memory.events
Dec 05 02:28:54 compute-0 podman[479878]: 2025-12-05 02:28:54.320051639 +0000 UTC m=+0.131302180 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:28:54 compute-0 podman[479879]: 2025-12-05 02:28:54.347137045 +0000 UTC m=+0.156106300 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:28:54 compute-0 podman[479924]: 2025-12-05 02:28:54.403738387 +0000 UTC m=+0.067336844 container died edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 02:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca6e3ff3d8c59ff68c9efc400eb8b7a855774ce02e60e7d7438b95e15223028-merged.mount: Deactivated successfully.
Dec 05 02:28:54 compute-0 podman[479924]: 2025-12-05 02:28:54.475140249 +0000 UTC m=+0.138738666 container remove edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:28:54 compute-0 systemd[1]: libpod-conmon-edd8250f481b41589f60d26e6c8cf4cb7a8152c99392e7f0912f880fef2d3369.scope: Deactivated successfully.
Dec 05 02:28:54 compute-0 podman[479945]: 2025-12-05 02:28:54.812992581 +0000 UTC m=+0.085203483 container create b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:28:54 compute-0 podman[479945]: 2025-12-05 02:28:54.781784375 +0000 UTC m=+0.053995327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:54 compute-0 systemd[1]: Started libpod-conmon-b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5.scope.
Dec 05 02:28:54 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cea1163faff1f020f9abb3364a72a434083f05ee194056ed2e89d657aea7a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cea1163faff1f020f9abb3364a72a434083f05ee194056ed2e89d657aea7a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cea1163faff1f020f9abb3364a72a434083f05ee194056ed2e89d657aea7a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cea1163faff1f020f9abb3364a72a434083f05ee194056ed2e89d657aea7a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:55 compute-0 podman[479945]: 2025-12-05 02:28:55.009948825 +0000 UTC m=+0.282159737 container init b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:28:55 compute-0 podman[479945]: 2025-12-05 02:28:55.033339524 +0000 UTC m=+0.305550426 container start b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:28:55 compute-0 podman[479945]: 2025-12-05 02:28:55.04045902 +0000 UTC m=+0.312669982 container attach b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:28:55 compute-0 nova_compute[349548]: 2025-12-05 02:28:55.548 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:55 compute-0 sweet_lalande[479961]: {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     "0": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "devices": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "/dev/loop3"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             ],
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_name": "ceph_lv0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_size": "21470642176",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "name": "ceph_lv0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "tags": {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_name": "ceph",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.crush_device_class": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.encrypted": "0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_id": "0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.vdo": "0"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             },
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "vg_name": "ceph_vg0"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         }
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     ],
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     "1": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "devices": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "/dev/loop4"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             ],
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_name": "ceph_lv1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_size": "21470642176",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "name": "ceph_lv1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "tags": {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_name": "ceph",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.crush_device_class": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.encrypted": "0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_id": "1",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.vdo": "0"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             },
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "vg_name": "ceph_vg1"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         }
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     ],
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     "2": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "devices": [
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "/dev/loop5"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             ],
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_name": "ceph_lv2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_size": "21470642176",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "name": "ceph_lv2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "tags": {
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.cluster_name": "ceph",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.crush_device_class": "",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.encrypted": "0",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osd_id": "2",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:                 "ceph.vdo": "0"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             },
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "type": "block",
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:             "vg_name": "ceph_vg2"
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:         }
Dec 05 02:28:55 compute-0 sweet_lalande[479961]:     ]
Dec 05 02:28:55 compute-0 sweet_lalande[479961]: }
Dec 05 02:28:55 compute-0 systemd[1]: libpod-b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5.scope: Deactivated successfully.
Dec 05 02:28:55 compute-0 podman[479945]: 2025-12-05 02:28:55.939084681 +0000 UTC m=+1.211295583 container died b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-47cea1163faff1f020f9abb3364a72a434083f05ee194056ed2e89d657aea7a0-merged.mount: Deactivated successfully.
Dec 05 02:28:56 compute-0 podman[479945]: 2025-12-05 02:28:56.051355059 +0000 UTC m=+1.323565961 container remove b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:28:56 compute-0 systemd[1]: libpod-conmon-b14b0affbdd081fac0529839d62fa152ac45f04c5624435c70624e57cb9aa9d5.scope: Deactivated successfully.
Dec 05 02:28:56 compute-0 nova_compute[349548]: 2025-12-05 02:28:56.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:56 compute-0 nova_compute[349548]: 2025-12-05 02:28:56.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:28:56 compute-0 sudo[479801]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:28:56.232 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:28:56.232 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:28:56.233 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:28:56 compute-0 sudo[479981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:56 compute-0 ceph-mon[192914]: pgmap v2401: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:56 compute-0 sudo[479981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:56 compute-0 sudo[479981]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:56 compute-0 sudo[480006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:28:56 compute-0 sudo[480006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:56 compute-0 sudo[480006]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:56 compute-0 sudo[480031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:56 compute-0 sudo[480031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:56 compute-0 sudo[480031]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:56 compute-0 sudo[480056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:28:56 compute-0 sudo[480056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.094 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.094 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.095 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.095 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.297254784 +0000 UTC m=+0.080168087 container create 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.268351196 +0000 UTC m=+0.051264499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:57 compute-0 systemd[1]: Started libpod-conmon-109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922.scope.
Dec 05 02:28:57 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.447634257 +0000 UTC m=+0.230547620 container init 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.460229553 +0000 UTC m=+0.243142836 container start 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.466742212 +0000 UTC m=+0.249655595 container attach 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:28:57 compute-0 elastic_mclean[480154]: 167 167
Dec 05 02:28:57 compute-0 systemd[1]: libpod-109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922.scope: Deactivated successfully.
Dec 05 02:28:57 compute-0 conmon[480154]: conmon 109679f644be3ad4e190 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922.scope/container/memory.events
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.472410566 +0000 UTC m=+0.255323899 container died 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af0a3c1cf8194f1ce94475cbc6432c6473113e1e636523dace3abae0c04f394-merged.mount: Deactivated successfully.
Dec 05 02:28:57 compute-0 podman[480119]: 2025-12-05 02:28:57.569131582 +0000 UTC m=+0.352044895 container remove 109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:28:57 compute-0 systemd[1]: libpod-conmon-109679f644be3ad4e1905a02a8d1d7d03f0b28efa91a3dea6a22e8534fbbf922.scope: Deactivated successfully.
Dec 05 02:28:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:28:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143053187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.661 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:28:57 compute-0 nova_compute[349548]: 2025-12-05 02:28:57.871 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:28:57 compute-0 podman[480179]: 2025-12-05 02:28:57.922288398 +0000 UTC m=+0.118040586 container create f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:28:57 compute-0 podman[480179]: 2025-12-05 02:28:57.886728586 +0000 UTC m=+0.082480854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:28:57 compute-0 systemd[1]: Started libpod-conmon-f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71.scope.
Dec 05 02:28:58 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11a8e26697ea0e7113eb7405cafeee0e2bddaf6c2a2c7cbcd7b6f2bb3af06c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11a8e26697ea0e7113eb7405cafeee0e2bddaf6c2a2c7cbcd7b6f2bb3af06c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11a8e26697ea0e7113eb7405cafeee0e2bddaf6c2a2c7cbcd7b6f2bb3af06c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d11a8e26697ea0e7113eb7405cafeee0e2bddaf6c2a2c7cbcd7b6f2bb3af06c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:28:58 compute-0 podman[480179]: 2025-12-05 02:28:58.056073579 +0000 UTC m=+0.251825807 container init f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:28:58 compute-0 podman[480179]: 2025-12-05 02:28:58.077037158 +0000 UTC m=+0.272789336 container start f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:28:58 compute-0 podman[480179]: 2025-12-05 02:28:58.08193165 +0000 UTC m=+0.277683858 container attach f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.196 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.198 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3905MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.198 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.198 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:28:58 compute-0 ceph-mon[192914]: pgmap v2402: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/143053187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.305 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.306 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.451 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.472 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.473 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
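
Placement derives the schedulable capacity of each resource class in this inventory as (total - reserved) × allocation_ratio, so the payload above advertises 32 VCPUs, 7168 MB of RAM and 52.2 GB of disk. A quick check against the logged numbers:

    # Worked example: capacity placement will offer, computed from the
    # inventory dict logged above as (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
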
Dec 05 02:28:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.492 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.512 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:28:58 compute-0 nova_compute[349548]: 2025-12-05 02:28:58.541 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:28:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:28:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:28:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059832123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:28:59 compute-0 nova_compute[349548]: 2025-12-05 02:28:59.097 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:28:59 compute-0 nova_compute[349548]: 2025-12-05 02:28:59.111 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:28:59 compute-0 nova_compute[349548]: 2025-12-05 02:28:59.130 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:28:59 compute-0 nova_compute[349548]: 2025-12-05 02:28:59.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:28:59 compute-0 nova_compute[349548]: 2025-12-05 02:28:59.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
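
The acquire/release pair bracketing the update pass (held 0.935s) is oslo.concurrency's named-lock machinery serializing resource-tracker updates. A minimal sketch of the same pattern, assuming nothing beyond the lock name in the log:

    # Minimal sketch of the "compute_resources" locking pattern logged
    # above; lockutils.synchronized serializes concurrent update passes.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # inventory refresh and placement sync run under the lock;
        # the log shows this pass holding it for 0.935s
        pass
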
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]: {
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_id": 0,
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "type": "bluestore"
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     },
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_id": 1,
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "type": "bluestore"
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     },
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_id": 2,
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:         "type": "bluestore"
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]:     }
Dec 05 02:28:59 compute-0 xenodochial_babbage[480195]: }
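
The JSON block above is the stdout of the short-lived ceph container (xenodochial_babbage): a per-OSD listing keyed by OSD UUID, in the shape `ceph-volume raw list` emits. A hypothetical parse, with one entry inlined verbatim from the log:

    # Hypothetical parse of the OSD listing printed above; raw_json would
    # normally be the container's captured stdout.
    import json

    raw_json = '''{"8c4de221-4fda-4bb1-b794-fc4329742186": {
        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
        "type": "bluestore"}}'''
    for osd in sorted(json.loads(raw_json).values(), key=lambda o: o['osd_id']):
        print(osd['osd_id'], osd['device'], osd['type'])
    # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
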
Dec 05 02:28:59 compute-0 systemd[1]: libpod-f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71.scope: Deactivated successfully.
Dec 05 02:28:59 compute-0 systemd[1]: libpod-f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71.scope: Consumed 1.135s CPU time.
Dec 05 02:28:59 compute-0 conmon[480195]: conmon f1c4eb384b7713afbe4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71.scope/container/memory.events
Dec 05 02:28:59 compute-0 podman[480179]: 2025-12-05 02:28:59.234077926 +0000 UTC m=+1.429830144 container died f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:28:59 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1059832123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11a8e26697ea0e7113eb7405cafeee0e2bddaf6c2a2c7cbcd7b6f2bb3af06c9-merged.mount: Deactivated successfully.
Dec 05 02:28:59 compute-0 podman[480179]: 2025-12-05 02:28:59.33969362 +0000 UTC m=+1.535445808 container remove f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 05 02:28:59 compute-0 systemd[1]: libpod-conmon-f1c4eb384b7713afbe4e293c49d4177ba2e2e6f3019d5883cc410b38c78f2a71.scope: Deactivated successfully.
Dec 05 02:28:59 compute-0 sudo[480056]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:28:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:28:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:28:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 417f14f7-82f0-440b-bcfb-fe720a6b0145 does not exist
Dec 05 02:28:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d09b46ef-b63b-46fb-9650-f94137f5a3be does not exist
Dec 05 02:28:59 compute-0 podman[480261]: 2025-12-05 02:28:59.532839494 +0000 UTC m=+0.130558369 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 05 02:28:59 compute-0 sudo[480269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:28:59 compute-0 sudo[480269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:59 compute-0 sudo[480269]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:59 compute-0 podman[480306]: 2025-12-05 02:28:59.696106191 +0000 UTC m=+0.114480893 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Dec 05 02:28:59 compute-0 sudo[480318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:28:59 compute-0 podman[480305]: 2025-12-05 02:28:59.699479839 +0000 UTC m=+0.124969637 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, version=9.4, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543)
Dec 05 02:28:59 compute-0 sudo[480318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:28:59 compute-0 sudo[480318]: pam_unix(sudo:session): session closed for user root
Dec 05 02:28:59 compute-0 podman[158197]: time="2025-12-05T02:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
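
The two GET lines are the podman system service (pid 158197) answering libpod REST calls over its UNIX socket; given the Go-http-client agent and the podman_exporter config that appears later in this log, the caller is most likely prometheus-podman-exporter. A sketch of issuing the same call by hand, assuming podman's default root socket path:

    # Sketch: the same libpod REST call over the podman socket.
    # The /run/podman/podman.sock path is an assumption (root default).
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))
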
Dec 05 02:29:00 compute-0 nova_compute[349548]: 2025-12-05 02:29:00.136 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:00 compute-0 nova_compute[349548]: 2025-12-05 02:29:00.136 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:00 compute-0 ceph-mon[192914]: pgmap v2403: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:29:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:29:00 compute-0 nova_compute[349548]: 2025-12-05 02:29:00.553 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:01 compute-0 nova_compute[349548]: 2025-12-05 02:29:01.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:01 compute-0 nova_compute[349548]: 2025-12-05 02:29:01.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:01 compute-0 openstack_network_exporter[366555]: ERROR   02:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:29:01 compute-0 openstack_network_exporter[366555]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:29:01 compute-0 openstack_network_exporter[366555]: ERROR   02:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:29:01 compute-0 openstack_network_exporter[366555]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:29:01 compute-0 openstack_network_exporter[366555]: ERROR   02:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:29:02 compute-0 ceph-mon[192914]: pgmap v2404: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:02 compute-0 nova_compute[349548]: 2025-12-05 02:29:02.874 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:04 compute-0 nova_compute[349548]: 2025-12-05 02:29:04.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:04 compute-0 nova_compute[349548]: 2025-12-05 02:29:04.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:04 compute-0 ceph-mon[192914]: pgmap v2405: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:05 compute-0 ceph-mon[192914]: pgmap v2406: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:05 compute-0 nova_compute[349548]: 2025-12-05 02:29:05.559 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:06 compute-0 sshd[114018]: Timeout before authentication for connection from 123.253.22.45 to 38.102.83.176, pid = 470720
Dec 05 02:29:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:07 compute-0 nova_compute[349548]: 2025-12-05 02:29:07.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:07 compute-0 ceph-mon[192914]: pgmap v2407: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:09 compute-0 ceph-mon[192914]: pgmap v2408: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:10 compute-0 nova_compute[349548]: 2025-12-05 02:29:10.564 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:10 compute-0 podman[480367]: 2025-12-05 02:29:10.690584485 +0000 UTC m=+0.086589204 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:29:10 compute-0 podman[480369]: 2025-12-05 02:29:10.728310549 +0000 UTC m=+0.110275640 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 02:29:10 compute-0 podman[480366]: 2025-12-05 02:29:10.732858331 +0000 UTC m=+0.124030759 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Dec 05 02:29:10 compute-0 podman[480368]: 2025-12-05 02:29:10.788461514 +0000 UTC m=+0.164324068 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:29:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:11 compute-0 ceph-mon[192914]: pgmap v2409: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:12 compute-0 nova_compute[349548]: 2025-12-05 02:29:12.881 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:13 compute-0 ceph-mon[192914]: pgmap v2410: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:15 compute-0 nova_compute[349548]: 2025-12-05 02:29:15.569 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:16 compute-0 ceph-mon[192914]: pgmap v2411: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:29:16
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'backups']
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:17 compute-0 ceph-mon[192914]: pgmap v2412: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:17 compute-0 nova_compute[349548]: 2025-12-05 02:29:17.883 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:29:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:19 compute-0 ceph-mon[192914]: pgmap v2413: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:20 compute-0 nova_compute[349548]: 2025-12-05 02:29:20.575 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:21 compute-0 ceph-mon[192914]: pgmap v2414: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:22 compute-0 nova_compute[349548]: 2025-12-05 02:29:22.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:23 compute-0 ceph-mon[192914]: pgmap v2415: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:24 compute-0 podman[480455]: 2025-12-05 02:29:24.699627067 +0000 UTC m=+0.110839666 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 05 02:29:24 compute-0 podman[480456]: 2025-12-05 02:29:24.718290249 +0000 UTC m=+0.118643413 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:29:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:25 compute-0 nova_compute[349548]: 2025-12-05 02:29:25.579 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:26 compute-0 ceph-mon[192914]: pgmap v2416: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:29:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
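
Each "pg target" above is the pool's share of raw space times its bias times the cluster-wide PG budget; with this deployment's 3 OSDs and the default mon_target_pg_per_osd of 100, the multiplier is 300, which reproduces the logged values before the quantization step:

    # Reproducing the autoscaler's raw pg targets from the lines above.
    # Assumes 3 OSDs and the default mon_target_pg_per_osd = 100.
    pg_budget = 100 * 3
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'images':             (0.0009191400908380543, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * pg_budget)
    # .mgr 0.00215..., images 0.27574..., cephfs.cephfs.meta 0.00061...
    # matching the "pg target" values in the log
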
Dec 05 02:29:27 compute-0 nova_compute[349548]: 2025-12-05 02:29:27.892 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:28 compute-0 ceph-mon[192914]: pgmap v2417: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:29 compute-0 podman[158197]: time="2025-12-05T02:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec 05 02:29:30 compute-0 ceph-mon[192914]: pgmap v2418: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:30 compute-0 nova_compute[349548]: 2025-12-05 02:29:30.585 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:30 compute-0 podman[480496]: 2025-12-05 02:29:30.718945301 +0000 UTC m=+0.123124483 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 05 02:29:30 compute-0 podman[480497]: 2025-12-05 02:29:30.723680239 +0000 UTC m=+0.120641822 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, architecture=x86_64, name=ubi9, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Dec 05 02:29:30 compute-0 podman[480498]: 2025-12-05 02:29:30.745412569 +0000 UTC m=+0.136327386 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 05 02:29:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:31 compute-0 openstack_network_exporter[366555]: ERROR   02:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:29:31 compute-0 openstack_network_exporter[366555]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:29:31 compute-0 openstack_network_exporter[366555]: ERROR   02:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:29:31 compute-0 openstack_network_exporter[366555]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:29:31 compute-0 openstack_network_exporter[366555]: ERROR   02:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:29:32 compute-0 ceph-mon[192914]: pgmap v2419: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:32 compute-0 nova_compute[349548]: 2025-12-05 02:29:32.894 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:34 compute-0 ceph-mon[192914]: pgmap v2420: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:35 compute-0 nova_compute[349548]: 2025-12-05 02:29:35.591 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:36 compute-0 ceph-mon[192914]: pgmap v2421: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:37 compute-0 nova_compute[349548]: 2025-12-05 02:29:37.898 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:38 compute-0 ceph-mon[192914]: pgmap v2422: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:40 compute-0 ceph-mon[192914]: pgmap v2423: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:40 compute-0 nova_compute[349548]: 2025-12-05 02:29:40.595 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:41 compute-0 podman[480550]: 2025-12-05 02:29:41.704817893 +0000 UTC m=+0.111848666 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:29:41 compute-0 podman[480553]: 2025-12-05 02:29:41.716281986 +0000 UTC m=+0.098162769 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec 05 02:29:41 compute-0 podman[480551]: 2025-12-05 02:29:41.7477922 +0000 UTC m=+0.145509583 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:29:41 compute-0 podman[480552]: 2025-12-05 02:29:41.79813527 +0000 UTC m=+0.188999854 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
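Each health_status event above is podman's healthcheck timer firing the 'test' command from the container's config_data inside the container and recording the result (health_status=healthy, health_failing_streak=0). The same check can be triggered by hand; a minimal sketch using the container names from the events above:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command; exit status 0 means healthy, 1 means unhealthy.
    for name in ("multipathd", "ovn_controller", "node_exporter"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")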
Dec 05 02:29:42 compute-0 ceph-mon[192914]: pgmap v2424: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:42 compute-0 nova_compute[349548]: 2025-12-05 02:29:42.903 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:44 compute-0 ceph-mon[192914]: pgmap v2425: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:29:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1891236033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:29:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:29:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1891236033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:29:45 compute-0 nova_compute[349548]: 2025-12-05 02:29:45.598 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:46 compute-0 ceph-mon[192914]: pgmap v2426: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1891236033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:29:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1891236033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:29:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:47 compute-0 nova_compute[349548]: 2025-12-05 02:29:47.904 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:48 compute-0 ceph-mon[192914]: pgmap v2427: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:50 compute-0 ceph-mon[192914]: pgmap v2428: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:50 compute-0 nova_compute[349548]: 2025-12-05 02:29:50.604 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:51 compute-0 nova_compute[349548]: 2025-12-05 02:29:51.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:51 compute-0 nova_compute[349548]: 2025-12-05 02:29:51.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:29:51 compute-0 nova_compute[349548]: 2025-12-05 02:29:51.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:29:51 compute-0 nova_compute[349548]: 2025-12-05 02:29:51.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:29:52 compute-0 ceph-mon[192914]: pgmap v2429: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:52 compute-0 nova_compute[349548]: 2025-12-05 02:29:52.907 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:54 compute-0 ceph-mon[192914]: pgmap v2430: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:55 compute-0 nova_compute[349548]: 2025-12-05 02:29:55.608 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:55 compute-0 podman[480636]: 2025-12-05 02:29:55.708188098 +0000 UTC m=+0.103999238 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:29:55 compute-0 podman[480635]: 2025-12-05 02:29:55.739341412 +0000 UTC m=+0.143898946 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:29:56.233 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:29:56.233 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:29:56.233 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
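The three ovn_metadata_agent lines above are the standard oslo.concurrency acquire/acquired/released triple that lockutils emits at DEBUG around ProcessMonitor._check_child_processes. A minimal sketch of the same instrumentation with the real oslo.concurrency API (the function body here is a hypothetical stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; lockutils logs the acquire/held/
        # released lines seen above at DEBUG level.
        pass  # hypothetical stand-in for the monitored work

    # Equivalent explicit form:
    with lockutils.lock("_check_child_processes"):
        pass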
Dec 05 02:29:56 compute-0 ceph-mon[192914]: pgmap v2431: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:57 compute-0 ceph-mon[192914]: pgmap v2432: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:57 compute-0 nova_compute[349548]: 2025-12-05 02:29:57.909 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:29:58 compute-0 nova_compute[349548]: 2025-12-05 02:29:58.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:58 compute-0 nova_compute[349548]: 2025-12-05 02:29:58.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:29:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:29:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.100 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.100 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:29:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:29:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218031717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:29:59 compute-0 nova_compute[349548]: 2025-12-05 02:29:59.567 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
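Nova's resource audit shells out to the ceph CLI (the Running cmd line at 02:29:59.100 above) and parses the JSON to size its RBD-backed disk inventory; the mon audit lines show the same request arriving as a "df" mon_command from client.openstack. The invocation can be reproduced directly; a minimal sketch, assuming the same client id and conf path as logged:

    import json, subprocess

    # Same command line oslo.concurrency logged above; 'stats' carries the
    # cluster-wide totals used for free/total disk on an RBD backend.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])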
Dec 05 02:29:59 compute-0 podman[158197]: time="2025-12-05T02:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec 05 02:29:59 compute-0 sudo[480696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:29:59 compute-0 sudo[480696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:29:59 compute-0 sudo[480696]: pam_unix(sudo:session): session closed for user root
Dec 05 02:29:59 compute-0 sudo[480721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:29:59 compute-0 ceph-mon[192914]: pgmap v2433: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:29:59 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/218031717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:29:59 compute-0 sudo[480721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:29:59 compute-0 sudo[480721]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.119 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.121 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3948MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.122 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:30:00 compute-0 sudo[480746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:00 compute-0 sudo[480746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:00 compute-0 sudo[480746]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.232 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.233 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:30:00 compute-0 sudo[480771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:30:00 compute-0 sudo[480771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.255 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.615 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132553598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.739 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.751 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.785 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
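The inventory line above fixes how much of this host Placement will hand out: usable capacity per resource class is (total - reserved) * allocation_ratio, the usual Placement capacity formula. Worked through with the values logged:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2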
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:30:00 compute-0 nova_compute[349548]: 2025-12-05 02:30:00.788 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:30:00 compute-0 sudo[480771]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3f9c4261-e194-4a03-b86e-b60d63e93e37 does not exist
Dec 05 02:30:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cad85e83-1319-41d9-a481-10dde7055848 does not exist
Dec 05 02:30:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 52a41de5-7c91-4f3c-bd1d-8dd9bcd14374 does not exist
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:30:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/132553598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:30:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:30:01 compute-0 sudo[480850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:01 compute-0 sudo[480850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:01 compute-0 sudo[480850]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:01 compute-0 podman[480876]: 2025-12-05 02:30:01.256682102 +0000 UTC m=+0.117802048 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:30:01 compute-0 podman[480875]: 2025-12-05 02:30:01.26072848 +0000 UTC m=+0.125173353 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, release-0.7.12=)
Dec 05 02:30:01 compute-0 sudo[480893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:30:01 compute-0 sudo[480893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:01 compute-0 sudo[480893]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:01 compute-0 podman[480874]: 2025-12-05 02:30:01.281969146 +0000 UTC m=+0.155632396 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:30:01 compute-0 sudo[480953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:01 compute-0 sudo[480953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:01 compute-0 sudo[480953]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:01 compute-0 openstack_network_exporter[366555]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:30:01 compute-0 openstack_network_exporter[366555]: ERROR   02:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:30:01 compute-0 openstack_network_exporter[366555]: ERROR   02:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:30:01 compute-0 openstack_network_exporter[366555]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:30:01 compute-0 openstack_network_exporter[366555]: ERROR   02:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:30:01 compute-0 sudo[480978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:30:01 compute-0 sudo[480978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
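[annotation] cephadm copies itself to /var/lib/ceph/<fsid>/cephadm.<digest> and is re-invoked under sudo for each orchestrator step; the /bin/true and /bin/which python3 sudo calls just before it are the orchestrator's connection and interpreter probes that precede every real command. Everything after the lone "--" is forwarded to ceph-volume inside the Ceph container, and "--config-json -" makes cephadm read a JSON payload on stdin. A hypothetical re-run of the same invocation from Python, with the argument list reconstructed from the logged COMMAND (not executed here):

import subprocess

FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
CEPHADM = ("/var/lib/ceph/" + FSID +
           "/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

cmd = [
    "sudo", "/bin/python3", CEPHADM,
    "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
    "--image", IMAGE,
    "--timeout", "895",
    "ceph-volume", "--fsid", FSID, "--config-json", "-",
    "--",                                  # separates cephadm args from ceph-volume args
    "lvm", "batch", "--no-auto",
    "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
    "--yes", "--no-systemd",
]
# subprocess.run(cmd, input=config_json, text=True, check=True)
# config_json: the cluster config/keyring payload cephadm expects on stdin (assumption).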
Dec 05 02:30:01 compute-0 nova_compute[349548]: 2025-12-05 02:30:01.790 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:01 compute-0 nova_compute[349548]: 2025-12-05 02:30:01.790 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
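[annotation] These DEBUG lines come from oslo.service's periodic task machinery: ComputeManager methods are registered as periodic tasks and run_periodic_tasks dispatches them from the service loop. A minimal sketch of how such a task is declared, assuming oslo.service's documented decorator API (spacing in seconds):

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    # Analogous to ComputeManager._poll_rescued_instances above.
    @periodic_task.periodic_task(spacing=60)
    def _poll_something(self, context):
        print("periodic tick")

manager = DemoManager()
# The service loop calls this repeatedly; tasks that are due get dispatched each pass.
manager.run_periodic_tasks(context=None)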
Dec 05 02:30:02 compute-0 ceph-mon[192914]: pgmap v2434: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.145314563 +0000 UTC m=+0.109890679 container create fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.108491435 +0000 UTC m=+0.073067601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:02 compute-0 systemd[1]: Started libpod-conmon-fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea.scope.
Dec 05 02:30:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.298791726 +0000 UTC m=+0.263367892 container init fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.315011486 +0000 UTC m=+0.279587592 container start fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.321474094 +0000 UTC m=+0.286050260 container attach fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 05 02:30:02 compute-0 busy_turing[481058]: 167 167
Dec 05 02:30:02 compute-0 systemd[1]: libpod-fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea.scope: Deactivated successfully.
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.328497137 +0000 UTC m=+0.293073253 container died fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b5744fc638701e142c219a19294597094e24b09f62cfee71731e398c4076d6d-merged.mount: Deactivated successfully.
Dec 05 02:30:02 compute-0 podman[481042]: 2025-12-05 02:30:02.403644958 +0000 UTC m=+0.368221074 container remove fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 05 02:30:02 compute-0 systemd[1]: libpod-conmon-fcf478b12ae48d063d5b352b5a073a2657a1f18ab67dd0c31966fefcdad292ea.scope: Deactivated successfully.
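[annotation] The busy_turing helper above walks the full short-lived container lifecycle that podman records: image pull, create, init, start, attach, died, remove, with systemd tearing down the matching libpod scopes. Its only output, "167 167", is cephadm probing the uid and gid of the ceph user inside the image (167:167 in upstream Ceph images) so host paths can be chowned to match. A sketch for tailing the same lifecycle events, assuming podman's "events --format json" stream and its usual Name/Status/time fields:

import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for raw in proc.stdout:          # one JSON object per line, one line per event
    ev = json.loads(raw)
    if ev.get("Status") in {"create", "init", "start", "attach", "died", "remove"}:
        print(ev.get("time"), ev.get("Name"), ev.get("Status"))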
Dec 05 02:30:02 compute-0 podman[481080]: 2025-12-05 02:30:02.694040363 +0000 UTC m=+0.089200649 container create 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:30:02 compute-0 podman[481080]: 2025-12-05 02:30:02.662459317 +0000 UTC m=+0.057619643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:02 compute-0 systemd[1]: Started libpod-conmon-63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918.scope.
Dec 05 02:30:02 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
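[annotation] The repeated "supports timestamps until 2038 (0x7fffffff)" lines are the kernel's y2038 warning for XFS filesystems using 32-bit on-disk inode timestamps (XFS built with the bigtime feature raises the limit to the year 2486). The arithmetic behind the printed value:

from datetime import datetime, timezone

limit = 0x7FFFFFFF                       # value printed by the kernel
print(limit)                             # 2147483647 seconds since the epoch
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -> "supports timestamps until 2038"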
Dec 05 02:30:02 compute-0 podman[481080]: 2025-12-05 02:30:02.840807551 +0000 UTC m=+0.235967847 container init 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:30:02 compute-0 podman[481080]: 2025-12-05 02:30:02.853256832 +0000 UTC m=+0.248417088 container start 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:30:02 compute-0 podman[481080]: 2025-12-05 02:30:02.858533605 +0000 UTC m=+0.253693961 container attach 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:30:02 compute-0 nova_compute[349548]: 2025-12-05 02:30:02.912 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:03 compute-0 nova_compute[349548]: 2025-12-05 02:30:03.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:03 compute-0 nova_compute[349548]: 2025-12-05 02:30:03.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:04 compute-0 ceph-mon[192914]: pgmap v2435: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:04 compute-0 nostalgic_jackson[481096]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:30:04 compute-0 nostalgic_jackson[481096]: --> relative data size: 1.0
Dec 05 02:30:04 compute-0 nostalgic_jackson[481096]: --> All data devices are unavailable
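[annotation] "All data devices are unavailable" is the idempotent path here, not a failure: the three LVs passed to lvm batch already carry ceph lv_tags from an earlier prepare (the lvm list output below shows them bound to OSDs 0-2), so batch has nothing left to create. A sketch of the same availability test, assuming lvs' JSON reporting (--reportformat json) and treating any LV whose tags include ceph.osd_id as already consumed:

import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
    capture_output=True, text=True, check=True,
).stdout
lvs = json.loads(out)["report"][0]["lv"]

available = [lv["lv_path"] for lv in lvs if "ceph.osd_id" not in lv["lv_tags"]]
print(available or "none -> 'All data devices are unavailable'")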
Dec 05 02:30:04 compute-0 systemd[1]: libpod-63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918.scope: Deactivated successfully.
Dec 05 02:30:04 compute-0 podman[481080]: 2025-12-05 02:30:04.107754428 +0000 UTC m=+1.502914734 container died 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:30:04 compute-0 systemd[1]: libpod-63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918.scope: Consumed 1.209s CPU time.
Dec 05 02:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9d9df1f8aae4ef288425960154a3905c65040fc36a3d87b5de9171cd1e9d135-merged.mount: Deactivated successfully.
Dec 05 02:30:04 compute-0 podman[481080]: 2025-12-05 02:30:04.212462856 +0000 UTC m=+1.607623112 container remove 63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 05 02:30:04 compute-0 systemd[1]: libpod-conmon-63e1d73ff9bd65e35281c0c6d314a65b32fcd135cab87d6430e5727dcb874918.scope: Deactivated successfully.
Dec 05 02:30:04 compute-0 sudo[480978]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:04 compute-0 sudo[481139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:04 compute-0 sudo[481139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:04 compute-0 sudo[481139]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:04 compute-0 sudo[481164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:30:04 compute-0 sudo[481164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:04 compute-0 sudo[481164]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:04 compute-0 sudo[481189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:04 compute-0 sudo[481189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:04 compute-0 sudo[481189]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:04 compute-0 sudo[481214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:30:04 compute-0 sudo[481214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:05 compute-0 nova_compute[349548]: 2025-12-05 02:30:05.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:05 compute-0 nova_compute[349548]: 2025-12-05 02:30:05.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.405289352 +0000 UTC m=+0.091956419 container create 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.370152912 +0000 UTC m=+0.056820029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:05 compute-0 systemd[1]: Started libpod-conmon-34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97.scope.
Dec 05 02:30:05 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.561814633 +0000 UTC m=+0.248481750 container init 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.580062242 +0000 UTC m=+0.266729319 container start 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.587335293 +0000 UTC m=+0.274002410 container attach 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:30:05 compute-0 charming_agnesi[481292]: 167 167
Dec 05 02:30:05 compute-0 systemd[1]: libpod-34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97.scope: Deactivated successfully.
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.592186274 +0000 UTC m=+0.278853351 container died 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:30:05 compute-0 nova_compute[349548]: 2025-12-05 02:30:05.621 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b5f01f466dba2407d415e3c559e97244f714206126160b28b83ee95aae8e32-merged.mount: Deactivated successfully.
Dec 05 02:30:05 compute-0 podman[481277]: 2025-12-05 02:30:05.669044244 +0000 UTC m=+0.355711291 container remove 34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:30:05 compute-0 systemd[1]: libpod-conmon-34e314879ca126e6761404f3f300c6f159cfba0f1e3544036673a84e5d34be97.scope: Deactivated successfully.
Dec 05 02:30:05 compute-0 podman[481314]: 2025-12-05 02:30:05.936847033 +0000 UTC m=+0.084622696 container create bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:30:05 compute-0 podman[481314]: 2025-12-05 02:30:05.906771441 +0000 UTC m=+0.054547164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:06 compute-0 systemd[1]: Started libpod-conmon-bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7.scope.
Dec 05 02:30:06 compute-0 ceph-mon[192914]: pgmap v2436: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d25ba4a5c82497985223c7b637b2052f8be7fca3458af425d29ab62981f4ae8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d25ba4a5c82497985223c7b637b2052f8be7fca3458af425d29ab62981f4ae8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d25ba4a5c82497985223c7b637b2052f8be7fca3458af425d29ab62981f4ae8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d25ba4a5c82497985223c7b637b2052f8be7fca3458af425d29ab62981f4ae8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:06 compute-0 nova_compute[349548]: 2025-12-05 02:30:06.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:06 compute-0 podman[481314]: 2025-12-05 02:30:06.08664873 +0000 UTC m=+0.234424463 container init bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:30:06 compute-0 podman[481314]: 2025-12-05 02:30:06.104657232 +0000 UTC m=+0.252432895 container start bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:30:06 compute-0 podman[481314]: 2025-12-05 02:30:06.110251634 +0000 UTC m=+0.258027317 container attach bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:30:06 compute-0 wonderful_wu[481330]: {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     "0": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "devices": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "/dev/loop3"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             ],
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_name": "ceph_lv0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_size": "21470642176",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "name": "ceph_lv0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "tags": {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_name": "ceph",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.crush_device_class": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.encrypted": "0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_id": "0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.vdo": "0"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             },
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "vg_name": "ceph_vg0"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         }
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     ],
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     "1": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "devices": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "/dev/loop4"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             ],
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_name": "ceph_lv1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_size": "21470642176",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "name": "ceph_lv1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "tags": {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_name": "ceph",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.crush_device_class": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.encrypted": "0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_id": "1",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.vdo": "0"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             },
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "vg_name": "ceph_vg1"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         }
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     ],
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     "2": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "devices": [
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "/dev/loop5"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             ],
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_name": "ceph_lv2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_size": "21470642176",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "name": "ceph_lv2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "tags": {
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.cluster_name": "ceph",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.crush_device_class": "",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.encrypted": "0",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osd_id": "2",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:                 "ceph.vdo": "0"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             },
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "type": "block",
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:             "vg_name": "ceph_vg2"
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:         }
Dec 05 02:30:06 compute-0 wonderful_wu[481330]:     ]
Dec 05 02:30:06 compute-0 wonderful_wu[481330]: }
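[annotation] The lvm list --format json payload above maps each OSD id to exactly one LV record. A short consumer, assuming the payload was captured to lvm_list.json; it also checks the capacity math against the pgmap lines (3 x 21470642176 bytes is about 60 GiB, matching "60 GiB / 60 GiB avail"):

import json

with open("lvm_list.json") as fh:        # assumption: payload saved to a file
    report = json.load(fh)

total = 0
for osd_id, records in sorted(report.items(), key=lambda kv: int(kv[0])):
    lv = records[0]                      # one record per OSD in this deployment
    total += int(lv["lv_size"])
    print(f"osd.{osd_id}: {lv['lv_path']} backed by {lv['devices'][0]}")

print(f"total {total} bytes ~ {total / 2**30:.1f} GiB")
# osd.0: /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3
# ...
# total 64411926528 bytes ~ 60.0 GiB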
Dec 05 02:30:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:06 compute-0 systemd[1]: libpod-bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7.scope: Deactivated successfully.
Dec 05 02:30:06 compute-0 podman[481314]: 2025-12-05 02:30:06.963413557 +0000 UTC m=+1.111189240 container died bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 02:30:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d25ba4a5c82497985223c7b637b2052f8be7fca3458af425d29ab62981f4ae8-merged.mount: Deactivated successfully.
Dec 05 02:30:07 compute-0 podman[481314]: 2025-12-05 02:30:07.080225976 +0000 UTC m=+1.228001619 container remove bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:30:07 compute-0 systemd[1]: libpod-conmon-bcfa69373ebbaa470e7268d441e69d2debbd7fac5b7fc9d57a74e3dd2f5a4dd7.scope: Deactivated successfully.
Dec 05 02:30:07 compute-0 sudo[481214]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:07 compute-0 sudo[481349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:07 compute-0 sudo[481349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:07 compute-0 sudo[481349]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:07 compute-0 sudo[481374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:30:07 compute-0 sudo[481374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:07 compute-0 sudo[481374]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:07 compute-0 sudo[481399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:07 compute-0 sudo[481399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:07 compute-0 sudo[481399]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:07 compute-0 sudo[481424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:30:07 compute-0 sudo[481424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:07 compute-0 nova_compute[349548]: 2025-12-05 02:30:07.915 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:08 compute-0 ceph-mon[192914]: pgmap v2437: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.284855955 +0000 UTC m=+0.093778662 container create 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.253208497 +0000 UTC m=+0.062131304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:08 compute-0 systemd[1]: Started libpod-conmon-12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f.scope.
Dec 05 02:30:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.433028894 +0000 UTC m=+0.241951651 container init 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.45116887 +0000 UTC m=+0.260091617 container start 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.45769909 +0000 UTC m=+0.266621847 container attach 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:30:08 compute-0 hungry_brown[481502]: 167 167
Dec 05 02:30:08 compute-0 systemd[1]: libpod-12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f.scope: Deactivated successfully.
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.465256959 +0000 UTC m=+0.274179696 container died 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:30:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60854eb11ec9c5dfcb33ecf3426da7764c2a1ab948ff513b6ce66de421b94b3-merged.mount: Deactivated successfully.
Dec 05 02:30:08 compute-0 podman[481486]: 2025-12-05 02:30:08.541204142 +0000 UTC m=+0.350126859 container remove 12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:30:08 compute-0 systemd[1]: libpod-conmon-12412c6506206c369c5ed0253a9ea51767bbaa2e88cabc22dd04482bb3061d0f.scope: Deactivated successfully.
Dec 05 02:30:08 compute-0 podman[481524]: 2025-12-05 02:30:08.832162453 +0000 UTC m=+0.099168778 container create 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 02:30:08 compute-0 podman[481524]: 2025-12-05 02:30:08.795128628 +0000 UTC m=+0.062135003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:30:08 compute-0 systemd[1]: Started libpod-conmon-23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7.scope.
Dec 05 02:30:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:08 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fb7c8959636fca7201855df1dc192f4b1916f2b9ebb464aa3b8271044920f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fb7c8959636fca7201855df1dc192f4b1916f2b9ebb464aa3b8271044920f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fb7c8959636fca7201855df1dc192f4b1916f2b9ebb464aa3b8271044920f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fb7c8959636fca7201855df1dc192f4b1916f2b9ebb464aa3b8271044920f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:30:09 compute-0 podman[481524]: 2025-12-05 02:30:09.011073173 +0000 UTC m=+0.278079528 container init 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:30:09 compute-0 podman[481524]: 2025-12-05 02:30:09.031594549 +0000 UTC m=+0.298600874 container start 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 02:30:09 compute-0 podman[481524]: 2025-12-05 02:30:09.038048186 +0000 UTC m=+0.305054511 container attach 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 05 02:30:10 compute-0 ceph-mon[192914]: pgmap v2438: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]: {
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_id": 0,
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "type": "bluestore"
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     },
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_id": 1,
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "type": "bluestore"
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     },
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_id": 2,
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:         "type": "bluestore"
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]:     }
Dec 05 02:30:10 compute-0 dreamy_visvesvaraya[481540]: }
Dec 05 02:30:10 compute-0 systemd[1]: libpod-23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7.scope: Deactivated successfully.
Dec 05 02:30:10 compute-0 systemd[1]: libpod-23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7.scope: Consumed 1.105s CPU time.
Dec 05 02:30:10 compute-0 podman[481524]: 2025-12-05 02:30:10.1346203 +0000 UTC m=+1.401626645 container died 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-53fb7c8959636fca7201855df1dc192f4b1916f2b9ebb464aa3b8271044920f3-merged.mount: Deactivated successfully.
Dec 05 02:30:10 compute-0 podman[481524]: 2025-12-05 02:30:10.237241077 +0000 UTC m=+1.504247372 container remove 23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:30:10 compute-0 systemd[1]: libpod-conmon-23640fa48411264a52c90fbfdb745f8a8c81cadea234e8dd9f024902de57a4b7.scope: Deactivated successfully.
Dec 05 02:30:10 compute-0 sudo[481424]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:30:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:30:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fc64d14d-cbc6-4c47-a6b3-9e0672640c6f does not exist
Dec 05 02:30:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 53260426-2ccc-4cd1-bcd5-89571dfaea64 does not exist
Dec 05 02:30:10 compute-0 sudo[481585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:30:10 compute-0 sudo[481585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:10 compute-0 sudo[481585]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:10 compute-0 sudo[481610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:30:10 compute-0 sudo[481610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:30:10 compute-0 sudo[481610]: pam_unix(sudo:session): session closed for user root
Dec 05 02:30:10 compute-0 nova_compute[349548]: 2025-12-05 02:30:10.627 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:30:12 compute-0 ceph-mon[192914]: pgmap v2439: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:12 compute-0 podman[481638]: 2025-12-05 02:30:12.721586572 +0000 UTC m=+0.112193546 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 02:30:12 compute-0 podman[481636]: 2025-12-05 02:30:12.736803814 +0000 UTC m=+0.135071830 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:30:12 compute-0 podman[481635]: 2025-12-05 02:30:12.753993062 +0000 UTC m=+0.160936980 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:30:12 compute-0 podman[481637]: 2025-12-05 02:30:12.76493763 +0000 UTC m=+0.160921040 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 05 02:30:12 compute-0 nova_compute[349548]: 2025-12-05 02:30:12.918 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:14 compute-0 ceph-mon[192914]: pgmap v2440: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:15 compute-0 nova_compute[349548]: 2025-12-05 02:30:15.634 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:16 compute-0 ceph-mon[192914]: pgmap v2441: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:30:16
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta']
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:17 compute-0 ceph-mon[192914]: pgmap v2442: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:17 compute-0 nova_compute[349548]: 2025-12-05 02:30:17.921 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:30:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:19 compute-0 ceph-mon[192914]: pgmap v2443: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:20 compute-0 nova_compute[349548]: 2025-12-05 02:30:20.639 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:22 compute-0 ceph-mon[192914]: pgmap v2444: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:22 compute-0 nova_compute[349548]: 2025-12-05 02:30:22.924 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:24 compute-0 ceph-mon[192914]: pgmap v2445: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:25 compute-0 nova_compute[349548]: 2025-12-05 02:30:25.644 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:26 compute-0 ceph-mon[192914]: pgmap v2446: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:26 compute-0 podman[481720]: 2025-12-05 02:30:26.711491031 +0000 UTC m=+0.112109483 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:30:26 compute-0 podman[481719]: 2025-12-05 02:30:26.712423719 +0000 UTC m=+0.117058058 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:30:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:30:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:30:27 compute-0 nova_compute[349548]: 2025-12-05 02:30:27.927 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:28 compute-0 ceph-mon[192914]: pgmap v2447: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:29 compute-0 podman[158197]: time="2025-12-05T02:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8213 "" "Go-http-client/1.1"
Dec 05 02:30:30 compute-0 ceph-mon[192914]: pgmap v2448: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:30 compute-0 nova_compute[349548]: 2025-12-05 02:30:30.648 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:31 compute-0 openstack_network_exporter[366555]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:30:31 compute-0 openstack_network_exporter[366555]: ERROR   02:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:30:31 compute-0 openstack_network_exporter[366555]: ERROR   02:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:30:31 compute-0 openstack_network_exporter[366555]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:30:31 compute-0 openstack_network_exporter[366555]: ERROR   02:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:30:31 compute-0 podman[481761]: 2025-12-05 02:30:31.721783942 +0000 UTC m=+0.111906327 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 02:30:31 compute-0 podman[481760]: 2025-12-05 02:30:31.721636018 +0000 UTC m=+0.114998627 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Dec 05 02:30:31 compute-0 podman[481759]: 2025-12-05 02:30:31.745428518 +0000 UTC m=+0.150868198 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 05 02:30:32 compute-0 ceph-mon[192914]: pgmap v2449: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:32 compute-0 nova_compute[349548]: 2025-12-05 02:30:32.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:34 compute-0 ceph-mon[192914]: pgmap v2450: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:35 compute-0 nova_compute[349548]: 2025-12-05 02:30:35.654 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:36 compute-0 ceph-mon[192914]: pgmap v2451: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:37 compute-0 nova_compute[349548]: 2025-12-05 02:30:37.932 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:38 compute-0 ceph-mon[192914]: pgmap v2452: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.330 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.330 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
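[annotation] Every pollster in the cycle above follows the same two-step pattern logged at manager.py:294 and manager.py:321: run the 'local_instances' discovery first, then skip when it returns nothing. A hedged paraphrase (function and argument names are illustrative, not Ceilometer's):

    def run_pollster(name, discover, poll):
        # discovery runs first; an empty resource list means the pollster
        # is skipped for this cycle, exactly as the log lines above show
        resources = discover()
        if not resources:
            print(f'Skip pollster {name}, no resources found this cycle')
            return []
        return [poll(r) for r in resources]

    # with no instances on this node, every pollster takes the skip branch:
    run_pollster('cpu', discover=lambda: [], poll=lambda r: r)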
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:30:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:30:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:40 compute-0 ceph-mon[192914]: pgmap v2453: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:40 compute-0 nova_compute[349548]: 2025-12-05 02:30:40.660 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:42 compute-0 ceph-mon[192914]: pgmap v2454: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:42 compute-0 nova_compute[349548]: 2025-12-05 02:30:42.934 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:43 compute-0 podman[481816]: 2025-12-05 02:30:43.703487089 +0000 UTC m=+0.112167105 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 05 02:30:43 compute-0 podman[481824]: 2025-12-05 02:30:43.723321785 +0000 UTC m=+0.105082650 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc.)
Dec 05 02:30:43 compute-0 podman[481817]: 2025-12-05 02:30:43.729630758 +0000 UTC m=+0.133202016 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:30:43 compute-0 podman[481818]: 2025-12-05 02:30:43.766755015 +0000 UTC m=+0.156414909 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
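[annotation] The four health_status entries above are podman healthcheck events (health_status=healthy, health_failing_streak=0). The same state can be read back with podman inspect; a small sketch using the container names from the log (requires podman access on the host):

    import json
    import subprocess

    # .State.Health carries Status, FailingStreak and the healthcheck Log
    for name in ('multipathd', 'openstack_network_exporter',
                 'node_exporter', 'ovn_controller'):
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{json .State.Health}}', name],
            capture_output=True, text=True, check=True).stdout
        health = json.loads(out)
        print(name, health.get('Status'),
              'failing_streak =', health.get('FailingStreak'))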
Dec 05 02:30:44 compute-0 ceph-mon[192914]: pgmap v2455: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:30:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206836853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:30:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:30:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206836853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:30:45 compute-0 nova_compute[349548]: 2025-12-05 02:30:45.665 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:46 compute-0 ceph-mon[192914]: pgmap v2456: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4206836853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:30:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4206836853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:30:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:47 compute-0 nova_compute[349548]: 2025-12-05 02:30:47.937 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:48 compute-0 ceph-mon[192914]: pgmap v2457: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:49 compute-0 nova_compute[349548]: 2025-12-05 02:30:49.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:49 compute-0 nova_compute[349548]: 2025-12-05 02:30:49.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:30:49 compute-0 nova_compute[349548]: 2025-12-05 02:30:49.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
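[annotation] The _run_pending_deletes lines above come from oslo.service's periodic-task machinery. A minimal sketch of the same mechanism; the spacing value and run_immediately flag are assumptions for the demo, not nova's settings:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_immediately=True makes the first run_periodic_tasks() call
        # execute the task instead of waiting out the spacing interval
        @periodic_task.periodic_task(spacing=600, run_immediately=True)
        def _run_pending_deletes(self, context):
            print('Cleaning up deleted instances')

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)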
Dec 05 02:30:50 compute-0 ceph-mon[192914]: pgmap v2458: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:50 compute-0 nova_compute[349548]: 2025-12-05 02:30:50.671 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:52 compute-0 ceph-mon[192914]: pgmap v2459: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:52 compute-0 nova_compute[349548]: 2025-12-05 02:30:52.939 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:53 compute-0 nova_compute[349548]: 2025-12-05 02:30:53.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:53 compute-0 nova_compute[349548]: 2025-12-05 02:30:53.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:30:53 compute-0 nova_compute[349548]: 2025-12-05 02:30:53.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:30:53 compute-0 nova_compute[349548]: 2025-12-05 02:30:53.106 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:30:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:54 compute-0 ceph-mon[192914]: pgmap v2460: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:55 compute-0 nova_compute[349548]: 2025-12-05 02:30:55.676 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:30:56.234 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:30:56.234 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:30:56.235 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
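[annotation] The acquire/release pair above (waited 0.001s, held 0.000s) is oslo.concurrency's synchronized decorator around ProcessMonitor._check_child_processes. A minimal equivalent, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    # default is an in-process semaphore lock; the decorator emits the
    # same "Acquiring"/"acquired"/"released" DEBUG lines seen above
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        print('checking child processes while holding the lock')

    _check_child_processes()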
Dec 05 02:30:56 compute-0 ceph-mon[192914]: pgmap v2461: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:57 compute-0 podman[481896]: 2025-12-05 02:30:57.68076227 +0000 UTC m=+0.095439000 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:30:57 compute-0 podman[481897]: 2025-12-05 02:30:57.701865383 +0000 UTC m=+0.102189536 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:30:57 compute-0 nova_compute[349548]: 2025-12-05 02:30:57.942 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:30:58 compute-0 ceph-mon[192914]: pgmap v2462: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:30:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:30:59 compute-0 nova_compute[349548]: 2025-12-05 02:30:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:30:59 compute-0 nova_compute[349548]: 2025-12-05 02:30:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:30:59 compute-0 rsyslogd[188644]: imjournal: 16408 messages lost due to rate-limiting (20000 allowed within 600 seconds)
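[annotation] The figures in the rsyslog message match imjournal's default rate limit (burst of 20000 messages per 600-second interval), which this node's DEBUG volume exceeds. If that volume is expected, the limit can be raised in rsyslog configuration; a sketch with an illustrative burst value (imjournal can be loaded only once, so these parameters belong on the existing module statement):

    # sketch for /etc/rsyslog.conf or a drop-in; burst value illustrative
    module(load="imjournal"
           StateFile="imjournal.state"
           Ratelimit.Interval="600"
           Ratelimit.Burst="50000")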
Dec 05 02:30:59 compute-0 podman[158197]: time="2025-12-05T02:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.106 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:31:00 compute-0 ceph-mon[192914]: pgmap v2463: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:31:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901075375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.582 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
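[annotation] The "Running cmd (subprocess)" / "returned: 0 in 0.476s" pair above is oslo.concurrency's processutils.execute running ceph df for the resource tracker. The equivalent call, with arguments copied from the log line:

    from oslo_concurrency import processutils

    # returns (stdout, stderr); raises ProcessExecutionError on non-zero exit
    out, err = processutils.execute('ceph', 'df', '--format=json',
                                    '--id', 'openstack',
                                    '--conf', '/etc/ceph/ceph.conf')
    print(out[:120])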
Dec 05 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.682 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.110 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.111 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.112 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.112 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.194 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.195 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:31:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2901075375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
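[annotation] The exporter errors above mean only that no ovn-northd or ovsdb-server control sockets exist where it looks; that is expected on a compute node, where ovn-northd does not run (per the ovn_controller config earlier, the host's /var/lib/openvswitch/ovn is what containers see as /run/ovn). A quick probe, with socket paths that are typical defaults and therefore assumptions:

    import glob

    # an empty result for the northd pattern reproduces the
    # "no control socket files found" condition the exporter reports
    for pattern in ('/var/lib/openvswitch/ovn/ovn-northd.*.ctl',
                    '/run/openvswitch/ovsdb-server.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none found')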
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.464 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:31:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:31:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306036107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.994 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
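The two nova_compute lines above show the resource tracker shelling out to `ceph df` to size the RBD-backed disk pool; the exact command is in the log. A minimal sketch that replays the same probe and reads the cluster totals, assuming /etc/ceph/ceph.conf and the client.openstack keyring from the log are present on the node:

    # Replay the capacity probe logged above and read the cluster totals.
    # Assumes the conf file and client.openstack keyring exist locally; the
    # "stats" field names follow the usual `ceph df --format=json` output.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1))
    print("total GiB:", round(stats["total_bytes"] / 2**30, 1))

On this node the result should line up with the nearby pgmap lines ("60 GiB / 60 GiB avail").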
Dec 05 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.008 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.030 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
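The inventory dict in the line above is what placement actually schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. A quick check with the logged numbers:

    # Effective schedulable capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

So the 8 physical vCPUs reported in the final resource view advertise as 32 schedulable ones under the 4.0 ratio.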
Dec 05 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.034 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.035 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
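The acquire/release pair bracketing _update_available_resource (waited 0.001s, held 0.923s) is oslo.concurrency's standard locking pattern; the timings are emitted by lockutils itself. A minimal sketch of the same pattern, assuming the public lockutils API and a process-local lock for brevity:

    # Sketch of the locking pattern behind the "Acquiring lock" / "acquired ::
    # waited" / "released :: held" lines above.
    from oslo_concurrency import lockutils

    def update_available_resource():
        with lockutils.lock("compute_resources"):
            pass  # resource-tracker work runs while held (~0.9s in the log)

    update_available_resource()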
Dec 05 02:31:02 compute-0 ceph-mon[192914]: pgmap v2464: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2306036107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:31:02 compute-0 podman[481981]: 2025-12-05 02:31:02.72830144 +0000 UTC m=+0.130585550 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:31:02 compute-0 podman[481980]: 2025-12-05 02:31:02.743029627 +0000 UTC m=+0.151250769 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:31:02 compute-0 podman[481982]: 2025-12-05 02:31:02.750806993 +0000 UTC m=+0.146297136 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 05 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.945 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:04 compute-0 nova_compute[349548]: 2025-12-05 02:31:04.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:04 compute-0 ceph-mon[192914]: pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.358711) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865358791, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 251, "total_data_size": 1961945, "memory_usage": 1988416, "flush_reason": "Manual Compaction"}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865376814, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 1932375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49740, "largest_seqno": 51003, "table_properties": {"data_size": 1926332, "index_size": 3374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12570, "raw_average_key_size": 19, "raw_value_size": 1914262, "raw_average_value_size": 3014, "num_data_blocks": 152, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901734, "oldest_key_time": 1764901734, "file_creation_time": 1764901865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 18294 microseconds, and 9720 cpu microseconds.
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.377014) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 1932375 bytes OK
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.377039) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379449) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379469) EVENT_LOG_v1 {"time_micros": 1764901865379462, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1956254, prev total WAL file size 1956254, number of live WAL files 2.
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.380831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(1887KB)], [119(7260KB)]
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865380945, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9367129, "oldest_snapshot_seqno": -1}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6513 keys, 7619864 bytes, temperature: kUnknown
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865443817, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7619864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7580480, "index_size": 21994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170134, "raw_average_key_size": 26, "raw_value_size": 7466898, "raw_average_value_size": 1146, "num_data_blocks": 868, "num_entries": 6513, "num_filter_entries": 6513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.444207) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7619864 bytes
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.446610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.8) write-amplify(3.9) OK, records in: 7027, records dropped: 514 output_compression: NoCompression
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.446639) EVENT_LOG_v1 {"time_micros": 1764901865446625, "job": 72, "event": "compaction_finished", "compaction_time_micros": 63057, "compaction_time_cpu_micros": 36941, "output_level": 6, "num_output_files": 1, "total_output_size": 7619864, "num_input_records": 7027, "num_output_records": 6513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
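The amplification figures in the compaction summary follow directly from the byte counts logged for job 72: write-amplify is output bytes over level-0 input bytes, and read-write-amplify adds everything read and written over the same denominator. Checking with the values from the lines above:

    # Check job 72's amplification factors against its logged byte counts.
    l0_in = 1_932_375            # table #121, the level-0 flush
    l6_in = 9_367_129 - l0_in    # table #119, from input_data_size
    out   = 7_619_864            # table #122, the compaction output
    print(round(out / l0_in, 1))                    # 3.9 -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 8.8 -> read-write-amplify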
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865447401, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865450250, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.380524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:31:05 compute-0 nova_compute[349548]: 2025-12-05 02:31:05.688 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:06 compute-0 nova_compute[349548]: 2025-12-05 02:31:06.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:06 compute-0 ceph-mon[192914]: pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:08 compute-0 ceph-mon[192914]: pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:10 compute-0 ceph-mon[192914]: pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:10 compute-0 nova_compute[349548]: 2025-12-05 02:31:10.692 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:10 compute-0 sudo[482036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:10 compute-0 sudo[482036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:10 compute-0 sudo[482036]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:10 compute-0 sudo[482061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:31:10 compute-0 sudo[482061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:10 compute-0 sudo[482061]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:11 compute-0 sudo[482086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:11 compute-0 sudo[482086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:11 compute-0 sudo[482086]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:11 compute-0 sudo[482111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:31:11 compute-0 sudo[482111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:11 compute-0 sudo[482111]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1fc2076e-c47b-412a-8e68-152c6122c378 does not exist
Dec 05 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8580bfce-e54b-4c0f-ab3f-5b189256300c does not exist
Dec 05 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cd946155-1d0a-47d8-989f-c67d982be91f does not exist
Dec 05 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:31:12 compute-0 sudo[482166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:12 compute-0 sudo[482166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:12 compute-0 sudo[482166]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:12 compute-0 sudo[482191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:31:12 compute-0 sudo[482191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:12 compute-0 sudo[482191]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:12 compute-0 ceph-mon[192914]: pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:31:12 compute-0 sudo[482216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:12 compute-0 sudo[482216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:12 compute-0 sudo[482216]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:12 compute-0 sudo[482241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:31:12 compute-0 sudo[482241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:12 compute-0 nova_compute[349548]: 2025-12-05 02:31:12.951 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.100207262 +0000 UTC m=+0.080906517 container create 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.070168571 +0000 UTC m=+0.050867886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:13 compute-0 systemd[1]: Started libpod-conmon-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope.
Dec 05 02:31:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.259874775 +0000 UTC m=+0.240574100 container init 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.280425021 +0000 UTC m=+0.261124286 container start 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.286529958 +0000 UTC m=+0.267229193 container attach 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:31:13 compute-0 admiring_thompson[482318]: 167 167
Dec 05 02:31:13 compute-0 systemd[1]: libpod-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope: Deactivated successfully.
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.295402025 +0000 UTC m=+0.276101350 container died 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a8fe0ec7e4a8d98f154c6cdcc511542ed65e63d22835e1a8e6fe7761a41862-merged.mount: Deactivated successfully.
Dec 05 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.372096011 +0000 UTC m=+0.352795226 container remove 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 05 02:31:13 compute-0 systemd[1]: libpod-conmon-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope: Deactivated successfully.
Dec 05 02:31:13 compute-0 ceph-mon[192914]: pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.663233877 +0000 UTC m=+0.094122942 container create b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.625701858 +0000 UTC m=+0.056590973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:13 compute-0 systemd[1]: Started libpod-conmon-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope.
Dec 05 02:31:13 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.830308955 +0000 UTC m=+0.261198040 container init b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.855877476 +0000 UTC m=+0.286766521 container start b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.862465257 +0000 UTC m=+0.293354342 container attach b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:31:13 compute-0 podman[482357]: 2025-12-05 02:31:13.896153785 +0000 UTC m=+0.120768975 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 05 02:31:13 compute-0 podman[482360]: 2025-12-05 02:31:13.911099708 +0000 UTC m=+0.133310898 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public)
Dec 05 02:31:13 compute-0 podman[482361]: 2025-12-05 02:31:13.936680901 +0000 UTC m=+0.148493320 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:31:13 compute-0 podman[482378]: 2025-12-05 02:31:13.943769596 +0000 UTC m=+0.134427891 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
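The burst of container health_status events above comes from podman's per-container healthcheck timers running each container's configured "test" command. A minimal sketch of triggering one check by hand and reading the stored status, assuming the standard podman CLI (the Go-template field is .State.Health.Status on recent podman; older releases used .State.Healthcheck.Status):

    # Run one container health check by hand and read the recorded status.
    import subprocess

    name = "ovn_controller"  # container name taken from the log above
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
    )
    print(status.decode().strip())  # expected: "healthy", as logged above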
Dec 05 02:31:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:15 compute-0 nova_compute[349548]: 2025-12-05 02:31:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:15 compute-0 festive_ellis[482358]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:31:15 compute-0 festive_ellis[482358]: --> relative data size: 1.0
Dec 05 02:31:15 compute-0 festive_ellis[482358]: --> All data devices are unavailable
Dec 05 02:31:15 compute-0 systemd[1]: libpod-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Deactivated successfully.
Dec 05 02:31:15 compute-0 podman[482341]: 2025-12-05 02:31:15.144937285 +0000 UTC m=+1.575826330 container died b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 05 02:31:15 compute-0 systemd[1]: libpod-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Consumed 1.236s CPU time.
Dec 05 02:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82-merged.mount: Deactivated successfully.
Dec 05 02:31:15 compute-0 podman[482341]: 2025-12-05 02:31:15.255854263 +0000 UTC m=+1.686743338 container remove b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:31:15 compute-0 systemd[1]: libpod-conmon-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Deactivated successfully.
Dec 05 02:31:15 compute-0 sudo[482241]: pam_unix(sudo:session): session closed for user root
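The ceph-volume run that just ended reported "All data devices are unavailable", i.e. the lvm batch pass had nothing it could deploy on the three logical volumes (cephadm follows up below with `lvm list` to see what is already prepared). One way to see why ceph-volume judges the devices unavailable, assuming the flags shown in the log plus ceph-volume's standard --report option, is a dry-run report:

    # Dry-run the same ceph-volume probe with --report to explain why the
    # three logical volumes were judged unavailable (paths from the log).
    import subprocess

    devs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
            "/dev/ceph_vg2/ceph_lv2"]
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *devs,
         "--report", "--format", "json"],
        check=False,
    )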
Dec 05 02:31:15 compute-0 sudo[482484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:15 compute-0 sudo[482484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:15 compute-0 sudo[482484]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:15 compute-0 sudo[482509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:31:15 compute-0 sudo[482509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:15 compute-0 sudo[482509]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:15 compute-0 nova_compute[349548]: 2025-12-05 02:31:15.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:15 compute-0 sudo[482534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:15 compute-0 sudo[482534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:15 compute-0 sudo[482534]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:15 compute-0 sudo[482559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:31:15 compute-0 sudo[482559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
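The sudo COMMAND above shows how cephadm gathers OSD inventory: it runs a copied cephadm binary, which wraps ceph-volume inside the ceph container. Roughly the same listing can be pulled by hand, assuming a cephadm binary on PATH and root privileges; the fsid below is copied from this log:

```python
import json
import subprocess

# fsid copied from the log above; adjust for another cluster.
FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"

# In the same spirit as the logged command: run ceph-volume inside the
# ceph container and ask for JSON. Some cephadm versions print status
# lines before the JSON, which would need stripping first.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--fsid", FSID,
     "--", "lvm", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
osds = json.loads(out)
print(sorted(osds))  # OSD ids as strings, e.g. ['0', '1', '2']
```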
Dec 05 02:31:16 compute-0 ceph-mon[192914]: pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:31:16
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
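"prepared 0/10 changes" means the upmap balancer evaluated up to 10 candidate remappings this round and found the PGs already even across the listed pools. The balancer's current state can be queried directly; field names in the JSON vary slightly across Ceph releases, so treat this as a sketch:

```python
import json
import subprocess

# 'ceph balancer status' prints a JSON document describing the active
# mode and last optimization attempt.
status = json.loads(
    subprocess.run(["ceph", "balancer", "status"],
                   check=True, capture_output=True, text=True).stdout
)
print(status.get("active"), status.get("mode"))  # e.g. True upmap
```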
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.47558462 +0000 UTC m=+0.093522424 container create 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.438783423 +0000 UTC m=+0.056721297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:16 compute-0 systemd[1]: Started libpod-conmon-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope.
Dec 05 02:31:16 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.623398979 +0000 UTC m=+0.241336833 container init 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.639402502 +0000 UTC m=+0.257340306 container start 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.646838588 +0000 UTC m=+0.264776462 container attach 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:31:16 compute-0 trusting_ride[482635]: 167 167
Dec 05 02:31:16 compute-0 systemd[1]: libpod-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope: Deactivated successfully.
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.650212346 +0000 UTC m=+0.268150150 container died 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca05d602c15b5f194df7acc5300c228387d36de18a2f116f008de5250da06a2c-merged.mount: Deactivated successfully.
Dec 05 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.729032862 +0000 UTC m=+0.346970666 container remove 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:31:16 compute-0 systemd[1]: libpod-conmon-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope: Deactivated successfully.
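The bare "167 167" printed by the trusting_ride container is the ceph user's uid and gid inside the image, which cephadm probes before running ceph-volume so later containers can drop privileges correctly. A plausibly equivalent manual probe follows; the exact entrypoint cephadm uses is an assumption here, not something the log states:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Assumption: stat a ceph-owned path inside the image to learn the
# ceph uid/gid; this reproduces the "167 167" output seen above.
print(subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True).stdout.strip())
```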
Dec 05 02:31:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.028523921 +0000 UTC m=+0.100738923 container create 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:16.991024203 +0000 UTC m=+0.063239245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:17 compute-0 systemd[1]: Started libpod-conmon-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope.
Dec 05 02:31:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
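The kernel's "supports timestamps until 2038" notices mean the backing XFS filesystem was created without the bigtime feature, so inode timestamps stop at the 32-bit epoch limit. Whether a given filesystem has the feature can be checked with xfs_info; the mount point below matches the overlay storage path in the log:

```python
import subprocess

# xfs_info prints mkfs-style parameters; 'bigtime=1' means timestamps
# are valid past 2038, while 'bigtime=0' matches the warnings above.
info = subprocess.run(["xfs_info", "/var/lib/containers"],
                      check=True, capture_output=True, text=True).stdout
print("bigtime=1" in info)
```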
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.195749673 +0000 UTC m=+0.267964725 container init 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.224991121 +0000 UTC m=+0.297206113 container start 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.23253358 +0000 UTC m=+0.304748582 container attach 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:31:17 compute-0 gracious_elion[482675]: {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     "0": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "devices": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "/dev/loop3"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             ],
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_name": "ceph_lv0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_size": "21470642176",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "name": "ceph_lv0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "tags": {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_name": "ceph",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.crush_device_class": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.encrypted": "0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_id": "0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.vdo": "0"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             },
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "vg_name": "ceph_vg0"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         }
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     ],
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     "1": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "devices": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "/dev/loop4"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             ],
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_name": "ceph_lv1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_size": "21470642176",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "name": "ceph_lv1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "tags": {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_name": "ceph",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.crush_device_class": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.encrypted": "0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_id": "1",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.vdo": "0"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             },
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "vg_name": "ceph_vg1"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         }
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     ],
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     "2": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "devices": [
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "/dev/loop5"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             ],
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_name": "ceph_lv2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_size": "21470642176",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "name": "ceph_lv2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "tags": {
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.cluster_name": "ceph",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.crush_device_class": "",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.encrypted": "0",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osd_id": "2",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:                 "ceph.vdo": "0"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             },
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "type": "block",
Dec 05 02:31:17 compute-0 gracious_elion[482675]:             "vg_name": "ceph_vg2"
Dec 05 02:31:17 compute-0 gracious_elion[482675]:         }
Dec 05 02:31:17 compute-0 gracious_elion[482675]:     ]
Dec 05 02:31:17 compute-0 gracious_elion[482675]: }
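The listing above maps OSD ids to LVM logical volumes, with the authoritative metadata carried as ceph.* LV tags. A short walk over this exact JSON shape (saved to a file) recovers an osd_id-to-device table:

```python
import json

# Walk the 'ceph-volume lvm list --format json' structure shown above:
# top-level keys are OSD ids, each mapping to a list of LVs whose
# ceph.* tags identify the volume's role.
with open("lvm_list.json", encoding="utf-8") as fh:
    listing = json.load(fh)

for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"({tags['ceph.type']}, osd_fsid={tags['ceph.osd_fsid']}, "
              f"on {','.join(lv['devices'])})")
```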
Dec 05 02:31:17 compute-0 nova_compute[349548]: 2025-12-05 02:31:17.953 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:17 compute-0 systemd[1]: libpod-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope: Deactivated successfully.
Dec 05 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.973783146 +0000 UTC m=+1.045998138 container died 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756-merged.mount: Deactivated successfully.
Dec 05 02:31:18 compute-0 ceph-mon[192914]: pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:18 compute-0 podman[482658]: 2025-12-05 02:31:18.066436354 +0000 UTC m=+1.138651326 container remove 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 05 02:31:18 compute-0 systemd[1]: libpod-conmon-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope: Deactivated successfully.
Dec 05 02:31:18 compute-0 sudo[482559]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:18 compute-0 sudo[482698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:18 compute-0 sudo[482698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:18 compute-0 sudo[482698]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
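The repeated load_schedules lines are two different rbd_support handlers (mirror snapshots and trash purge) each rescanning the same pools; the empty "start_after=" suffix indicates no schedules are configured. The configured schedules can be listed per feature with the standard rbd CLI, sketched here:

```python
import subprocess

# Each rbd_support handler keeps its own schedule store; list both
# recursively across pools. Empty output matches the bare
# "start_after=" entries in the log.
for args in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
             ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
    out = subprocess.run(args, capture_output=True, text=True).stdout.strip()
    print(" ".join(args[1:4]), "->", out or "(none)")
```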
Dec 05 02:31:18 compute-0 sudo[482723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:31:18 compute-0 sudo[482723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:18 compute-0 sudo[482723]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:18 compute-0 sudo[482748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:18 compute-0 sudo[482748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:18 compute-0 sudo[482748]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:18 compute-0 sudo[482773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:31:18 compute-0 sudo[482773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.327545161 +0000 UTC m=+0.082426202 container create 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.29336588 +0000 UTC m=+0.048246981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:19 compute-0 systemd[1]: Started libpod-conmon-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope.
Dec 05 02:31:19 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.475525885 +0000 UTC m=+0.230406926 container init 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.495104313 +0000 UTC m=+0.249985344 container start 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.501357364 +0000 UTC m=+0.256238405 container attach 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:31:19 compute-0 priceless_faraday[482852]: 167 167
Dec 05 02:31:19 compute-0 systemd[1]: libpod-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope: Deactivated successfully.
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.51017736 +0000 UTC m=+0.265058391 container died 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-472af10656b051fe3e2c22690b04752db429c46a9deb6aa9c612ca636874ed7b-merged.mount: Deactivated successfully.
Dec 05 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.597685499 +0000 UTC m=+0.352566530 container remove 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:31:19 compute-0 systemd[1]: libpod-conmon-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope: Deactivated successfully.
Dec 05 02:31:19 compute-0 podman[482876]: 2025-12-05 02:31:19.886265791 +0000 UTC m=+0.087480839 container create 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 02:31:19 compute-0 podman[482876]: 2025-12-05 02:31:19.852043688 +0000 UTC m=+0.053258786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:31:19 compute-0 systemd[1]: Started libpod-conmon-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope.
Dec 05 02:31:20 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.04649087 +0000 UTC m=+0.247705968 container init 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:31:20 compute-0 ceph-mon[192914]: pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.080441955 +0000 UTC m=+0.281656993 container start 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.08888776 +0000 UTC m=+0.290103858 container attach 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:31:20 compute-0 nova_compute[349548]: 2025-12-05 02:31:20.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:21 compute-0 elegant_jemison[482892]: {
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_id": 0,
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "type": "bluestore"
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     },
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_id": 1,
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "type": "bluestore"
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     },
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_id": 2,
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:         "type": "bluestore"
Dec 05 02:31:21 compute-0 elegant_jemison[482892]:     }
Dec 05 02:31:21 compute-0 elegant_jemison[482892]: }
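The `raw list` output above reports the same three OSDs keyed by osd_uuid rather than osd id, and with device-mapper paths instead of LV paths. Cross-checking it against the earlier lvm listing is a quick consistency test over the two JSON documents shown in this log:

```python
import json

# lvm_list.json / raw_list.json hold the two JSON documents from the log.
with open("lvm_list.json", encoding="utf-8") as fh:
    lvm = json.load(fh)
with open("raw_list.json", encoding="utf-8") as fh:
    raw = json.load(fh)

# Every osd_fsid tagged on an LV should appear as a raw-list key with the
# same osd_id; raw list reports the dm path for the same logical volume.
for osd_id, lvs in lvm.items():
    fsid = lvs[0]["tags"]["ceph.osd_fsid"]
    entry = raw[fsid]
    assert entry["osd_id"] == int(osd_id)
    assert entry["type"] == "bluestore"
    print(f"osd.{osd_id}: {fsid} -> {entry['device']} OK")
```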
Dec 05 02:31:21 compute-0 systemd[1]: libpod-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Deactivated successfully.
Dec 05 02:31:21 compute-0 podman[482876]: 2025-12-05 02:31:21.3111766 +0000 UTC m=+1.512391638 container died 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:31:21 compute-0 systemd[1]: libpod-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Consumed 1.237s CPU time.
Dec 05 02:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6-merged.mount: Deactivated successfully.
Dec 05 02:31:21 compute-0 podman[482876]: 2025-12-05 02:31:21.416786844 +0000 UTC m=+1.618001882 container remove 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:31:21 compute-0 systemd[1]: libpod-conmon-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Deactivated successfully.
Dec 05 02:31:21 compute-0 sudo[482773]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:31:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:31:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:31:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
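The two mon_command entries record the cephadm mgr persisting its freshly refreshed host inventory into the monitor's config-key store. The stored blob can be read back; the key name below is copied from the log:

```python
import subprocess

# Read back the device inventory cephadm just stored for this host.
key = "mgr/cephadm/host.compute-0.devices.0"
blob = subprocess.run(["ceph", "config-key", "get", key],
                      check=True, capture_output=True, text=True).stdout
print(blob[:200])  # JSON-encoded device list
```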
Dec 05 02:31:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3a2e8728-b198-42e9-b61c-6d79700e4474 does not exist
Dec 05 02:31:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8b236710-92a2-4b88-a332-a17980bbdf55 does not exist
Dec 05 02:31:21 compute-0 sudo[482937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:31:21 compute-0 sudo[482937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:21 compute-0 sudo[482937]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:21 compute-0 sudo[482962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:31:21 compute-0 sudo[482962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:31:21 compute-0 sudo[482962]: pam_unix(sudo:session): session closed for user root
Dec 05 02:31:22 compute-0 ceph-mon[192914]: pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:31:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:31:22 compute-0 nova_compute[349548]: 2025-12-05 02:31:22.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:24 compute-0 ceph-mon[192914]: pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:25 compute-0 nova_compute[349548]: 2025-12-05 02:31:25.710 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:26 compute-0 ceph-mon[192914]: pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
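The pg target numbers above are reproducible from the logged inputs: each pool's target is its fraction of used space times its bias times the cluster's PG budget, which here works out to 300. That budget is consistent with 3 OSDs at the default mon_target_pg_per_osd of 100, but it is an inference from the logged values, not something the log prints; the result is then quantized toward a power of two:

```python
# Reproduce the pg_autoscaler arithmetic from the logged values.
# PG_BUDGET = 300 is inferred (3 OSDs x default mon_target_pg_per_osd
# of 100); the log itself does not print it.
PG_BUDGET = 300

pools = {  # name: (fraction of space used, bias), copied from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (6.359070782053786e-08, 1.0),
    "images":             (0.0009191400908380543, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}

for name, (used, bias) in pools.items():
    target = used * bias * PG_BUDGET
    # Matches the logged targets up to float rounding, e.g.
    # .mgr -> 0.0021557249951162337, images -> 0.2757420272514163
    print(f"{name}: pg target {target}")
```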
Dec 05 02:31:27 compute-0 nova_compute[349548]: 2025-12-05 02:31:27.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:28 compute-0 ceph-mon[192914]: pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:28 compute-0 podman[482988]: 2025-12-05 02:31:28.716776073 +0000 UTC m=+0.124983517 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:31:28 compute-0 podman[482987]: 2025-12-05 02:31:28.748118142 +0000 UTC m=+0.148976383 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
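The health_status records embed each container's full edpm_ansible configuration as a Python-literal `config_data={...}` blob. When mining these journals it can be handy to pull that dict back out; the sketch below assumes braces inside the blob stay balanced and never occur inside quoted strings, which holds for the lines in this log:

```python
import ast


def extract_config_data(line):
    """Pull the config_data={...} Python literal out of a journal line."""
    start = line.find("config_data=")
    if start == -1:
        return None
    i = line.index("{", start)
    depth = 0
    for j in range(i, len(line)):
        if line[j] == "{":
            depth += 1
        elif line[j] == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[i:j + 1])
    return None

# Example: cfg = extract_config_data(journal_line)
# cfg["image"] -> 'quay.io/navidys/prometheus-podman-exporter:v1.10.1'
```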
Dec 05 02:31:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:29 compute-0 podman[158197]: time="2025-12-05T02:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec 05 02:31:30 compute-0 ceph-mon[192914]: pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:30 compute-0 nova_compute[349548]: 2025-12-05 02:31:30.715 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:31:32 compute-0 ceph-mon[192914]: pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:32 compute-0 nova_compute[349548]: 2025-12-05 02:31:32.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:33 compute-0 podman[483027]: 2025-12-05 02:31:33.711049268 +0000 UTC m=+0.107210901 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec 05 02:31:33 compute-0 podman[483028]: 2025-12-05 02:31:33.726744913 +0000 UTC m=+0.116760718 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, name=ubi9, release-0.7.12=, architecture=x86_64, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 05 02:31:33 compute-0 podman[483029]: 2025-12-05 02:31:33.796790926 +0000 UTC m=+0.179136109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:31:34 compute-0 ceph-mon[192914]: pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:35 compute-0 nova_compute[349548]: 2025-12-05 02:31:35.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:36 compute-0 ceph-mon[192914]: pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:37 compute-0 nova_compute[349548]: 2025-12-05 02:31:37.969 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:38 compute-0 ceph-mon[192914]: pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:40 compute-0 ceph-mon[192914]: pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:40 compute-0 nova_compute[349548]: 2025-12-05 02:31:40.725 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:42 compute-0 ceph-mon[192914]: pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:42 compute-0 nova_compute[349548]: 2025-12-05 02:31:42.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:44 compute-0 ceph-mon[192914]: pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:44 compute-0 podman[483082]: 2025-12-05 02:31:44.696993792 +0000 UTC m=+0.105827002 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:31:44 compute-0 podman[483095]: 2025-12-05 02:31:44.713219192 +0000 UTC m=+0.105738458 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 02:31:44 compute-0 podman[483083]: 2025-12-05 02:31:44.71587518 +0000 UTC m=+0.122104414 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:31:44 compute-0 podman[483084]: 2025-12-05 02:31:44.757128906 +0000 UTC m=+0.151799815 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 05 02:31:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:31:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:31:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:31:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:31:45 compute-0 nova_compute[349548]: 2025-12-05 02:31:45.729 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:46 compute-0 ceph-mon[192914]: pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:31:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:31:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:47 compute-0 nova_compute[349548]: 2025-12-05 02:31:47.979 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:48 compute-0 ceph-mon[192914]: pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:50 compute-0 ceph-mon[192914]: pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:50 compute-0 nova_compute[349548]: 2025-12-05 02:31:50.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:52 compute-0 ceph-mon[192914]: pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:52 compute-0 nova_compute[349548]: 2025-12-05 02:31:52.980 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:54 compute-0 ceph-mon[192914]: pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.185 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.739 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.234 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.235 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.235 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:31:56 compute-0 ceph-mon[192914]: pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:57 compute-0 nova_compute[349548]: 2025-12-05 02:31:57.983 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:31:58 compute-0 ceph-mon[192914]: pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:31:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:31:59 compute-0 podman[483168]: 2025-12-05 02:31:59.725218773 +0000 UTC m=+0.129447056 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:31:59 compute-0 podman[483169]: 2025-12-05 02:31:59.733325458 +0000 UTC m=+0.132360311 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:31:59 compute-0 podman[158197]: time="2025-12-05T02:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8215 "" "Go-http-client/1.1"
Dec 05 02:32:00 compute-0 ceph-mon[192914]: pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:00 compute-0 nova_compute[349548]: 2025-12-05 02:32:00.744 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:32:02 compute-0 ceph-mon[192914]: pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:32:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196867583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.597 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.137 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.139 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3932MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:32:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1196867583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.398 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:32:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:32:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585697749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.975 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.986 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.007 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.009 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.010 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:32:04 compute-0 ceph-mon[192914]: pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3585697749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:32:04 compute-0 podman[483251]: 2025-12-05 02:32:04.730607129 +0000 UTC m=+0.122009941 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec 05 02:32:04 compute-0 podman[483250]: 2025-12-05 02:32:04.745379107 +0000 UTC m=+0.142507455 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:32:04 compute-0 podman[483252]: 2025-12-05 02:32:04.748781496 +0000 UTC m=+0.135769350 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:32:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.011 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.012 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.012 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.749 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:06 compute-0 ceph-mon[192914]: pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:07 compute-0 nova_compute[349548]: 2025-12-05 02:32:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:07 compute-0 nova_compute[349548]: 2025-12-05 02:32:07.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:08 compute-0 nova_compute[349548]: 2025-12-05 02:32:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:08 compute-0 ceph-mon[192914]: pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:10 compute-0 nova_compute[349548]: 2025-12-05 02:32:10.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:10 compute-0 ceph-mon[192914]: pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:10 compute-0 nova_compute[349548]: 2025-12-05 02:32:10.754 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:11 compute-0 ceph-mon[192914]: pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:12 compute-0 nova_compute[349548]: 2025-12-05 02:32:12.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:14 compute-0 ceph-mon[192914]: pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:14 compute-0 podman[483306]: 2025-12-05 02:32:14.875193846 +0000 UTC m=+0.123603847 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec 05 02:32:14 compute-0 podman[483308]: 2025-12-05 02:32:14.894856336 +0000 UTC m=+0.132649239 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec 05 02:32:14 compute-0 podman[483307]: 2025-12-05 02:32:14.900245342 +0000 UTC m=+0.131076173 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:32:14 compute-0 podman[483325]: 2025-12-05 02:32:14.971475459 +0000 UTC m=+0.162097664 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
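[annotation] The four podman "container health_status" events above are produced by each container's healthcheck as declared in its config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/<name>), and the result is carried inline (health_status=healthy, health_failing_streak=0). A hypothetical parser for journal lines in exactly this format:

    # Hypothetical helper: extract (name, status, failing_streak) from
    # podman "container health_status" journal lines like the ones above.
    import re

    PATTERN = re.compile(
        r"container health_status \S+ \(image=(?P<image>[^,]+), "
        r"name=(?P<name>[^,]+), health_status=(?P<status>[^,]+), "
        r"health_failing_streak=(?P<streak>\d+)"
    )

    def parse_health(line):
        m = PATTERN.search(line)
        if not m:
            return None
        return m.group("name"), m.group("status"), int(m.group("streak"))

    line = ("... container health_status 4b65... "
            "(image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, "
            "name=multipathd, health_status=healthy, health_failing_streak=0, ...")
    print(parse_health(line))   # ('multipathd', 'healthy', 0)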
Dec 05 02:32:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:15 compute-0 nova_compute[349548]: 2025-12-05 02:32:15.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:16 compute-0 ceph-mon[192914]: pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:32:16
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
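[annotation] "Mode upmap, max misplaced 0.050000" caps how much data a balancer plan may set in motion: roughly 5% of placement groups misplaced at once, which against the 321 PGs in the surrounding pgmap lines is a budget of about 16 PGs. "prepared 0/10 changes" means the upmap optimizer found nothing to improve this round. Making the arithmetic explicit (values copied from the log):

    # Budget implied by "max misplaced 0.050000" against 321 PGs.
    pgs, max_misplaced = 321, 0.05
    print(int(pgs * max_misplaced))   # 16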
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:17 compute-0 nova_compute[349548]: 2025-12-05 02:32:17.997 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:32:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:32:18 compute-0 ceph-mon[192914]: pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:32:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:19 compute-0 ceph-mon[192914]: pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:20 compute-0 nova_compute[349548]: 2025-12-05 02:32:20.763 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:21 compute-0 sudo[483393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:21 compute-0 sudo[483393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:21 compute-0 sudo[483393]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:22 compute-0 sudo[483418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:32:22 compute-0 sudo[483418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:22 compute-0 sudo[483418]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:22 compute-0 ceph-mon[192914]: pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:22 compute-0 sudo[483443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:22 compute-0 sudo[483443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:22 compute-0 sudo[483443]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:22 compute-0 sudo[483468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 05 02:32:22 compute-0 sudo[483468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
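[annotation] The sudo bursts above are the cephadm orchestrator at work: the mgr connects as ceph-admin, probes the host (/bin/true, which python3), then runs the host-local cephadm binary from /var/lib/ceph/<fsid>/ with a pinned --image digest, here the "ls" daemon-inventory call. Wrapped as Python it is essentially the following (sketch only; the command string is copied from the COMMAND= field above and needs root to run):

    # Sketch of the inventory call the mgr issues at 02:32:22.
    import subprocess

    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895", "ls",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=False)
    print(result.stdout)   # JSON list of daemons deployed on this host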
Dec 05 02:32:23 compute-0 nova_compute[349548]: 2025-12-05 02:32:23.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:23 compute-0 podman[483558]: 2025-12-05 02:32:23.17908428 +0000 UTC m=+0.113861944 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 05 02:32:23 compute-0 podman[483558]: 2025-12-05 02:32:23.296388624 +0000 UTC m=+0.231166218 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:32:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:24 compute-0 ceph-mon[192914]: pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:24 compute-0 sudo[483468]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:32:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:32:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:24 compute-0 sudo[483710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:24 compute-0 sudo[483710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:24 compute-0 sudo[483710]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:24 compute-0 sudo[483735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:32:24 compute-0 sudo[483735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:24 compute-0 sudo[483735]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:24 compute-0 sudo[483760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:24 compute-0 sudo[483760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:24 compute-0 sudo[483760]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:25 compute-0 sudo[483785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:32:25 compute-0 sudo[483785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:25 compute-0 ceph-mon[192914]: pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:25 compute-0 nova_compute[349548]: 2025-12-05 02:32:25.768 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:25 compute-0 sudo[483785]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2ca6ab86-75ae-4828-805b-ba47f64c4f95 does not exist
Dec 05 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d89b6710-ca8f-40bb-822b-ae14fbe743e3 does not exist
Dec 05 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6d1b2ac2-a9cd-46db-8b82-6b9666eb69a6 does not exist
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:32:26 compute-0 sudo[483841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:26 compute-0 sudo[483841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:26 compute-0 sudo[483841]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:26 compute-0 sudo[483866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:32:26 compute-0 sudo[483866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:26 compute-0 sudo[483866]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:26 compute-0 sudo[483891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:26 compute-0 sudo[483891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:26 compute-0 sudo[483891]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:26 compute-0 sudo[483916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:32:26 compute-0 sudo[483916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1342 writes, 6176 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
                                            Interval WAL: 1342 writes, 1342 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                            
                                            ** Compaction Stats [default] **
                                            Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    103.2      0.62              0.29        36    0.017       0      0       0.0       0.0
                                              L6      1/0    7.27 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.1    127.1    104.7      2.51              1.14        35    0.072    193K    19K       0.0       0.0
                                             Sum      1/0    7.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    102.1    104.4      3.13              1.43        71    0.044    193K    19K       0.0       0.0
                                             Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    118.0    120.0      0.40              0.21        10    0.040     33K   2548       0.0       0.0
                                            
                                            ** Compaction Stats [default] **
                                            Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                            ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0    127.1    104.7      2.51              1.14        35    0.072    193K    19K       0.0       0.0
                                            High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    104.0      0.61              0.29        35    0.017       0      0       0.0       0.0
                                            User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            
                                            Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                            
                                            Uptime(secs): 4800.0 total, 600.0 interval
                                            Flush(GB): cumulative 0.062, interval 0.008
                                            AddFile(GB): cumulative 0.000, interval 0.000
                                            AddFile(Total Files): cumulative 0, interval 0
                                            AddFile(L0 Files): cumulative 0, interval 0
                                            AddFile(Keys): cumulative 0, interval 0
                                            Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.31 GB read, 0.07 MB/s read, 3.1 seconds
                                            Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                            Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                            Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 40.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000324 secs_since: 0
                                            Block cache entry stats(count,size,portion): DataBlock(2775,38.85 MB,12.7811%) FilterBlock(72,542.05 KB,0.174126%) IndexBlock(72,885.92 KB,0.284591%) Misc(1,0.00 KB,0%)
                                            
                                            ** File Read Latency Histogram By Level [default] **
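[annotation] The figures in the RocksDB dump above are internally consistent and worth knowing how to read: the interval ingest of 8.73 MB over the 600 s interval is about 0.015 MB/s (printed rounded as 0.01 MB/s), and the table's W-Amp of roughly 5.1 is approximately cumulative compaction writes divided by flushed bytes (0.32 GB / 0.062 GB). A quick check, with every input taken from the dump:

    # Re-derive two figures from the RocksDB stats dump above.
    interval_ingest_mb, interval_secs = 8.73, 600.0
    print(round(interval_ingest_mb / interval_secs, 3))   # 0.015 MB/s, logged as 0.01

    compaction_write_gb, flush_gb = 0.32, 0.062
    print(round(compaction_write_gb / flush_gb, 1))       # ~5.2, table shows W-Amp 5.1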
Dec 05 02:32:26 compute-0 podman[483977]: 2025-12-05 02:32:26.97437691 +0000 UTC m=+0.089840587 container create 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:26.941331671 +0000 UTC m=+0.056795338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:27 compute-0 systemd[1]: Started libpod-conmon-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope.
Dec 05 02:32:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.134437544 +0000 UTC m=+0.249901271 container init 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.147554924 +0000 UTC m=+0.263018591 container start 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.154371722 +0000 UTC m=+0.269835429 container attach 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 05 02:32:27 compute-0 crazy_nightingale[483994]: 167 167
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.158489202 +0000 UTC m=+0.273952879 container died 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 02:32:27 compute-0 systemd[1]: libpod-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope: Deactivated successfully.
Dec 05 02:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d16128bf7bbc1297e67d3806d3df142484b10dbf773f007e251977f8396dc65-merged.mount: Deactivated successfully.
Dec 05 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.230185472 +0000 UTC m=+0.345649139 container remove 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Dec 05 02:32:27 compute-0 systemd[1]: libpod-conmon-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope: Deactivated successfully.
Dec 05 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.503999496 +0000 UTC m=+0.083616727 container create 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
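[annotation] Each pg_autoscaler "pg target" above is reproducible from its own line: target = capacity_ratio x bias x (target PGs per OSD x OSD count). Assuming the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs gives the factor 300, which matches every entry; the raw target is then rounded to a power of two and only applied when it differs enough from the current value. Reproducing the 'images' and 'cephfs.cephfs.meta' rows:

    # Re-derive "pg target" from the pg_autoscaler lines above
    # (ratios/biases copied from the log; 300 assumes 100 PGs/OSD x 3 OSDs).
    factor = 100 * 3
    for pool, ratio, bias in [
        ("images", 0.0009191400908380543, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * factor)
    # images 0.2757420272514163                 (quantized to 32)
    # cephfs.cephfs.meta 0.0006104707950771635  (quantized to 16)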
Dec 05 02:32:27 compute-0 ceph-mon[192914]: pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.471180424 +0000 UTC m=+0.050797705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:27 compute-0 systemd[1]: Started libpod-conmon-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope.
Dec 05 02:32:27 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.649524608 +0000 UTC m=+0.229141839 container init 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec 05 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.66443315 +0000 UTC m=+0.244050351 container start 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.669985671 +0000 UTC m=+0.249602902 container attach 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:32:28 compute-0 nova_compute[349548]: 2025-12-05 02:32:28.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:28 compute-0 nervous_lewin[484034]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:32:28 compute-0 nervous_lewin[484034]: --> relative data size: 1.0
Dec 05 02:32:28 compute-0 nervous_lewin[484034]: --> All data devices are unavailable
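[annotation] The "lvm batch" run against /dev/ceph_vg0/ceph_lv0 .. ceph_lv2 exits with "All data devices are unavailable", the expected outcome when the LVs already carry OSDs: cephadm periodically re-applies the default_drive_group spec, ceph-volume rejects the in-use LVs, and the "ceph-volume lvm list" call a few seconds later (02:32:29 below) reports what is actually deployed. One way to see why the LVs count as unavailable is to inspect the LVM tags ceph-volume stamps on them; this sketch assumes the ceph.* tag convention and needs root:

    # Sketch: list LVM tags to spot LVs already claimed by OSDs.
    import json, subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"], "-> already an OSD")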
Dec 05 02:32:28 compute-0 systemd[1]: libpod-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Deactivated successfully.
Dec 05 02:32:28 compute-0 podman[484018]: 2025-12-05 02:32:28.981168781 +0000 UTC m=+1.560785982 container died 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 02:32:28 compute-0 systemd[1]: libpod-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Consumed 1.243s CPU time.
Dec 05 02:32:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1-merged.mount: Deactivated successfully.
Dec 05 02:32:29 compute-0 podman[484018]: 2025-12-05 02:32:29.077392493 +0000 UTC m=+1.657009694 container remove 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:32:29 compute-0 systemd[1]: libpod-conmon-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Deactivated successfully.
Dec 05 02:32:29 compute-0 sudo[483916]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:29 compute-0 sudo[484075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:29 compute-0 sudo[484075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:29 compute-0 sudo[484075]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:29 compute-0 sudo[484100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:32:29 compute-0 sudo[484100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:29 compute-0 sudo[484100]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:29 compute-0 sudo[484125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:29 compute-0 sudo[484125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:29 compute-0 sudo[484125]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:29 compute-0 sudo[484150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:32:29 compute-0 sudo[484150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:29 compute-0 podman[158197]: time="2025-12-05T02:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec 05 02:32:30 compute-0 ceph-mon[192914]: pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.2885087 +0000 UTC m=+0.090995221 container create 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.255504463 +0000 UTC m=+0.057991034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:30 compute-0 systemd[1]: Started libpod-conmon-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope.
Dec 05 02:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.431580281 +0000 UTC m=+0.234066822 container init 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.446035131 +0000 UTC m=+0.248521632 container start 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.451940612 +0000 UTC m=+0.254427183 container attach 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:32:30 compute-0 jovial_thompson[484234]: 167 167
Dec 05 02:32:30 compute-0 systemd[1]: libpod-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope: Deactivated successfully.
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.459967565 +0000 UTC m=+0.262454076 container died 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 05 02:32:30 compute-0 podman[484229]: 2025-12-05 02:32:30.484229899 +0000 UTC m=+0.121083344 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 05 02:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-54da0405268325e6a4bf4b9149e04de7dc6b6fe9f17d38a6a671465817c502db-merged.mount: Deactivated successfully.
Dec 05 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.516568137 +0000 UTC m=+0.319054628 container remove 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 05 02:32:30 compute-0 podman[484233]: 2025-12-05 02:32:30.517477353 +0000 UTC m=+0.148312864 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:32:30 compute-0 systemd[1]: libpod-conmon-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope: Deactivated successfully.
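The podman/systemd lines above trace one complete short-lived container run (jovial_thompson): image pull, create, init, start, attach, exit ("died"), remove, and teardown of the libpod and conmon scopes. A minimal Python sketch of reproducing that run-and-remove pattern; the image digest is copied from the log, while the stat invocation is only a guess at what printed the "167 167" (ceph UID/GID) line:

import subprocess

# Digest taken verbatim from the journal entries above.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def run_once(argv):
    # `podman run --rm` yields the create/init/start/attach/died/remove
    # event sequence recorded in the journal.
    proc = subprocess.run(["podman", "run", "--rm", IMAGE, *argv],
                          capture_output=True, text=True, check=True)
    return proc.stdout

if __name__ == "__main__":
    # Hypothetical command; "167 167" in the log is the container's stdout,
    # consistent with the ceph UID/GID pair.
    print(run_once(["stat", "-c", "%u %g", "/var/lib/ceph"]), end="")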
Dec 05 02:32:30 compute-0 nova_compute[349548]: 2025-12-05 02:32:30.772 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.801298538 +0000 UTC m=+0.098602772 container create b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.753553832 +0000 UTC m=+0.050858116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:30 compute-0 systemd[1]: Started libpod-conmon-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope.
Dec 05 02:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
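The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings flag XFS inodes whose timestamps are 32-bit signed second counters; 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit. A one-liner to confirm the date:

from datetime import datetime, timezone

# 0x7fffffff = 2147483647 seconds after the epoch: the Y2038 rollover point.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
# -> 2038-01-19T03:14:07+00:00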
Dec 05 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.946540131 +0000 UTC m=+0.243844365 container init b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.964193554 +0000 UTC m=+0.261497778 container start b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 05 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.97095239 +0000 UTC m=+0.268256634 container attach b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:32:30 compute-0 sshd-session[484173]: Invalid user oracle from 45.140.17.124 port 56582
Dec 05 02:32:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:31 compute-0 sshd-session[484173]: Connection reset by invalid user oracle 45.140.17.124 port 56582 [preauth]
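The sshd-session pair above is an unauthenticated probe from 45.140.17.124 trying the user "oracle"; the same address retries as "root" further down. A small sketch for tallying such failed-auth attempts per source IP from journal text like this (the patterns match only the message formats visible in this log):

import re
from collections import Counter

IP_BEFORE_PORT = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}) port \d+")

def failed_auth_ips(lines):
    hits = Counter()
    for line in lines:
        if "sshd-session" in line and (
                "Invalid user" in line or "Connection reset by" in line):
            m = IP_BEFORE_PORT.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

# failed_auth_ips(open("journal.txt")) -> Counter({'45.140.17.124': 5, ...})
# (file name is illustrative; feed it any iterable of journal lines)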
Dec 05 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]: {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     "0": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "devices": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "/dev/loop3"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             ],
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_name": "ceph_lv0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_size": "21470642176",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "name": "ceph_lv0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "tags": {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_name": "ceph",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.crush_device_class": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.encrypted": "0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_id": "0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.vdo": "0"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             },
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "vg_name": "ceph_vg0"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         }
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     ],
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     "1": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "devices": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "/dev/loop4"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             ],
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_name": "ceph_lv1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_size": "21470642176",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "name": "ceph_lv1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "tags": {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_name": "ceph",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.crush_device_class": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.encrypted": "0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_id": "1",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.vdo": "0"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             },
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "vg_name": "ceph_vg1"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         }
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     ],
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     "2": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "devices": [
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "/dev/loop5"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             ],
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_name": "ceph_lv2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_size": "21470642176",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "name": "ceph_lv2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "tags": {
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.cluster_name": "ceph",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.crush_device_class": "",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.encrypted": "0",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osd_id": "2",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:                 "ceph.vdo": "0"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             },
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "type": "block",
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:             "vg_name": "ceph_vg2"
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:         }
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]:     ]
Dec 05 02:32:31 compute-0 gallant_mclaren[484310]: }
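The JSON printed by gallant_mclaren is `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the ceph.* metadata present both as a flattened lv_tags string and as a parsed tags object. A sketch of pulling out the useful fields; parse_lv_tags assumes, as holds for every lv_tags value above, that tag values contain no commas:

import json

def parse_lv_tags(lv_tags: str) -> dict:
    # "ceph.osd_id=0,ceph.type=block,..." -> {"ceph.osd_id": "0", ...}
    return dict(kv.split("=", 1) for kv in lv_tags.split(","))

def osd_map(lvm_list_json: str) -> dict:
    """Map OSD id -> (lv_path, backing devices) from `ceph-volume lvm list
    --format json` output shaped like the block above."""
    out = {}
    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            out[int(osd_id)] = (lv["lv_path"], lv["devices"])
    return out

# For the output above this yields:
# {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
#  1: ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
#  2: ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}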
Dec 05 02:32:31 compute-0 podman[484293]: 2025-12-05 02:32:31.860036493 +0000 UTC m=+1.157340707 container died b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:32:31 compute-0 systemd[1]: libpod-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope: Deactivated successfully.
Dec 05 02:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5-merged.mount: Deactivated successfully.
Dec 05 02:32:31 compute-0 podman[484293]: 2025-12-05 02:32:31.972352312 +0000 UTC m=+1.269656516 container remove b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 05 02:32:31 compute-0 systemd[1]: libpod-conmon-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope: Deactivated successfully.
Dec 05 02:32:32 compute-0 sudo[484150]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:32 compute-0 ceph-mon[192914]: pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:32 compute-0 sudo[484333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:32 compute-0 sudo[484333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:32 compute-0 sudo[484333]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:32 compute-0 sudo[484358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:32:32 compute-0 sudo[484358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:32 compute-0 sudo[484358]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:32 compute-0 sudo[484383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:32 compute-0 sudo[484383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:32 compute-0 sudo[484383]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:32 compute-0 sudo[484408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:32:32 compute-0 sudo[484408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
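The sudo COMMAND above is how cephadm drives ceph-volume: a per-cluster copy of the cephadm script under /var/lib/ceph/<fsid>/, pinned to an image digest, running `ceph-volume ... raw list --format json` inside a throwaway container (the quizzical_ardinghelli run that follows). A sketch of issuing the same call from Python, with the path, digest, and fsid copied verbatim from the log line:

import json
import subprocess

FSID = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

def raw_list() -> dict:
    # Mirrors the sudo command recorded in the journal above.
    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--",
           "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)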
Dec 05 02:32:32 compute-0 sshd-session[484315]: Connection reset by authenticating user root 45.140.17.124 port 47824 [preauth]
Dec 05 02:32:33 compute-0 nova_compute[349548]: 2025-12-05 02:32:33.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.159122443 +0000 UTC m=+0.089751265 container create d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:32:33 compute-0 systemd[1]: Started libpod-conmon-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope.
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.124128807 +0000 UTC m=+0.054757689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.273559083 +0000 UTC m=+0.204187935 container init d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.292494232 +0000 UTC m=+0.223123074 container start d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.298991371 +0000 UTC m=+0.229620263 container attach d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:32:33 compute-0 confident_tu[484489]: 167 167
Dec 05 02:32:33 compute-0 systemd[1]: libpod-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope: Deactivated successfully.
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.304271844 +0000 UTC m=+0.234900696 container died d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:32:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c49fcb05fabfae135382fba92814f1937df16112e2cb8051fd64dcaa5eaa69-merged.mount: Deactivated successfully.
Dec 05 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.388372564 +0000 UTC m=+0.319001416 container remove d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:32:33 compute-0 systemd[1]: libpod-conmon-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope: Deactivated successfully.
Dec 05 02:32:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.686086171 +0000 UTC m=+0.102574807 container create 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 05 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.645649998 +0000 UTC m=+0.062138684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:32:33 compute-0 systemd[1]: Started libpod-conmon-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope.
Dec 05 02:32:33 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.869878894 +0000 UTC m=+0.286367560 container init 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.895417174 +0000 UTC m=+0.311905810 container start 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.902211902 +0000 UTC m=+0.318700598 container attach 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:32:34 compute-0 ceph-mon[192914]: pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]: {
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_id": 0,
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "type": "bluestore"
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     },
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_id": 1,
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "type": "bluestore"
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     },
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_id": 2,
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:         "type": "bluestore"
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]:     }
Dec 05 02:32:35 compute-0 quizzical_ardinghelli[484527]: }
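This second JSON block is the `raw list` result: keyed by osd_uuid rather than OSD id, with each bluestore OSD resolved to its /dev/mapper device. Re-keying it by OSD id is a one-liner:

import json

def device_by_osd_id(raw_list_json: str) -> dict:
    """Re-key `ceph-volume raw list` output (osd_uuid -> record, as printed
    above) by integer osd_id."""
    return {r["osd_id"]: r["device"]
            for r in json.loads(raw_list_json).values()}

# For the output above:
# {0: '/dev/mapper/ceph_vg0-ceph_lv0',
#  1: '/dev/mapper/ceph_vg1-ceph_lv1',
#  2: '/dev/mapper/ceph_vg2-ceph_lv2'}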
Dec 05 02:32:35 compute-0 sshd-session[484470]: Connection reset by authenticating user root 45.140.17.124 port 47830 [preauth]
Dec 05 02:32:35 compute-0 systemd[1]: libpod-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Deactivated successfully.
Dec 05 02:32:35 compute-0 systemd[1]: libpod-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Consumed 1.191s CPU time.
Dec 05 02:32:35 compute-0 podman[484561]: 2025-12-05 02:32:35.164681789 +0000 UTC m=+0.036959523 container died 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 05 02:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80-merged.mount: Deactivated successfully.
Dec 05 02:32:35 compute-0 podman[484561]: 2025-12-05 02:32:35.245469473 +0000 UTC m=+0.117747127 container remove 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 05 02:32:35 compute-0 podman[484567]: 2025-12-05 02:32:35.251856598 +0000 UTC m=+0.104548984 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 05 02:32:35 compute-0 systemd[1]: libpod-conmon-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Deactivated successfully.
Dec 05 02:32:35 compute-0 podman[484562]: 2025-12-05 02:32:35.266085721 +0000 UTC m=+0.127014346 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, config_id=edpm, release-0.7.12=, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 05 02:32:35 compute-0 podman[484560]: 2025-12-05 02:32:35.274979739 +0000 UTC m=+0.137427058 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:32:35 compute-0 sudo[484408]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:32:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:32:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb0d0580-4915-49b7-9c98-39f52d7a0f8b does not exist
Dec 05 02:32:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ab1c9d76-ce53-44e9-9d71-f4d5d6d65796 does not exist
Dec 05 02:32:35 compute-0 sudo[484629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:32:35 compute-0 sudo[484629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:35 compute-0 sudo[484629]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:35 compute-0 sudo[484654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:32:35 compute-0 sudo[484654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:32:35 compute-0 sudo[484654]: pam_unix(sudo:session): session closed for user root
Dec 05 02:32:35 compute-0 nova_compute[349548]: 2025-12-05 02:32:35.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:36 compute-0 ceph-mon[192914]: pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:32:37 compute-0 sshd-session[484628]: Connection reset by authenticating user root 45.140.17.124 port 47846 [preauth]
Dec 05 02:32:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:38 compute-0 nova_compute[349548]: 2025-12-05 02:32:38.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:38 compute-0 ceph-mon[192914]: pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.330 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.331 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
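The ceilometer burst above is one complete polling cycle: each pollster is registered from the stevedore [pollsters] source onto a shared ThreadPoolExecutor, each discovery method (local_instances here) runs once per cycle and is cached, and because no guests are running on compute-0 the discovery cache stays empty ({'local_instances': []}), so every meter is skipped before any samples are built. A minimal sketch of that register/discover/skip flow, using hypothetical classes rather than ceilometer's real API:

    # Minimal sketch of a ceilometer-style polling cycle (illustrative only;
    # the class and function names here are hypothetical, not ceilometer's API).
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # On this host the hypervisor reports no guests, matching the log.
        return []

    class Pollster:
        def __init__(self, name, discovery="local_instances"):
            self.name = name
            self.discovery = discovery

        def get_samples(self, resources):
            return [{"meter": self.name, "resource": r} for r in resources]

    def run_cycle(pollsters):
        discovery_cache = {}   # shared per cycle, like {'local_instances': []}
        history = {}           # per-meter results, like the pollster history dict

        def poll_one(p):
            # Each discovery method runs at most once per cycle, then is cached.
            if p.discovery not in discovery_cache:
                discovery_cache[p.discovery] = discover_local_instances()
            resources = discovery_cache[p.discovery]
            if not resources:
                print(f"Skip pollster {p.name}, no resources found this cycle")
                history[p.name] = []
                return
            history[p.name] = p.get_samples(resources)

        with ThreadPoolExecutor() as executor:
            for f in [executor.submit(poll_one, p) for p in pollsters]:
                f.result()
        return history

    meters = ["disk.root.size", "disk.device.capacity", "cpu", "memory.usage"]
    run_cycle([Pollster(m) for m in meters])

The out-of-order timestamps inside the burst (a .336 line logged after a .337 line, for example) are a side effect of the same thread pool: workers emit records concurrently, so journald's arrival order need not match the microsecond timestamps.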
Dec 05 02:32:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
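The recurring _set_new_cache_sizes line is the monitor's cache autotuner dividing its memory target between the incremental and full osdmap caches and the RocksDB key/value cache. Converting the byte counts from the line above (the split is whatever the autotuner chose on this node, not a fixed ratio):

    # Values copied from the _set_new_cache_sizes line above.
    GiB = 1024 ** 3
    MiB = 1024 ** 2
    print(f"{1020054731 / GiB:.2f} GiB total target")     # cache_size, ~0.95 GiB
    print(f"{348127232 / MiB:.0f} MiB inc/full osdmap")   # inc_alloc = full_alloc
    print(f"{318767104 / MiB:.0f} MiB kv (RocksDB)")      # kv_alloc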
Dec 05 02:32:38 compute-0 sshd-session[484680]: Connection reset by authenticating user root 45.140.17.124 port 47854 [preauth]
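The sshd-session line is background noise unrelated to the deployment: a client at 45.140.17.124 connected as root and dropped the connection before completing authentication, the usual signature of an internet-wide SSH scan. One way to rank such probes by source address, assuming journalctl is available on the host and sshd logs under the sshd-session identifier shown above:

    # Rank "Connection reset by authenticating user" probes by source IP.
    import re
    import subprocess
    from collections import Counter

    out = subprocess.run(
        ["journalctl", "-t", "sshd-session", "--no-pager", "-o", "cat"],
        capture_output=True, text=True, check=True,
    ).stdout

    hits = Counter(
        m.group(1) for m in re.finditer(
            r"Connection reset by authenticating user \S+ (\S+) port \d+ \[preauth\]",
            out,
        )
    )
    for ip, count in hits.most_common(10):
        print(f"{count:6d}  {ip}")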
Dec 05 02:32:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:40 compute-0 ceph-mon[192914]: pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:40 compute-0 nova_compute[349548]: 2025-12-05 02:32:40.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:42 compute-0 ceph-mon[192914]: pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:43 compute-0 nova_compute[349548]: 2025-12-05 02:32:43.012 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:44 compute-0 ceph-mon[192914]: pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:32:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:32:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:32:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
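The two mon_command dispatches above are the client.openstack key at 192.168.122.10 (the OpenStack storage services) polling capacity: a cluster-wide df plus the quota on the volumes pool, both as JSON. The same queries can be reproduced from any node holding a suitable keyring; a sketch via the ceph CLI (field names as emitted by recent Ceph releases):

    # Reproduce the two monitor queries from the audit log with the ceph CLI.
    # Assumes a keyring with permissions equivalent to client.openstack.
    import json
    import subprocess

    def mon_cmd(*args):
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    df = mon_cmd("df")                                     # {"prefix": "df"}
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], "bytes free cluster-wide")
    print(quota)   # quota_max_bytes / quota_max_objects for the volumes pool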
Dec 05 02:32:45 compute-0 podman[484686]: 2025-12-05 02:32:45.733341581 +0000 UTC m=+0.116824900 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 05 02:32:45 compute-0 podman[484683]: 2025-12-05 02:32:45.741740805 +0000 UTC m=+0.141339972 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:32:45 compute-0 podman[484684]: 2025-12-05 02:32:45.744604398 +0000 UTC m=+0.139912521 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:32:45 compute-0 podman[484685]: 2025-12-05 02:32:45.764379981 +0000 UTC m=+0.149677133 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
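The four podman lines above are the periodic container health checks: each container (openstack_network_exporter, multipathd, node_exporter, ovn_controller) runs the /openstack/healthcheck script declared in its config_data, and podman records health_status=healthy with a failing streak of 0. The same check can be triggered on demand; a sketch, assuming the container names from the log and the State.Health layout of recent podman releases:

    # Run each container's health check once and report the recorded state.
    import json
    import subprocess

    for name in ["openstack_network_exporter", "multipathd",
                 "node_exporter", "ovn_controller"]:
        subprocess.run(["podman", "healthcheck", "run", name], check=False)
        health = json.loads(subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout)[0]["State"]["Health"]
        print(name, health["Status"], "failing streak:", health["FailingStreak"])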
Dec 05 02:32:45 compute-0 nova_compute[349548]: 2025-12-05 02:32:45.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:46 compute-0 ceph-mon[192914]: pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:32:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:32:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:48 compute-0 nova_compute[349548]: 2025-12-05 02:32:48.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:48 compute-0 ceph-mon[192914]: pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.664365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968664477, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1077, "num_deletes": 257, "total_data_size": 1592659, "memory_usage": 1622944, "flush_reason": "Manual Compaction"}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968684293, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1555889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51004, "largest_seqno": 52080, "table_properties": {"data_size": 1550614, "index_size": 2735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11064, "raw_average_key_size": 19, "raw_value_size": 1540061, "raw_average_value_size": 2687, "num_data_blocks": 123, "num_entries": 573, "num_filter_entries": 573, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901866, "oldest_key_time": 1764901866, "file_creation_time": 1764901968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 20043 microseconds, and 10974 cpu microseconds.
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.684405) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1555889 bytes OK
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.684440) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687100) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687124) EVENT_LOG_v1 {"time_micros": 1764901968687117, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1587599, prev total WAL file size 1587599, number of live WAL files 2.
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.688544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1519KB)], [122(7441KB)]
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968688616, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9175753, "oldest_snapshot_seqno": -1}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6560 keys, 9064759 bytes, temperature: kUnknown
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968760199, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9064759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9022938, "index_size": 24301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172020, "raw_average_key_size": 26, "raw_value_size": 8906439, "raw_average_value_size": 1357, "num_data_blocks": 966, "num_entries": 6560, "num_filter_entries": 6560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.760524) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9064759 bytes
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.763388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.0 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 7086, records dropped: 526 output_compression: NoCompression
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.763419) EVENT_LOG_v1 {"time_micros": 1764901968763405, "job": 74, "event": "compaction_finished", "compaction_time_micros": 71680, "compaction_time_cpu_micros": 44650, "output_level": 6, "num_output_files": 1, "total_output_size": 9064759, "num_input_records": 7086, "num_output_records": 6560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968764152, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968767421, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.688326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
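Jobs 73 and 74 above trace one manual compaction cycle in the mon's RocksDB store: flush memtable #1 to L0 table #124, merge it with the existing L6 table #122 into table #125, then delete both inputs. The amplification figures in the "compacted to:" line follow directly from the logged byte counts; a minimal check, assuming RocksDB's usual definitions (write-amplify = bytes written / bytes read from non-output levels):

    # Reproduce JOB 74's amplification figures from the exact byte counts logged
    # above (EVENT_LOG_v1 input_data_size plus the two table_file_creation events).
    l0_in = 1_555_889                  # table #124, read from L0 (non-output level)
    l6_in = 9_175_753 - l0_in          # table #122, read from the output level (L6)
    out = 9_064_759                    # table #125, written to L6

    write_amplify = out / l0_in                          # logged as 5.8
    read_write_amplify = (l0_in + l6_in + out) / l0_in   # logged as 11.7
    print(f"write-amplify={write_amplify:.1f} "
          f"read-write-amplify={read_write_amplify:.1f}")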
Dec 05 02:32:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:49 compute-0 ceph-mon[192914]: pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:50 compute-0 nova_compute[349548]: 2025-12-05 02:32:50.790 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:52 compute-0 ceph-mon[192914]: pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:53 compute-0 nova_compute[349548]: 2025-12-05 02:32:53.022 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:54 compute-0 ceph-mon[192914]: pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:55 compute-0 nova_compute[349548]: 2025-12-05 02:32:55.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:56 compute-0 ceph-mon[192914]: pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.236 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.236 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
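The three lockutils lines above are the standard trace of one synchronized call: acquire requested, acquired after a 0.001s wait, released after holding for under a millisecond. A minimal sketch of the pattern that produces them, assuming plain oslo_concurrency with the lock name taken from the log:

    from oslo_concurrency import lockutils

    # The decorator emits the "Acquiring lock ...", "Lock ... acquired :: waited Ns",
    # and "Lock ... released :: held Ns" DEBUG lines seen above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # runs with the named in-process lock held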
Dec 05 02:32:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.145 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:32:58 compute-0 nova_compute[349548]: 2025-12-05 02:32:58.023 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:32:58 compute-0 ceph-mon[192914]: pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:32:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:32:59 compute-0 podman[158197]: time="2025-12-05T02:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
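These podman[158197] lines are the libpod REST service answering a local client (the prometheus-podman-exporter configured later in this journal) over a unix socket. A sketch of issuing the same containers/json query from Python's standard library; the socket path is taken from the podman_exporter CONTAINER_HOST setting below, everything else is stock http.client:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the libpod service has no TCP port here."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers))  # the journal shows this reply weighing in at 42580 bytes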
Dec 05 02:33:00 compute-0 ceph-mon[192914]: pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:00 compute-0 podman[484772]: 2025-12-05 02:33:00.711366012 +0000 UTC m=+0.116207882 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 05 02:33:00 compute-0 podman[484773]: 2025-12-05 02:33:00.738741586 +0000 UTC m=+0.135044869 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:33:00 compute-0 nova_compute[349548]: 2025-12-05 02:33:00.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Dec 05 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:33:01 compute-0 nova_compute[349548]: 2025-12-05 02:33:01.483 349552 DEBUG oslo_concurrency.processutils [None req-f776971b-def7-42eb-8170-c70550c5a615 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:33:01 compute-0 nova_compute[349548]: 2025-12-05 02:33:01.524 349552 DEBUG oslo_concurrency.processutils [None req-f776971b-def7-42eb-8170-c70550c5a615 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.101 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.102 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:33:02 compute-0 ceph-mon[192914]: pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:33:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866606673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.578 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
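Nova's update_available_resource audit shells out to ceph df (twice per pass, as the mon audit lines confirm) to size the RBD-backed disk inventory. A sketch of the same probe; the command line is copied from the log, while the "stats"/"total_avail_bytes" field names are assumptions about ceph's JSON formatter:

    import json
    import subprocess

    # Same command the resource tracker logs above; needs the client.openstack
    # keyring referenced by /etc/ceph/ceph.conf.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    stats = df["stats"]  # assumed layout: cluster-wide totals under "stats"
    print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1))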
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.026 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.083 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.084 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3929MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.085 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.085 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.157 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.158 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:33:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1866606673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.187 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:33:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:33:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139789969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.693 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.705 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.727 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.730 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.730 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
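The inventory dict in the report above fixes the node's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. Applied to the logged values:

    # Effective capacity per resource class, from the inventory dict logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2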
Dec 05 02:33:04 compute-0 ceph-mon[192914]: pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4139789969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.732 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.733 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.733 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:33:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:05 compute-0 podman[484861]: 2025-12-05 02:33:05.724807424 +0000 UTC m=+0.118975043 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:33:05 compute-0 podman[484860]: 2025-12-05 02:33:05.727380029 +0000 UTC m=+0.127612444 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:33:05 compute-0 podman[484859]: 2025-12-05 02:33:05.753794295 +0000 UTC m=+0.161470216 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec 05 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:06 compute-0 nova_compute[349548]: 2025-12-05 02:33:06.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:06 compute-0 ceph-mon[192914]: pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.029 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:08 compute-0 ceph-mon[192914]: pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:08.880 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 05 02:33:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:08.882 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 05 02:33:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:09 compute-0 nova_compute[349548]: 2025-12-05 02:33:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:10 compute-0 ceph-mon[192914]: pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:10 compute-0 nova_compute[349548]: 2025-12-05 02:33:10.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:12 compute-0 ceph-mon[192914]: pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:13 compute-0 nova_compute[349548]: 2025-12-05 02:33:13.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:13.884 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
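This DbSetCommand closes the loop opened at 02:33:08: the agent saw SB_Global.nb_cfg move to 20, waited its advertised 5 seconds, and acknowledged by stamping neutron:ovn-metadata-sb-cfg=20 onto its Chassis_Private row. A sketch of the same write through ovsdbapp; the unix-socket endpoint is a placeholder (this deployment actually reaches the SB DB over TLS, per the ovn certificate mounts elsewhere in the journal), and the record UUID is the one in the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Placeholder endpoint; substitute the real southbound DB address.
    idl = connection.OvsdbIdl.from_server("unix:/run/ovn/ovnsb_db.sock",
                                          "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', '8dd76c1c-ab01-42af-b35e-2e870841b6ad',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'})))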
Dec 05 02:33:14 compute-0 ceph-mon[192914]: pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:15 compute-0 nova_compute[349548]: 2025-12-05 02:33:15.813 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:16 compute-0 ceph-mon[192914]: pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:33:16
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'volumes', 'vms', 'images']
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:16 compute-0 podman[484917]: 2025-12-05 02:33:16.719466852 +0000 UTC m=+0.115794611 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:33:16 compute-0 podman[484919]: 2025-12-05 02:33:16.736041893 +0000 UTC m=+0.117863581 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, release=1755695350, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter)
Dec 05 02:33:16 compute-0 podman[484916]: 2025-12-05 02:33:16.738749821 +0000 UTC m=+0.139211030 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 05 02:33:16 compute-0 podman[484918]: 2025-12-05 02:33:16.781581894 +0000 UTC m=+0.171419905 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:33:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:18 compute-0 nova_compute[349548]: 2025-12-05 02:33:18.036 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:18 compute-0 ceph-mon[192914]: pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:33:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:20 compute-0 ceph-mon[192914]: pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:20 compute-0 nova_compute[349548]: 2025-12-05 02:33:20.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:22 compute-0 ceph-mon[192914]: pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:23 compute-0 nova_compute[349548]: 2025-12-05 02:33:23.040 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:24 compute-0 ceph-mon[192914]: pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:25 compute-0 nova_compute[349548]: 2025-12-05 02:33:25.823 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:26 compute-0 ceph-mon[192914]: pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
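The _maybe_adjust pass above is the pg_autoscaler recomputing each pool's PG target: fraction of raw capacity used, times the pool's bias, times a cluster-wide PG budget, then rounded to a power of two and clamped to a per-pool floor. The logged numbers are consistent with a budget of 300 PGs (e.g. for 'images', 0.0009191400908380543 x 1.0 x 300 = 0.2757420272514163, which still rises to that pool's floor of 32). A minimal sketch of that arithmetic in Python, assuming the 300-PG budget (3 OSDs x mon_target_pg_per_osd = 100) and treating the "quantized to" values as per-pool minimums not visible in this log; this is an inference from these lines, not the mgr module's actual code:

# Hedged sketch of the autoscaler arithmetic visible in the lines above.
# Assumptions: pg_budget = 3 OSDs * mon_target_pg_per_osd (100) = 300, and
# pg_min models each pool's floor (1 for '.mgr', 16 for 'cephfs.cephfs.meta',
# 32 elsewhere), which this log does not show directly.
def pg_target(usage_ratio, bias, pg_budget=300, pg_min=32):
    raw = usage_ratio * bias * pg_budget
    n = 1
    while n < raw:                      # smallest power of two >= raw
        n *= 2
    if n > 1 and (n - raw) > (raw - n // 2):
        n //= 2                         # round down if the lower power is nearer
    return max(n, pg_min)

print(pg_target(0.0009191400908380543, 1.0))             # 'images'      -> 32
print(pg_target(7.185749983720779e-06, 1.0, pg_min=1))   # '.mgr'        -> 1
print(pg_target(5.087256625643029e-07, 4.0, pg_min=16))  # cephfs meta   -> 16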
Dec 05 02:33:28 compute-0 nova_compute[349548]: 2025-12-05 02:33:28.043 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:28 compute-0 ceph-mon[192914]: pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:29 compute-0 podman[158197]: time="2025-12-05T02:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
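The two GET lines above are the libpod REST API being polled over podman's unix socket (the podman_exporter container further down mounts /run/podman/podman.sock for exactly this). A minimal sketch of the same containers/json call, assuming that socket path and the requests-unixsocket package; the endpoint and query string are taken from the log line itself:

# Hedged sketch: replay the libpod 'list containers' call logged above.
# Assumes /run/podman/podman.sock and the requests-unixsocket package.
import urllib.parse
import requests_unixsocket

session = requests_unixsocket.Session()
sock = urllib.parse.quote("/run/podman/podman.sock", safe="")
url = ("http+unix://" + sock +
       "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = session.get(url)
resp.raise_for_status()
for c in resp.json():                     # one dict per container
    print(c["Id"][:12], c["Names"][0], c["State"])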
Dec 05 02:33:30 compute-0 ceph-mon[192914]: pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:30 compute-0 nova_compute[349548]: 2025-12-05 02:33:30.828 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:33:31 compute-0 podman[485005]: 2025-12-05 02:33:31.725211321 +0000 UTC m=+0.130385624 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:33:31 compute-0 podman[485006]: 2025-12-05 02:33:31.729702011 +0000 UTC m=+0.127028906 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:33:32 compute-0 ceph-mon[192914]: pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:33 compute-0 nova_compute[349548]: 2025-12-05 02:33:33.047 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:34 compute-0 ceph-mon[192914]: pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:35 compute-0 sudo[485044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:35 compute-0 sudo[485044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:35 compute-0 sudo[485044]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:35 compute-0 nova_compute[349548]: 2025-12-05 02:33:35.834 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:35 compute-0 sudo[485069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:33:35 compute-0 sudo[485069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:35 compute-0 sudo[485069]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:36 compute-0 podman[485095]: 2025-12-05 02:33:36.031232039 +0000 UTC m=+0.108949272 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec 05 02:33:36 compute-0 sudo[485113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:36 compute-0 sudo[485113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:36 compute-0 sudo[485113]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:36 compute-0 podman[485093]: 2025-12-05 02:33:36.045657588 +0000 UTC m=+0.133128054 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec 05 02:33:36 compute-0 podman[485094]: 2025-12-05 02:33:36.045769781 +0000 UTC m=+0.123149134 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, managed_by=edpm_ansible, name=ubi9)
Dec 05 02:33:36 compute-0 sudo[485170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:33:36 compute-0 sudo[485170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:36 compute-0 ceph-mon[192914]: pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:36 compute-0 sudo[485170]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d59dcbc8-1e0a-4f2c-945e-6f5046236e90 does not exist
Dec 05 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 380cf9b3-c9a9-4977-837e-ac52da1f78a2 does not exist
Dec 05 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c8b1d7a4-987f-488c-99e9-1cb434461c36 does not exist
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:33:37 compute-0 sudo[485225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:37 compute-0 sudo[485225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:37 compute-0 sudo[485225]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:37 compute-0 sudo[485250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:33:37 compute-0 sudo[485250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:37 compute-0 sudo[485250]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:37 compute-0 sudo[485275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:37 compute-0 sudo[485275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:37 compute-0 sudo[485275]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:37 compute-0 sudo[485300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:33:37 compute-0 sudo[485300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:33:38 compute-0 nova_compute[349548]: 2025-12-05 02:33:38.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.052747197 +0000 UTC m=+0.096014846 container create e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.01525744 +0000 UTC m=+0.058525129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:38 compute-0 systemd[1]: Started libpod-conmon-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope.
Dec 05 02:33:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.223057888 +0000 UTC m=+0.266325577 container init e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.239873876 +0000 UTC m=+0.283141525 container start e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.246148208 +0000 UTC m=+0.289415857 container attach e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:33:38 compute-0 youthful_jennings[485378]: 167 167
Dec 05 02:33:38 compute-0 systemd[1]: libpod-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope: Deactivated successfully.
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.255145509 +0000 UTC m=+0.298413148 container died e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 02:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f719ce65fd267afd0cee13e5bfb285715db20374966b33af131c9109bc7ab8ce-merged.mount: Deactivated successfully.
Dec 05 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.350260149 +0000 UTC m=+0.393527758 container remove e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:33:38 compute-0 systemd[1]: libpod-conmon-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope: Deactivated successfully.
Dec 05 02:33:38 compute-0 ceph-mon[192914]: pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.605727231 +0000 UTC m=+0.096740208 container create ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.572202328 +0000 UTC m=+0.063215355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:38 compute-0 systemd[1]: Started libpod-conmon-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope.
Dec 05 02:33:38 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.800189022 +0000 UTC m=+0.291201999 container init ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.825574389 +0000 UTC m=+0.316587356 container start ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.832000115 +0000 UTC m=+0.323013082 container attach ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:33:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:39 compute-0 peaceful_maxwell[485417]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:33:39 compute-0 peaceful_maxwell[485417]: --> relative data size: 1.0
Dec 05 02:33:39 compute-0 peaceful_maxwell[485417]: --> All data devices are unavailable
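ceph-volume declines to act here because the three LVs handed to 'lvm batch' already carry ceph.* LV tags from the existing OSDs (the 'lvm list' JSON below shows osd_id 0-2 on exactly these LVs), so from its point of view no data devices are available. A minimal sketch of that kind of availability test via the lvs(8) JSON report; this illustrates the tag check only and is not ceph-volume's actual implementation:

# Hedged sketch: an LV whose lv_tags already include ceph.osd_id= is
# consumed by an existing OSD, which is why 'lvm batch' skips all three.
import json
import subprocess

def consumed_lvs():
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        check=True, capture_output=True, text=True,
    ).stdout
    lvs = json.loads(out)["report"][0]["lv"]
    return [lv["lv_path"] for lv in lvs if "ceph.osd_id=" in lv["lv_tags"]]

print(consumed_lvs())   # expect the three /dev/ceph_vg*/ceph_lv* paths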
Dec 05 02:33:40 compute-0 systemd[1]: libpod-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Deactivated successfully.
Dec 05 02:33:40 compute-0 systemd[1]: libpod-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Consumed 1.122s CPU time.
Dec 05 02:33:40 compute-0 podman[485401]: 2025-12-05 02:33:40.003696478 +0000 UTC m=+1.494709645 container died ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5-merged.mount: Deactivated successfully.
Dec 05 02:33:40 compute-0 podman[485401]: 2025-12-05 02:33:40.103316148 +0000 UTC m=+1.594329115 container remove ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:33:40 compute-0 systemd[1]: libpod-conmon-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Deactivated successfully.
Dec 05 02:33:40 compute-0 sudo[485300]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:40 compute-0 sudo[485459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:40 compute-0 sudo[485459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:40 compute-0 sudo[485459]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:40 compute-0 sudo[485484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:33:40 compute-0 sudo[485484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:40 compute-0 sudo[485484]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:40 compute-0 ceph-mon[192914]: pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:40 compute-0 sudo[485509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:40 compute-0 sudo[485509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:40 compute-0 sudo[485509]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:40 compute-0 sudo[485534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:33:40 compute-0 sudo[485534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:40 compute-0 nova_compute[349548]: 2025-12-05 02:33:40.840 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.346268309 +0000 UTC m=+0.116817790 container create bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.292493809 +0000 UTC m=+0.063043340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:41 compute-0 systemd[1]: Started libpod-conmon-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope.
Dec 05 02:33:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.497937949 +0000 UTC m=+0.268487490 container init bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.518385003 +0000 UTC m=+0.288934494 container start bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.525264892 +0000 UTC m=+0.295814373 container attach bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 05 02:33:41 compute-0 recursing_chaplygin[485611]: 167 167
Dec 05 02:33:41 compute-0 systemd[1]: libpod-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope: Deactivated successfully.
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.532000068 +0000 UTC m=+0.302549569 container died bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:33:41 compute-0 ceph-mon[192914]: pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf66bf10d92bb219200e607eb03a3d6c1a789755c3d97adc7428487b864f3db-merged.mount: Deactivated successfully.
Dec 05 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.611607817 +0000 UTC m=+0.382157298 container remove bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:33:41 compute-0 systemd[1]: libpod-conmon-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope: Deactivated successfully.
Dec 05 02:33:41 compute-0 podman[485637]: 2025-12-05 02:33:41.859320214 +0000 UTC m=+0.072550966 container create 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 05 02:33:41 compute-0 podman[485637]: 2025-12-05 02:33:41.834313398 +0000 UTC m=+0.047544161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:41 compute-0 systemd[1]: Started libpod-conmon-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope.
Dec 05 02:33:41 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.013548769 +0000 UTC m=+0.226779551 container init 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 05 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.030558972 +0000 UTC m=+0.243789714 container start 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.03738805 +0000 UTC m=+0.250618822 container attach 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
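The container output that follows is the JSON from the 'ceph-volume lvm list --format json' run sudo'd at 02:33:40: a map of OSD id to the LVs backing it, with the ceph.* tags present both as a flat string (lv_tags) and as a dict (tags). A minimal parsing sketch, assuming the JSON has been captured to a file (lvm_list.json is a hypothetical name):

# Hedged sketch: map OSD id -> LV path / backing device / OSD fsid from
# the 'ceph-volume lvm list --format json' output printed below.
import json

with open("lvm_list.json") as f:          # assumed capture of the JSON below
    inventory = json.load(f)

for osd_id, entries in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in entries:
        print(osd_id, lv["lv_path"], lv["devices"][0],
              lv["tags"]["ceph.osd_fsid"])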
Dec 05 02:33:42 compute-0 practical_lamport[485653]: {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     "0": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "devices": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "/dev/loop3"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             ],
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_name": "ceph_lv0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_size": "21470642176",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "name": "ceph_lv0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "tags": {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_name": "ceph",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.crush_device_class": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.encrypted": "0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_id": "0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.vdo": "0"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             },
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "vg_name": "ceph_vg0"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         }
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     ],
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     "1": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "devices": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "/dev/loop4"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             ],
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_name": "ceph_lv1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_size": "21470642176",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "name": "ceph_lv1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "tags": {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_name": "ceph",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.crush_device_class": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.encrypted": "0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_id": "1",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.vdo": "0"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             },
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "vg_name": "ceph_vg1"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         }
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     ],
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     "2": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "devices": [
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "/dev/loop5"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             ],
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_name": "ceph_lv2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_size": "21470642176",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "name": "ceph_lv2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "tags": {
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.cluster_name": "ceph",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.crush_device_class": "",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.encrypted": "0",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osd_id": "2",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:                 "ceph.vdo": "0"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             },
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "type": "block",
Dec 05 02:33:42 compute-0 practical_lamport[485653]:             "vg_name": "ceph_vg2"
Dec 05 02:33:42 compute-0 practical_lamport[485653]:         }
Dec 05 02:33:42 compute-0 practical_lamport[485653]:     ]
Dec 05 02:33:42 compute-0 practical_lamport[485653]: }
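The JSON printed by the practical_lamport container above is `ceph-volume lvm list --format json` output, keyed by OSD id, with the LV tags carrying the cluster/OSD identity. A minimal parsing sketch, assuming the payload was captured to a file named lvm_list.json (the filename is illustrative):

    import json

    # Parse the `ceph-volume lvm list --format json` payload shown above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For the listing above this prints three lines, e.g. "osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 ...".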
Dec 05 02:33:42 compute-0 systemd[1]: libpod-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope: Deactivated successfully.
Dec 05 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.852613462 +0000 UTC m=+1.065844214 container died 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 05 02:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4-merged.mount: Deactivated successfully.
Dec 05 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.960305136 +0000 UTC m=+1.173535858 container remove 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:33:42 compute-0 systemd[1]: libpod-conmon-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope: Deactivated successfully.
Dec 05 02:33:43 compute-0 sudo[485534]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:43 compute-0 nova_compute[349548]: 2025-12-05 02:33:43.053 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:43 compute-0 sudo[485673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:43 compute-0 sudo[485673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:43 compute-0 sudo[485673]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:43 compute-0 sudo[485698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:33:43 compute-0 sudo[485698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:43 compute-0 sudo[485698]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:43 compute-0 sudo[485723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:43 compute-0 sudo[485723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:43 compute-0 sudo[485723]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:43 compute-0 sudo[485748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:33:43 compute-0 sudo[485748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.072171463 +0000 UTC m=+0.071455014 container create aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.035614663 +0000 UTC m=+0.034898274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:44 compute-0 ceph-mon[192914]: pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:44 compute-0 systemd[1]: Started libpod-conmon-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope.
Dec 05 02:33:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.206231583 +0000 UTC m=+0.205515174 container init aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.219963971 +0000 UTC m=+0.219247532 container start aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.226106109 +0000 UTC m=+0.225389720 container attach aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 05 02:33:44 compute-0 reverent_khorana[485824]: 167 167
Dec 05 02:33:44 compute-0 systemd[1]: libpod-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope: Deactivated successfully.
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.231510066 +0000 UTC m=+0.230793617 container died aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f265eef36bac6a84777f97d4ecea8c308f4828d7b9880be86566e165edef4f5d-merged.mount: Deactivated successfully.
Dec 05 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.300650232 +0000 UTC m=+0.299933793 container remove aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:33:44 compute-0 systemd[1]: libpod-conmon-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope: Deactivated successfully.
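The single line "167 167" emitted by the short-lived reverent_khorana container is consistent with cephadm probing the image for the ceph user's uid/gid (167:167 in the upstream Ceph container). A hedged sketch of an equivalent probe, using the image digest from the log; the stat-based entrypoint here is an assumption, not necessarily cephadm's exact mechanism:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a throwaway container that prints the owner of /var/lib/ceph.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = map(int, out.stdout.split())
    print(uid, gid)  # expected: 167 167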
Dec 05 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.592839769 +0000 UTC m=+0.093738100 container create 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 05 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.557663389 +0000 UTC m=+0.058561780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:33:44 compute-0 systemd[1]: Started libpod-conmon-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope.
Dec 05 02:33:44 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.799448703 +0000 UTC m=+0.300347044 container init 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.822932375 +0000 UTC m=+0.323830716 container start 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 05 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.829694961 +0000 UTC m=+0.330593302 container attach 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:33:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:33:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:33:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:33:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:33:45 compute-0 nova_compute[349548]: 2025-12-05 02:33:45.846 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:45 compute-0 distracted_hoover[485862]: {
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_id": 0,
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "type": "bluestore"
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     },
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_id": 1,
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "type": "bluestore"
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     },
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_id": 2,
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:         "type": "bluestore"
Dec 05 02:33:45 compute-0 distracted_hoover[485862]:     }
Dec 05 02:33:45 compute-0 distracted_hoover[485862]: }
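The distracted_hoover output above is `ceph-volume raw list --format json` (the exact invocation is visible in the sudo line for cephadm earlier), keyed by osd_uuid rather than osd_id. A sketch that cross-checks it against the earlier lvm listing, assuming both payloads were saved to files with illustrative names:

    import json

    with open("raw_list.json") as f:   # distracted_hoover output
        raw = json.load(f)
    with open("lvm_list.json") as f:   # practical_lamport output
        lvm = json.load(f)

    # Every osd_fsid tagged on an LV should appear as an osd_uuid key
    # in the raw listing.
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}
    assert lvm_fsids == set(raw), "raw/lvm listings disagree"

    for uuid, entry in raw.items():
        print(f"osd.{entry['osd_id']} ({entry['type']}): "
              f"{entry['device']} fsid={entry['ceph_fsid']}")

For the data above this confirms OSDs 0-2 are bluestore on /dev/mapper/ceph_vg{0,1,2}-ceph_lv{0,1,2}, all in cluster cbd280d3-cbd8-528b-ace6-2b3a887cdcee.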
Dec 05 02:33:45 compute-0 systemd[1]: libpod-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Deactivated successfully.
Dec 05 02:33:45 compute-0 systemd[1]: libpod-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Consumed 1.139s CPU time.
Dec 05 02:33:46 compute-0 podman[485896]: 2025-12-05 02:33:46.041290452 +0000 UTC m=+0.059467707 container died 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef-merged.mount: Deactivated successfully.
Dec 05 02:33:46 compute-0 podman[485896]: 2025-12-05 02:33:46.135005261 +0000 UTC m=+0.153182476 container remove 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 05 02:33:46 compute-0 ceph-mon[192914]: pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:33:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:33:46 compute-0 systemd[1]: libpod-conmon-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Deactivated successfully.
Dec 05 02:33:46 compute-0 sudo[485748]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:33:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:33:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 68dbbce2-2ee6-496b-9963-df32038e75f3 does not exist
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 884a5939-91a3-41cb-84af-49f649b26c39 does not exist
Dec 05 02:33:46 compute-0 sudo[485909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:46 compute-0 sudo[485909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:46 compute-0 sudo[485909]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:46 compute-0 sudo[485934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:33:46 compute-0 sudo[485934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:33:46 compute-0 sudo[485934]: pam_unix(sudo:session): session closed for user root
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:33:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:33:47 compute-0 podman[485959]: 2025-12-05 02:33:47.72221194 +0000 UTC m=+0.125096680 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:33:47 compute-0 podman[485960]: 2025-12-05 02:33:47.746547866 +0000 UTC m=+0.145253625 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:33:47 compute-0 podman[485962]: 2025-12-05 02:33:47.751033356 +0000 UTC m=+0.139512388 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 05 02:33:47 compute-0 podman[485961]: 2025-12-05 02:33:47.774455336 +0000 UTC m=+0.169663284 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
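The four health_status=healthy events above come from podman's periodic healthcheck timers; each container's check is the /openstack/healthcheck script bind-mounted per its config_data. The same checks can be run on demand with `podman healthcheck run`, which exits 0 when the check passes. A small sketch using the container names from the log:

    import subprocess

    for name in ("multipathd", "node_exporter",
                 "openstack_network_exporter", "ovn_controller"):
        # Exit code 0 means the configured healthcheck passed.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
        print(f"{name}: {status}")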
Dec 05 02:33:48 compute-0 nova_compute[349548]: 2025-12-05 02:33:48.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:48 compute-0 ceph-mon[192914]: pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:50 compute-0 ceph-mon[192914]: pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:50 compute-0 nova_compute[349548]: 2025-12-05 02:33:50.853 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:52 compute-0 ceph-mon[192914]: pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:53 compute-0 nova_compute[349548]: 2025-12-05 02:33:53.062 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:54 compute-0 ceph-mon[192914]: pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:55 compute-0 nova_compute[349548]: 2025-12-05 02:33:55.859 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:33:56 compute-0 ceph-mon[192914]: pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:58 compute-0 nova_compute[349548]: 2025-12-05 02:33:58.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:33:58 compute-0 ceph-mon[192914]: pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:33:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:33:59 compute-0 podman[158197]: time="2025-12-05T02:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec 05 02:34:00 compute-0 ceph-mon[192914]: pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:00 compute-0 nova_compute[349548]: 2025-12-05 02:34:00.864 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.099 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:34:02 compute-0 ceph-mon[192914]: pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:34:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223263005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.676 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
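Nova's resource audit shells out to ceph to size the RBD-backed disk pool; the exact command is in the log line above. A sketch of the same call and the cluster-level fields it returns (requires the client.openstack keyring, as on this host):

    import json
    import subprocess

    # Exact command from the journal.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)["stats"]
    total_gb = stats["total_bytes"] / 2**30
    avail_gb = stats["total_avail_bytes"] / 2**30
    print(f"cluster: {avail_gb:.1f} GiB free of {total_gb:.1f} GiB")

Against the pgmap lines above this should report roughly 60 GiB total with nearly all of it available.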
Dec 05 02:34:02 compute-0 podman[486065]: 2025-12-05 02:34:02.699123876 +0000 UTC m=+0.105107551 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:34:02 compute-0 podman[486064]: 2025-12-05 02:34:02.715163971 +0000 UTC m=+0.136306725 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.069 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.141 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.142 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3906MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.142 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.143 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
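Note: the "Acquiring lock" / "acquired" pair above, together with the matching "released ... held 2.335s" at 02:34:05.478, is oslo.concurrency's named-semaphore decorator at work. A minimal sketch of that pattern (illustrative, not nova's actual code):

```python
# Sketch of the oslo.concurrency pattern that produces the
# Acquiring/acquired/released DEBUG lines above: the whole resource
# update runs under one named semaphore, "compute_resources".
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # stand-in for ResourceTracker._update_available_resource(); the
    # lock is held until this returns (2.335 s in this log)
    pass

update_available_resource()
```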
Dec 05 02:34:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4223263005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:34:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.969 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.970 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:34:04 compute-0 ceph-mon[192914]: pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.451 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.866 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.867 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
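Note: the inventory payload above is what sizes this resource provider in placement; usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the exact figures from this log:

```python
# Worked check of the provider inventory logged above: placement's
# usable capacity per resource class is (total - reserved) * ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0 (8 cores oversubscribed 4x), MEMORY_MB 7168.0, DISK_GB 52.2
# -- the schedulable envelope for compute-0
```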
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.889 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.938 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 05 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.962 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:34:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:34:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1365559343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.417 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
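Note: the "ceph df" subprocess above (spawned through oslo_concurrency.processutils, exit 0 in 0.455s) is how the compute service samples Ceph pool capacity. A standalone sketch of the same probe, assuming this host's ceph CLI, the client.openstack keyring, and /etc/ceph/ceph.conf:

```python
# Re-run the capacity probe logged above and pull the cluster totals
# out of its JSON (the real caller goes through
# oslo_concurrency.processutils.execute; plain subprocess is
# equivalent for illustration).
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
stats = json.loads(out)['stats']
print(stats['total_bytes'], stats['total_avail_bytes'])
```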
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.430 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:34:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1365559343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.474 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.477 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.478 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.871 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:06 compute-0 ceph-mon[192914]: pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.480 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.481 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.481 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.482 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:34:06 compute-0 podman[486130]: 2025-12-05 02:34:06.720775874 +0000 UTC m=+0.123375491 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 05 02:34:06 compute-0 podman[486129]: 2025-12-05 02:34:06.764927325 +0000 UTC m=+0.167322336 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 05 02:34:06 compute-0 podman[486131]: 2025-12-05 02:34:06.784664997 +0000 UTC m=+0.175445691 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
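Note: each podman health_status event above comes from the container's periodic healthcheck, which config_data defines as a mounted script directory plus a test command. A sketch of how such a block plausibly maps onto podman flags (the exact edpm_ansible translation is an assumption; the flags themselves are standard podman options):

```python
# Build (not run) a podman command from an edpm-style healthcheck block,
# using the ceilometer_agent_ipmi values logged above. Hypothetical
# mapping for illustration; edpm_ansible's real translation may differ.
config_data = {
    'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified',
    'healthcheck': {
        'test': '/openstack/healthcheck ipmi',
        'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi',
    },
}
cmd = [
    'podman', 'run', '--detach',
    # the healthcheck dir is bind-mounted read-only at /openstack ...
    '--volume', f"{config_data['healthcheck']['mount']}:/openstack:ro,z",
    # ... and its script becomes the health command podman re-runs on
    # its default 30 s interval, matching the ~30 s cadence of these
    # health_status events
    '--health-cmd', config_data['healthcheck']['test'],
    config_data['image'],
]
print(' '.join(cmd))
```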
Dec 05 02:34:07 compute-0 nova_compute[349548]: 2025-12-05 02:34:07.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.073 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:08 compute-0 ceph-mon[192914]: pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:10 compute-0 ceph-mon[192914]: pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:10 compute-0 nova_compute[349548]: 2025-12-05 02:34:10.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:11 compute-0 nova_compute[349548]: 2025-12-05 02:34:11.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:11 compute-0 nova_compute[349548]: 2025-12-05 02:34:11.085 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:34:12 compute-0 ceph-mon[192914]: pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:13 compute-0 nova_compute[349548]: 2025-12-05 02:34:13.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:13 compute-0 ceph-mon[192914]: pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:15 compute-0 nova_compute[349548]: 2025-12-05 02:34:15.882 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:16 compute-0 ceph-mon[192914]: pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:34:16
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', '.mgr']
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:18 compute-0 nova_compute[349548]: 2025-12-05 02:34:18.079 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:18 compute-0 ceph-mon[192914]: pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:34:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:18 compute-0 podman[486186]: 2025-12-05 02:34:18.731079388 +0000 UTC m=+0.134514604 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:34:18 compute-0 podman[486192]: 2025-12-05 02:34:18.737299548 +0000 UTC m=+0.113220206 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1755695350, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec 05 02:34:18 compute-0 podman[486187]: 2025-12-05 02:34:18.756738442 +0000 UTC m=+0.147022946 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:34:18 compute-0 podman[486188]: 2025-12-05 02:34:18.776402553 +0000 UTC m=+0.161848447 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 05 02:34:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:20 compute-0 ceph-mon[192914]: pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:20 compute-0 nova_compute[349548]: 2025-12-05 02:34:20.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:22 compute-0 ceph-mon[192914]: pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2909 syncs, 3.56 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 340 writes, 834 keys, 340 commit groups, 1.0 writes per commit group, ingest: 0.30 MB, 0.00 MB/s
                                            Interval WAL: 340 writes, 160 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
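Note: the rocksdb "DB Stats" dumps in this log are internally consistent; "writes per sync" is WAL writes divided by syncs, and the MB/s figures are ingest divided by the 600 s interval. A quick check with the exact interval numbers from the block above:

```python
# Plain-arithmetic consistency check of the DB Stats block above.
interval_wal_writes, interval_wal_syncs = 340, 160
print(interval_wal_writes / interval_wal_syncs)  # 2.125 -> logged "2.12"
print(0.30 / 600.0)  # 0.0005 MB/s ingest rate   -> logged "0.00 MB/s"
```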
Dec 05 02:34:23 compute-0 nova_compute[349548]: 2025-12-05 02:34:23.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:24 compute-0 ceph-mon[192914]: pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:25 compute-0 nova_compute[349548]: 2025-12-05 02:34:25.891 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:26 compute-0 ceph-mon[192914]: pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
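Note: every pg_autoscaler line above follows one formula: pg_target = capacity_ratio * bias * PG budget, then quantized (subject to per-pool minimums) to the counts shown. The budget itself is not printed, but 300 reproduces every line exactly, which is consistent with the default mon_target_pg_per_osd=100 across 3 OSDs (an inference from this log, not a logged value):

```python
# Reproduce three of the pg_autoscaler targets logged above.
pg_budget = 100 * 3   # assumed: mon_target_pg_per_osd=100, 3 OSDs

pools = {   # name: (capacity_ratio "using ... of space", bias)
    '.mgr':               (7.185749983720779e-06, 1.0),
    'images':             (0.0009191400908380543, 1.0),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
}
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * pg_budget)
# 0.0021557..., 0.2757420..., 0.0006104... -- matching "pg target" above
```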
Dec 05 02:34:28 compute-0 nova_compute[349548]: 2025-12-05 02:34:28.084 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:28 compute-0 ceph-mon[192914]: pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.2 total, 600.0 interval
                                            Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                            Cumulative WAL: 12K writes, 3420 syncs, 3.58 writes per sync, written: 0.04 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 497 writes, 1263 keys, 497 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                            Interval WAL: 497 writes, 236 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:34:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:29 compute-0 podman[158197]: time="2025-12-05T02:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
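Note: the two GET lines above are the libpod REST API being scraped through the podman socket (the podman_exporter below sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch of the same containers query from Python, assuming that socket path:

```python
# Query the libpod endpoint logged above over the podman unix socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__('localhost')   # host only feeds the Host header
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print([c['Names'] for c in containers])
```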
Dec 05 02:34:30 compute-0 ceph-mon[192914]: pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:30 compute-0 nova_compute[349548]: 2025-12-05 02:34:30.897 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
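Note: the exporter ERRORs above mean it found no *.ctl appctl control sockets for ovsdb-server or ovn-northd; on a compute node that runs only ovn-controller (ovn-northd lives with the control plane in this deployment) the ovn-northd misses are expected noise. A quick way to see which daemons actually expose control sockets here, using the standard OVS/OVN run directories from the exporter's volume mounts:

```python
# List the appctl control sockets present on this host; an empty list
# for a pattern explains the corresponding "no control socket files
# found" ERROR above.
import glob

for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
    print(pattern, '->', glob.glob(pattern))
```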
Dec 05 02:34:32 compute-0 ceph-mon[192914]: pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:33 compute-0 nova_compute[349548]: 2025-12-05 02:34:33.090 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:33 compute-0 podman[486270]: 2025-12-05 02:34:33.717501555 +0000 UTC m=+0.127072418 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:34:33 compute-0 podman[486271]: 2025-12-05 02:34:33.725050834 +0000 UTC m=+0.127787919 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 05 02:34:34 compute-0 ceph-mon[192914]: pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2729 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 481 writes, 1444 keys, 481 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                            Interval WAL: 481 writes, 225 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:34:35 compute-0 nova_compute[349548]: 2025-12-05 02:34:35.900 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:36 compute-0 ceph-mon[192914]: pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec 05 02:34:37 compute-0 podman[486312]: 2025-12-05 02:34:37.723082415 +0000 UTC m=+0.120921729 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 05 02:34:37 compute-0 podman[486313]: 2025-12-05 02:34:37.734946509 +0000 UTC m=+0.126132850 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec 05 02:34:37 compute-0 podman[486311]: 2025-12-05 02:34:37.761724126 +0000 UTC m=+0.174631977 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 05 02:34:38 compute-0 nova_compute[349548]: 2025-12-05 02:34:38.094 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:38 compute-0 ceph-mon[192914]: pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.331 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
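The ceilometer_agent_compute run above is one complete polling cycle: the manager warns that it has more pollsters than worker threads, registers each pollster with a single-threaded ThreadPoolExecutor, runs local_instances discovery once per pollster against a shared discovery cache, and then skips every meter because discovery returned no instances on this hypervisor. A minimal sketch of that pattern (hypothetical names, not ceilometer's actual code):

```python
# Sketch of the pattern visible in the cycle above: more tasks than worker
# threads, a shared discovery cache, and a "skip" when discovery finds no
# resources. Function and variable names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # In the logged cycle this returned [] -- no VMs on the hypervisor.
    return []

def poll(meter_name, discovery_cache):
    resources = discovery_cache.setdefault(
        "local_instances", discover_local_instances())
    if not resources:
        print(f"Skip pollster {meter_name}, no resources found this cycle")
        return
    # ... collect one sample per resource here ...

meters = ["disk.root.size", "memory.usage", "cpu", "network.incoming.bytes"]
discovery_cache = {}

# One worker for many pollsters, matching the "Processing pollsters ... with
# [1] threads" message; the tasks therefore execute sequentially.
with ThreadPoolExecutor(max_workers=1) as executor:
    for name in meters:
        executor.submit(poll, name, discovery_cache)
```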
Dec 05 02:34:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:40 compute-0 ceph-mon[192914]: pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:40 compute-0 nova_compute[349548]: 2025-12-05 02:34:40.906 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:42 compute-0 ceph-mon[192914]: pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:43 compute-0 nova_compute[349548]: 2025-12-05 02:34:43.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:44 compute-0 ceph-mon[192914]: pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:34:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:34:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:34:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:34:45 compute-0 nova_compute[349548]: 2025-12-05 02:34:45.910 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:46 compute-0 ceph-mon[192914]: pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:34:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
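The mon_command entries around this point show the OpenStack Ceph client (client.openstack at 192.168.122.10, typically Cinder or Nova checking pool capacity) sending JSON monitor commands: a cluster-wide "df" and an "osd pool get-quota" for the volumes pool, each echoed to the audit channel on dispatch. A hedged sketch of the same two calls through the python-rados binding (the conffile path is an assumption for illustration):

```python
# Sketch of the monitor commands audited above, issued via python-rados.
# The conffile path is an assumption; the client name matches the audit log.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    # {"prefix":"df","format":"json"} -- cluster-wide usage.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    usage = json.loads(out)

    # {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b"")
    quota = json.loads(out)
finally:
    cluster.shutdown()
```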
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:34:46 compute-0 sudo[486367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:46 compute-0 sudo[486367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:46 compute-0 sudo[486367]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:46 compute-0 sudo[486392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:34:46 compute-0 sudo[486392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:46 compute-0 sudo[486392]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:46 compute-0 sudo[486417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:46 compute-0 sudo[486417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:46 compute-0 sudo[486417]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:47 compute-0 sudo[486442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:34:47 compute-0 sudo[486442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:47 compute-0 sudo[486442]: pam_unix(sudo:session): session closed for user root
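The sudo triplet above is cephadm's remote-execution pattern: a connectivity probe (/bin/true), interpreter discovery (/bin/which python3), then the real call, here the host's cephadm binary running gather-facts with a 895-second timeout. gather-facts prints a JSON document of host facts to stdout; a sketch of invoking it the same way (the exact fact keys vary by cephadm version, so none are assumed here):

```python
# Sketch: run cephadm gather-facts as the sudo entries above show, and load
# the JSON facts document it prints. Paths are taken verbatim from the log.
import json
import subprocess

cephadm = ("/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
out = subprocess.run(
    ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
    capture_output=True, text=True, check=True,
).stdout
facts = json.loads(out)
```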
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e6cb9b20-6fda-4924-8121-b1988e6a06b7 does not exist
Dec 05 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fd67c415-b8ad-4554-9633-66385b65794d does not exist
Dec 05 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 479c2e56-3a1f-48d2-a6a2-d6ede7a7ae06 does not exist
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:34:47 compute-0 sudo[486497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:47 compute-0 sudo[486497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:47 compute-0 sudo[486497]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:48 compute-0 sudo[486522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:34:48 compute-0 sudo[486522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:48 compute-0 sudo[486522]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:48 compute-0 nova_compute[349548]: 2025-12-05 02:34:48.104 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:48 compute-0 sudo[486547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:48 compute-0 sudo[486547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:48 compute-0 sudo[486547]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:48 compute-0 sudo[486572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:34:48 compute-0 sudo[486572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
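This cephadm call is the OSD-creation step: the mgr's cephadm module pins the run to the OSD spec default_drive_group via CEPH_VOLUME_OSDSPEC_AFFINITY, ships a config/keyring JSON on stdin (--config-json -), and runs ceph-volume lvm batch against three pre-created logical volumes. A sketch reproducing the shape of that call from Python; the command-line flags are verbatim from the log line above, but the config payload is a placeholder since the real JSON is piped by cephadm and never appears in the journal:

```python
# Sketch of the cephadm/ceph-volume invocation logged above. The config dict
# is a placeholder; cephadm supplies the real minimal conf and OSD keyring.
import json
import subprocess

fsid = "cbd280d3-cbd8-528b-ace6-2b3a887cdcee"
lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
       "/dev/ceph_vg2/ceph_lv2"]
config = {"config": f"[global]\nfsid = {fsid}\n",
          "keyring": "<bootstrap-osd keyring placeholder>"}

cmd = [
    "sudo", "/bin/python3",
    f"/var/lib/ceph/{fsid}/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
    "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
    "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
    "--timeout", "895",
    "ceph-volume", "--fsid", fsid, "--config-json", "-", "--",
    "lvm", "batch", "--no-auto", *lvs, "--yes", "--no-systemd",
]
subprocess.run(cmd, input=json.dumps(config).encode(), check=True)
```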
Dec 05 02:34:48 compute-0 ceph-mon[192914]: pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:34:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:48 compute-0 podman[486635]: 2025-12-05 02:34:48.945288073 +0000 UTC m=+0.088766956 container create 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:34:48 compute-0 podman[486635]: 2025-12-05 02:34:48.905817438 +0000 UTC m=+0.049296361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:49 compute-0 systemd[1]: Started libpod-conmon-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope.
Dec 05 02:34:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.103959847 +0000 UTC m=+0.247438740 container init 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 02:34:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.11684108 +0000 UTC m=+0.260319933 container start 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.121768513 +0000 UTC m=+0.265247366 container attach 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:34:49 compute-0 relaxed_lamport[486671]: 167 167
Dec 05 02:34:49 compute-0 systemd[1]: libpod-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope: Deactivated successfully.
Dec 05 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.128573881 +0000 UTC m=+0.272052734 container died 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 02:34:49 compute-0 podman[486654]: 2025-12-05 02:34:49.145307466 +0000 UTC m=+0.104980966 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:34:49 compute-0 podman[486652]: 2025-12-05 02:34:49.145545253 +0000 UTC m=+0.121052783 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:34:49 compute-0 podman[486649]: 2025-12-05 02:34:49.151223788 +0000 UTC m=+0.135395079 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 05 02:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd2ae7c40916ed493cc381cfc4b00c10a1b3c91d38d32bd6f3fea70941d22d0-merged.mount: Deactivated successfully.
Dec 05 02:34:49 compute-0 podman[486653]: 2025-12-05 02:34:49.177516591 +0000 UTC m=+0.149319713 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 05 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.183083022 +0000 UTC m=+0.326561865 container remove 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:34:49 compute-0 systemd[1]: libpod-conmon-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope: Deactivated successfully.
Dec 05 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.426509634 +0000 UTC m=+0.095462670 container create 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.384733213 +0000 UTC m=+0.053686299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:49 compute-0 systemd[1]: Started libpod-conmon-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope.
Dec 05 02:34:49 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.607170736 +0000 UTC m=+0.276123822 container init 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.623203631 +0000 UTC m=+0.292156677 container start 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 05 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.630001848 +0000 UTC m=+0.298954964 container attach 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:50 compute-0 ceph-mon[192914]: pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:50 compute-0 zen_yonath[486778]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:34:50 compute-0 zen_yonath[486778]: --> relative data size: 1.0
Dec 05 02:34:50 compute-0 zen_yonath[486778]: --> All data devices are unavailable
Dec 05 02:34:50 compute-0 nova_compute[349548]: 2025-12-05 02:34:50.914 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:50 compute-0 systemd[1]: libpod-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Deactivated successfully.
Dec 05 02:34:50 compute-0 systemd[1]: libpod-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Consumed 1.235s CPU time.
Dec 05 02:34:50 compute-0 podman[486761]: 2025-12-05 02:34:50.921639622 +0000 UTC m=+1.590592638 container died 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 05 02:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8-merged.mount: Deactivated successfully.
Dec 05 02:34:51 compute-0 podman[486761]: 2025-12-05 02:34:51.015580167 +0000 UTC m=+1.684533183 container remove 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:34:51 compute-0 systemd[1]: libpod-conmon-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Deactivated successfully.
Dec 05 02:34:51 compute-0 sudo[486572]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:51 compute-0 sudo[486817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:51 compute-0 sudo[486817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:51 compute-0 sudo[486817]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:51 compute-0 sudo[486842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:34:51 compute-0 sudo[486842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:51 compute-0 sudo[486842]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:51 compute-0 sudo[486867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:51 compute-0 sudo[486867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:51 compute-0 sudo[486867]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:51 compute-0 sudo[486892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:34:51 compute-0 sudo[486892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.138214327 +0000 UTC m=+0.073195585 container create 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 05 02:34:52 compute-0 systemd[1]: Started libpod-conmon-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope.
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.108185495 +0000 UTC m=+0.043166803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.258186927 +0000 UTC m=+0.193168235 container init 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.277171018 +0000 UTC m=+0.212152276 container start 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.284083168 +0000 UTC m=+0.219064476 container attach 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:34:52 compute-0 happy_lamport[486971]: 167 167
Dec 05 02:34:52 compute-0 systemd[1]: libpod-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope: Deactivated successfully.
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.287009303 +0000 UTC m=+0.221990521 container died 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 05 02:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee03e4bedd022cf511fe9e6361b282b6fcca9b34cd4b20583148ea06aebdc03-merged.mount: Deactivated successfully.
Dec 05 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.360814305 +0000 UTC m=+0.295795563 container remove 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:34:52 compute-0 systemd[1]: libpod-conmon-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope: Deactivated successfully.
Dec 05 02:34:52 compute-0 ceph-mon[192914]: pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.659567522 +0000 UTC m=+0.085187832 container create e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.636716989 +0000 UTC m=+0.062337329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:52 compute-0 systemd[1]: Started libpod-conmon-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope.
Dec 05 02:34:52 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.815563828 +0000 UTC m=+0.241184198 container init e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 05 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.830907933 +0000 UTC m=+0.256528233 container start e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 05 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.836416003 +0000 UTC m=+0.262036363 container attach e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:34:53 compute-0 nova_compute[349548]: 2025-12-05 02:34:53.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]: {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     "0": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "devices": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "/dev/loop3"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             ],
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_name": "ceph_lv0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_size": "21470642176",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "name": "ceph_lv0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "tags": {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_name": "ceph",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.crush_device_class": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.encrypted": "0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_id": "0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.vdo": "0"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             },
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "vg_name": "ceph_vg0"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         }
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     ],
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     "1": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "devices": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "/dev/loop4"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             ],
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_name": "ceph_lv1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_size": "21470642176",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "name": "ceph_lv1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "tags": {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_name": "ceph",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.crush_device_class": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.encrypted": "0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_id": "1",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.vdo": "0"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             },
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "vg_name": "ceph_vg1"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         }
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     ],
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     "2": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "devices": [
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "/dev/loop5"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             ],
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_name": "ceph_lv2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_size": "21470642176",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "name": "ceph_lv2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "tags": {
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.cluster_name": "ceph",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.crush_device_class": "",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.encrypted": "0",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osd_id": "2",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:                 "ceph.vdo": "0"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             },
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "type": "block",
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:             "vg_name": "ceph_vg2"
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:         }
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]:     ]
Dec 05 02:34:53 compute-0 jolly_antonelli[487009]: }
Dec 05 02:34:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:53 compute-0 systemd[1]: libpod-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope: Deactivated successfully.
Dec 05 02:34:53 compute-0 podman[486994]: 2025-12-05 02:34:53.713836269 +0000 UTC m=+1.139456599 container died e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be-merged.mount: Deactivated successfully.
Dec 05 02:34:53 compute-0 podman[486994]: 2025-12-05 02:34:53.831635247 +0000 UTC m=+1.257255597 container remove e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:53 compute-0 systemd[1]: libpod-conmon-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope: Deactivated successfully.
Dec 05 02:34:53 compute-0 sudo[486892]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:54 compute-0 sudo[487030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:54 compute-0 sudo[487030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:54 compute-0 sudo[487030]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:54 compute-0 sudo[487055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:34:54 compute-0 sudo[487055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:54 compute-0 sudo[487055]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:54 compute-0 sudo[487080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:54 compute-0 sudo[487080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:54 compute-0 sudo[487080]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:54 compute-0 sudo[487105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:34:54 compute-0 ceph-mon[192914]: pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:54 compute-0 sudo[487105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.020620913 +0000 UTC m=+0.076569162 container create 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:54.984521885 +0000 UTC m=+0.040470134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:55 compute-0 systemd[1]: Started libpod-conmon-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope.
Dec 05 02:34:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.162349504 +0000 UTC m=+0.218297813 container init 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.179023708 +0000 UTC m=+0.234971967 container start 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:34:55 compute-0 intelligent_satoshi[487184]: 167 167
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.185686442 +0000 UTC m=+0.241634731 container attach 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 05 02:34:55 compute-0 systemd[1]: libpod-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope: Deactivated successfully.
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.191391027 +0000 UTC m=+0.247339276 container died 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 05 02:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9348fcf3ab7ddf1a24c9ad110a6e7d5236ecfdcd7cd2df40816bbc80aa5ddfda-merged.mount: Deactivated successfully.
Dec 05 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.268224936 +0000 UTC m=+0.324173185 container remove 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 05 02:34:55 compute-0 systemd[1]: libpod-conmon-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope: Deactivated successfully.
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.455914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095455987, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1227, "num_deletes": 251, "total_data_size": 1859781, "memory_usage": 1892824, "flush_reason": "Manual Compaction"}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095472637, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1841996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52081, "largest_seqno": 53307, "table_properties": {"data_size": 1836072, "index_size": 3255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12310, "raw_average_key_size": 19, "raw_value_size": 1824286, "raw_average_value_size": 2932, "num_data_blocks": 146, "num_entries": 622, "num_filter_entries": 622, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901969, "oldest_key_time": 1764901969, "file_creation_time": 1764902095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 16829 microseconds, and 7879 cpu microseconds.
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.472740) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1841996 bytes OK
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.472772) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475424) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475448) EVENT_LOG_v1 {"time_micros": 1764902095475440, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1854229, prev total WAL file size 1854229, number of live WAL files 2.
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.477109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1798KB)], [125(8852KB)]
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095477227, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10906755, "oldest_snapshot_seqno": -1}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6668 keys, 9144766 bytes, temperature: kUnknown
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095548875, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9144766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9102181, "index_size": 24808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174903, "raw_average_key_size": 26, "raw_value_size": 8983548, "raw_average_value_size": 1347, "num_data_blocks": 982, "num_entries": 6668, "num_filter_entries": 6668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764902095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.550585648 +0000 UTC m=+0.098989622 container create 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.549336) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9144766 bytes
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.552688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.9 rd, 127.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.0) OK, records in: 7182, records dropped: 514 output_compression: NoCompression
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.552722) EVENT_LOG_v1 {"time_micros": 1764902095552706, "job": 76, "event": "compaction_finished", "compaction_time_micros": 71811, "compaction_time_cpu_micros": 42013, "output_level": 6, "num_output_files": 1, "total_output_size": 9144766, "num_input_records": 7182, "num_output_records": 6668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095553971, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095557530, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.476805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.495175721 +0000 UTC m=+0.043579715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:34:55 compute-0 systemd[1]: Started libpod-conmon-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope.
Dec 05 02:34:55 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.725005819 +0000 UTC m=+0.273409813 container init 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 05 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.743451784 +0000 UTC m=+0.291855768 container start 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.750495758 +0000 UTC m=+0.298899802 container attach 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:34:55 compute-0 nova_compute[349548]: 2025-12-05 02:34:55.920 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.238 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:34:56 compute-0 ceph-mon[192914]: pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]: {
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_id": 0,
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "type": "bluestore"
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     },
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_id": 1,
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "type": "bluestore"
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     },
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_id": 2,
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:         "type": "bluestore"
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]:     }
Dec 05 02:34:56 compute-0 hardcore_bhabha[487225]: }
Dec 05 02:34:56 compute-0 systemd[1]: libpod-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Deactivated successfully.
Dec 05 02:34:56 compute-0 systemd[1]: libpod-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Consumed 1.256s CPU time.
Dec 05 02:34:56 compute-0 podman[487209]: 2025-12-05 02:34:56.995682815 +0000 UTC m=+1.544086799 container died 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058-merged.mount: Deactivated successfully.
Dec 05 02:34:57 compute-0 podman[487209]: 2025-12-05 02:34:57.114347527 +0000 UTC m=+1.662751511 container remove 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:34:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:57 compute-0 systemd[1]: libpod-conmon-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Deactivated successfully.
Dec 05 02:34:57 compute-0 sudo[487105]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:34:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:34:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4e48526e-7e77-45bf-8a9c-b934abd29d1f does not exist
Dec 05 02:34:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 96d2e696-05b5-482c-8829-5934c662c59e does not exist
Dec 05 02:34:57 compute-0 sudo[487273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:34:57 compute-0 sudo[487273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:57 compute-0 sudo[487273]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:57 compute-0 sudo[487298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:34:57 compute-0 sudo[487298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:34:57 compute-0 sudo[487298]: pam_unix(sudo:session): session closed for user root
Dec 05 02:34:58 compute-0 nova_compute[349548]: 2025-12-05 02:34:58.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:34:58 compute-0 ceph-mon[192914]: pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:34:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:34:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:34:59 compute-0 podman[158197]: time="2025-12-05T02:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8223 "" "Go-http-client/1.1"
Dec 05 02:35:00 compute-0 sshd-session[487253]: Connection reset by authenticating user root 91.202.233.33 port 26992 [preauth]
Dec 05 02:35:00 compute-0 ceph-mon[192914]: pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:00 compute-0 nova_compute[349548]: 2025-12-05 02:35:00.926 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:35:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.146 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.147 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.149 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.150 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:35:02 compute-0 ceph-mon[192914]: pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:02 compute-0 sshd-session[487324]: Connection reset by authenticating user root 91.202.233.33 port 47440 [preauth]
Dec 05 02:35:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:35:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773503289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.586 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.130 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.132 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3900MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:35:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2773503289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.292 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.293 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.324 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:35:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580157474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.825 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.838 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.977 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.982 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.983 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:35:04 compute-0 ceph-mon[192914]: pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1580157474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:35:04 compute-0 sshd-session[487346]: Connection reset by authenticating user root 91.202.233.33 port 47446 [preauth]
Dec 05 02:35:04 compute-0 podman[487374]: 2025-12-05 02:35:04.724057242 +0000 UTC m=+0.119712864 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:35:04 compute-0 podman[487372]: 2025-12-05 02:35:04.742989241 +0000 UTC m=+0.146491401 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 05 02:35:04 compute-0 nova_compute[349548]: 2025-12-05 02:35:04.983 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:05 compute-0 nova_compute[349548]: 2025-12-05 02:35:05.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:06 compute-0 nova_compute[349548]: 2025-12-05 02:35:06.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:06 compute-0 nova_compute[349548]: 2025-12-05 02:35:06.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:35:06 compute-0 sshd-session[487373]: Invalid user user from 91.202.233.33 port 47470
Dec 05 02:35:06 compute-0 ceph-mon[192914]: pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:06 compute-0 sshd-session[487373]: Connection reset by invalid user user 91.202.233.33 port 47470 [preauth]
Dec 05 02:35:07 compute-0 nova_compute[349548]: 2025-12-05 02:35:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:07 compute-0 nova_compute[349548]: 2025-12-05 02:35:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:08 compute-0 nova_compute[349548]: 2025-12-05 02:35:08.115 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:08 compute-0 ceph-mon[192914]: pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:08 compute-0 podman[487415]: 2025-12-05 02:35:08.446635273 +0000 UTC m=+0.127429208 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 05 02:35:08 compute-0 podman[487416]: 2025-12-05 02:35:08.479561568 +0000 UTC m=+0.150578160 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container)
Dec 05 02:35:08 compute-0 podman[487417]: 2025-12-05 02:35:08.499103945 +0000 UTC m=+0.167481170 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 02:35:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:09 compute-0 sshd-session[487413]: Connection reset by authenticating user root 91.202.233.33 port 47486 [preauth]
Dec 05 02:35:10 compute-0 ceph-mon[192914]: pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:10 compute-0 nova_compute[349548]: 2025-12-05 02:35:10.937 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:12 compute-0 nova_compute[349548]: 2025-12-05 02:35:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:35:12 compute-0 ceph-mon[192914]: pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:13 compute-0 nova_compute[349548]: 2025-12-05 02:35:13.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:14 compute-0 ceph-mon[192914]: pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:15 compute-0 nova_compute[349548]: 2025-12-05 02:35:15.943 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:16 compute-0 ceph-mon[192914]: pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:35:16
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:18 compute-0 nova_compute[349548]: 2025-12-05 02:35:18.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:18 compute-0 ceph-mon[192914]: pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:35:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:19 compute-0 podman[487478]: 2025-12-05 02:35:19.72698048 +0000 UTC m=+0.106636344 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 05 02:35:19 compute-0 podman[487476]: 2025-12-05 02:35:19.738387721 +0000 UTC m=+0.132314059 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:35:19 compute-0 podman[487475]: 2025-12-05 02:35:19.748868016 +0000 UTC m=+0.145324138 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:35:19 compute-0 podman[487477]: 2025-12-05 02:35:19.77661039 +0000 UTC m=+0.162767993 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 05 02:35:20 compute-0 ceph-mon[192914]: pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:20 compute-0 nova_compute[349548]: 2025-12-05 02:35:20.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:22 compute-0 ceph-mon[192914]: pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:23 compute-0 nova_compute[349548]: 2025-12-05 02:35:23.126 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:24 compute-0 ceph-mon[192914]: pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:25 compute-0 nova_compute[349548]: 2025-12-05 02:35:25.953 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:26 compute-0 ceph-mon[192914]: pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:35:28 compute-0 nova_compute[349548]: 2025-12-05 02:35:28.127 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:28 compute-0 ceph-mon[192914]: pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:29 compute-0 podman[158197]: time="2025-12-05T02:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8215 "" "Go-http-client/1.1"
Dec 05 02:35:30 compute-0 ceph-mon[192914]: pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:30 compute-0 nova_compute[349548]: 2025-12-05 02:35:30.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:35:32 compute-0 ceph-mon[192914]: pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:33 compute-0 nova_compute[349548]: 2025-12-05 02:35:33.130 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:34 compute-0 ceph-mon[192914]: pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:35 compute-0 podman[487557]: 2025-12-05 02:35:35.712612056 +0000 UTC m=+0.110006302 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 05 02:35:35 compute-0 podman[487556]: 2025-12-05 02:35:35.727948081 +0000 UTC m=+0.123841774 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 05 02:35:35 compute-0 nova_compute[349548]: 2025-12-05 02:35:35.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:36 compute-0 ceph-mon[192914]: pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:38 compute-0 nova_compute[349548]: 2025-12-05 02:35:38.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:38 compute-0 ceph-mon[192914]: pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:38 compute-0 podman[487599]: 2025-12-05 02:35:38.719998848 +0000 UTC m=+0.114783171 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, maintainer=Red Hat, Inc., config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 02:35:38 compute-0 podman[487598]: 2025-12-05 02:35:38.738177745 +0000 UTC m=+0.140126126 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 05 02:35:38 compute-0 podman[487600]: 2025-12-05 02:35:38.759469013 +0000 UTC m=+0.150962091 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 05 02:35:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:40 compute-0 ceph-mon[192914]: pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:40 compute-0 nova_compute[349548]: 2025-12-05 02:35:40.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:42 compute-0 ceph-mon[192914]: pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:43 compute-0 nova_compute[349548]: 2025-12-05 02:35:43.136 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:44 compute-0 ceph-mon[192914]: pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:35:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:35:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:35:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:35:45 compute-0 ceph-mon[192914]: pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:35:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:35:45 compute-0 nova_compute[349548]: 2025-12-05 02:35:45.975 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:35:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:48 compute-0 nova_compute[349548]: 2025-12-05 02:35:48.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:48 compute-0 ceph-mon[192914]: pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:50 compute-0 ceph-mon[192914]: pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:50 compute-0 podman[487652]: 2025-12-05 02:35:50.715075832 +0000 UTC m=+0.117485230 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 05 02:35:50 compute-0 podman[487655]: 2025-12-05 02:35:50.728830881 +0000 UTC m=+0.112343460 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=)
Dec 05 02:35:50 compute-0 podman[487653]: 2025-12-05 02:35:50.745760202 +0000 UTC m=+0.143384031 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:35:50 compute-0 podman[487654]: 2025-12-05 02:35:50.797824753 +0000 UTC m=+0.190517259 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 05 02:35:50 compute-0 nova_compute[349548]: 2025-12-05 02:35:50.978 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:52 compute-0 ceph-mon[192914]: pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:53 compute-0 nova_compute[349548]: 2025-12-05 02:35:53.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:54 compute-0 ceph-mon[192914]: pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:55 compute-0 nova_compute[349548]: 2025-12-05 02:35:55.986 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:35:56 compute-0 ceph-mon[192914]: pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:57 compute-0 sudo[487737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:35:57 compute-0 sudo[487737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:57 compute-0 sudo[487737]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:57 compute-0 sudo[487762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:35:57 compute-0 sudo[487762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:57 compute-0 sudo[487762]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:57 compute-0 sudo[487787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:35:57 compute-0 sudo[487787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:57 compute-0 sudo[487787]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:58 compute-0 sudo[487812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:35:58 compute-0 sudo[487812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:58 compute-0 nova_compute[349548]: 2025-12-05 02:35:58.145 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:35:58 compute-0 ceph-mon[192914]: pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:35:58 compute-0 sudo[487812]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81e514d0-0cdb-4f71-b2d8-55cc621c9633 does not exist
Dec 05 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3f746431-bf80-48e7-adaa-0139548eac26 does not exist
Dec 05 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0abcaef5-31c2-4cf7-abb2-d9da583269fc does not exist
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:35:58 compute-0 sudo[487867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:35:58 compute-0 sudo[487867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:59 compute-0 sudo[487867]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:59 compute-0 sudo[487892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:35:59 compute-0 sudo[487892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:59 compute-0 sudo[487892]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:35:59 compute-0 sudo[487917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:35:59 compute-0 sudo[487917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:59 compute-0 sudo[487917]: pam_unix(sudo:session): session closed for user root
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:35:59 compute-0 sudo[487942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:35:59 compute-0 sudo[487942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:35:59 compute-0 podman[158197]: time="2025-12-05T02:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8219 "" "Go-http-client/1.1"
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.051284924 +0000 UTC m=+0.091415833 container create b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.004584769 +0000 UTC m=+0.044715698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:00 compute-0 systemd[1]: Started libpod-conmon-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope.
Dec 05 02:36:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.209753122 +0000 UTC m=+0.249884051 container init b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.226368934 +0000 UTC m=+0.266499843 container start b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.232500422 +0000 UTC m=+0.272631381 container attach b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:36:00 compute-0 epic_ride[488019]: 167 167
Dec 05 02:36:00 compute-0 systemd[1]: libpod-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope: Deactivated successfully.
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.241558094 +0000 UTC m=+0.281689043 container died b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 02:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be2398cf58b4f5e7378904737dfad404409b358285cd09e9ced1f05af90f0d9-merged.mount: Deactivated successfully.
Dec 05 02:36:00 compute-0 ceph-mon[192914]: pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.330386211 +0000 UTC m=+0.370517130 container remove b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 05 02:36:00 compute-0 systemd[1]: libpod-conmon-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope: Deactivated successfully.
Dec 05 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.616212654 +0000 UTC m=+0.084122052 container create ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.584669319 +0000 UTC m=+0.052578747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:00 compute-0 systemd[1]: Started libpod-conmon-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope.
Dec 05 02:36:00 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
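The kernel's "supports timestamps until 2038 (0x7fffffff)" notices mark XFS mounts whose on-disk timestamps are 32-bit second counters (likely filesystems formatted without the XFS bigtime feature); 0x7fffffff is the largest signed 32-bit epoch second. A minimal Python sketch decoding that cutoff:

    # Decode the 0x7fffffff cutoff the kernel reports above.
    from datetime import datetime, timezone

    cutoff = 0x7FFFFFFF  # largest signed 32-bit epoch second
    print(datetime.fromtimestamp(cutoff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00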
Dec 05 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.826861676 +0000 UTC m=+0.294771124 container init ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.847180435 +0000 UTC m=+0.315089823 container start ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 05 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.85423577 +0000 UTC m=+0.322145208 container attach ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 05 02:36:00 compute-0 nova_compute[349548]: 2025-12-05 02:36:00.989 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.093 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
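The "Acquiring lock"/"acquired"/"released" triplets around "compute_resources" are emitted by oslo.concurrency's lockutils, which nova uses to serialize resource-tracker work. A minimal sketch of the same pattern, with an illustrative function name rather than nova's actual code:

    # Sketch of the lock pattern logged above (pip install oslo.concurrency).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the named in-process lock held; the library itself
        # logs the acquire / waited / held lines seen in this journal.
        pass

    clean_compute_node_cache()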
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.125 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.126 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:36:02 compute-0 elated_poitras[488057]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:36:02 compute-0 elated_poitras[488057]: --> relative data size: 1.0
Dec 05 02:36:02 compute-0 elated_poitras[488057]: --> All data devices are unavailable
Dec 05 02:36:02 compute-0 systemd[1]: libpod-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Deactivated successfully.
Dec 05 02:36:02 compute-0 podman[488041]: 2025-12-05 02:36:02.192377373 +0000 UTC m=+1.660286751 container died ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 05 02:36:02 compute-0 systemd[1]: libpod-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Consumed 1.278s CPU time.
Dec 05 02:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324-merged.mount: Deactivated successfully.
Dec 05 02:36:02 compute-0 podman[488041]: 2025-12-05 02:36:02.278994166 +0000 UTC m=+1.746903524 container remove ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:36:02 compute-0 systemd[1]: libpod-conmon-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Deactivated successfully.
Dec 05 02:36:02 compute-0 sudo[487942]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:02 compute-0 ceph-mon[192914]: pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:02 compute-0 sudo[488107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:36:02 compute-0 sudo[488107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:02 compute-0 sudo[488107]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:02 compute-0 sudo[488141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:36:02 compute-0 sudo[488141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:02 compute-0 sudo[488141]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:36:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428556063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:36:02 compute-0 sudo[488166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:36:02 compute-0 sudo[488166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:02 compute-0 sudo[488166]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.666 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
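The resource audit shells out to `ceph df --format=json` (started above at 02:36:02.126, returning 0 here after 0.539s) to size the RBD-backed disk pool. A sketch of pulling the cluster totals from the same command; the "stats" field names match common `ceph df` JSON output but should be verified against the Ceph release in use:

    # Run the command nova logs above and print cluster-wide totals.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])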
Dec 05 02:36:02 compute-0 sudo[488193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:36:02 compute-0 sudo[488193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
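This sudo line is cephadm collecting OSD inventory: a root re-exec of its bundled copy that wraps `ceph-volume lvm list --format json` in a throwaway ceph container (the podman create/start/died churn around it). The same call sketched from Python, assuming a cephadm binary on PATH instead of the versioned copy under /var/lib/ceph; image digest, fsid, and argument order are copied from the log:

    # Equivalent of the cephadm invocation logged above.
    import subprocess

    cmd = [
        "sudo", "cephadm",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
        "--", "lvm", "list", "--format", "json",
    ]
    print(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)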
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.147 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.217 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3936MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:36:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3428556063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.368 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.404545601 +0000 UTC m=+0.094544374 container create 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.365714284 +0000 UTC m=+0.055713067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:03 compute-0 systemd[1]: Started libpod-conmon-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope.
Dec 05 02:36:03 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.552841393 +0000 UTC m=+0.242840176 container init 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.573108471 +0000 UTC m=+0.263107244 container start 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.580101044 +0000 UTC m=+0.270099807 container attach 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:36:03 compute-0 vibrant_albattani[488270]: 167 167
Dec 05 02:36:03 compute-0 systemd[1]: libpod-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope: Deactivated successfully.
Dec 05 02:36:03 compute-0 conmon[488270]: conmon 99f01bbf25d6c33d7ec9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope/container/memory.events
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.587680274 +0000 UTC m=+0.277679047 container died 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff03be082ea669cf36fb4599f5b0519c6304c2663173a44a207232f4d6cc4bbc-merged.mount: Deactivated successfully.
Dec 05 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.667737827 +0000 UTC m=+0.357736570 container remove 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 05 02:36:03 compute-0 systemd[1]: libpod-conmon-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope: Deactivated successfully.
Dec 05 02:36:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:36:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583223027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.895 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.906 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.925 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
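The inventory nova reports to placement turns into schedulable capacity as (total - reserved) * allocation_ratio. Worked out for the values logged above:

    # Effective capacity from the inventory dict in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2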
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.928 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.929 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.930 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.931 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 05 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.959 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 05 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.011640894 +0000 UTC m=+0.100667431 container create 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:03.974862807 +0000 UTC m=+0.063889404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:04 compute-0 systemd[1]: Started libpod-conmon-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope.
Dec 05 02:36:04 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.200608947 +0000 UTC m=+0.289635464 container init 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.213088869 +0000 UTC m=+0.302115356 container start 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 05 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.217272081 +0000 UTC m=+0.306298618 container attach 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:36:04 compute-0 ceph-mon[192914]: pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3583223027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]: {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     "0": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "devices": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "/dev/loop3"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             ],
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_name": "ceph_lv0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_size": "21470642176",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "name": "ceph_lv0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "tags": {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_name": "ceph",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.crush_device_class": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.encrypted": "0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_id": "0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.vdo": "0"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             },
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "vg_name": "ceph_vg0"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         }
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     ],
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     "1": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "devices": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "/dev/loop4"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             ],
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_name": "ceph_lv1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_size": "21470642176",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "name": "ceph_lv1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "tags": {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_name": "ceph",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.crush_device_class": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.encrypted": "0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_id": "1",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.vdo": "0"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             },
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "vg_name": "ceph_vg1"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         }
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     ],
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     "2": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "devices": [
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "/dev/loop5"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             ],
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_name": "ceph_lv2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_size": "21470642176",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "name": "ceph_lv2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "tags": {
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.cluster_name": "ceph",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.crush_device_class": "",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.encrypted": "0",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osd_id": "2",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:                 "ceph.vdo": "0"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             },
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "type": "block",
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:             "vg_name": "ceph_vg2"
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:         }
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]:     ]
Dec 05 02:36:05 compute-0 reverent_bhaskara[488328]: }
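The JSON block from reverent_bhaskara is the `ceph-volume lvm list --format json` result: a map of OSD id to the logical volumes backing it. A minimal sketch condensing it to one line per OSD, assuming the output was captured to a hypothetical lvm_list.json:

    # Condense the ceph-volume lvm list JSON above to one line per OSD.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")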
Dec 05 02:36:05 compute-0 systemd[1]: libpod-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope: Deactivated successfully.
Dec 05 02:36:05 compute-0 podman[488313]: 2025-12-05 02:36:05.056014505 +0000 UTC m=+1.145041052 container died 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 05 02:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56-merged.mount: Deactivated successfully.
Dec 05 02:36:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:05 compute-0 podman[488313]: 2025-12-05 02:36:05.175186292 +0000 UTC m=+1.264212819 container remove 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:36:05 compute-0 systemd[1]: libpod-conmon-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope: Deactivated successfully.
Dec 05 02:36:05 compute-0 sudo[488193]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:05 compute-0 sudo[488350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:36:05 compute-0 sudo[488350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:05 compute-0 sudo[488350]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:05 compute-0 sudo[488375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:36:05 compute-0 sudo[488375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:05 compute-0 sudo[488375]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:05 compute-0 sudo[488400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:36:05 compute-0 sudo[488400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:05 compute-0 sudo[488400]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:05 compute-0 sudo[488425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:36:05 compute-0 sudo[488425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:05 compute-0 podman[488450]: 2025-12-05 02:36:05.9954431 +0000 UTC m=+0.137010586 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:36:05 compute-0 nova_compute[349548]: 2025-12-05 02:36:05.995 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:06 compute-0 podman[488449]: 2025-12-05 02:36:06.004530613 +0000 UTC m=+0.148929071 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:36:06 compute-0 ceph-mon[192914]: pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.407634939 +0000 UTC m=+0.079118827 container create 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.382113828 +0000 UTC m=+0.053597766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:06 compute-0 systemd[1]: Started libpod-conmon-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope.
Dec 05 02:36:06 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.54347585 +0000 UTC m=+0.214959758 container init 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.55593153 +0000 UTC m=+0.227415418 container start 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.56042061 +0000 UTC m=+0.231904498 container attach 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:36:06 compute-0 keen_taussig[488540]: 167 167
Dec 05 02:36:06 compute-0 systemd[1]: libpod-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope: Deactivated successfully.
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.566531388 +0000 UTC m=+0.238015286 container died 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8418a7655ce8b4574589b2da50fd315ad95a4608937ff01e6da2b2cede4228b4-merged.mount: Deactivated successfully.
Dec 05 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.6248582 +0000 UTC m=+0.296342098 container remove 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:36:06 compute-0 systemd[1]: libpod-conmon-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope: Deactivated successfully.
Dec 05 02:36:06 compute-0 podman[488562]: 2025-12-05 02:36:06.894669458 +0000 UTC m=+0.089860328 container create 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.933 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.934 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.935 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:36:06 compute-0 podman[488562]: 2025-12-05 02:36:06.862144414 +0000 UTC m=+0.057335364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:36:06 compute-0 systemd[1]: Started libpod-conmon-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope.
Dec 05 02:36:07 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:36:07 compute-0 nova_compute[349548]: 2025-12-05 02:36:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.081131487 +0000 UTC m=+0.276322387 container init 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.112320752 +0000 UTC m=+0.307511642 container start 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.119073728 +0000 UTC m=+0.314264768 container attach 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 05 02:36:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 05 02:36:08 compute-0 nova_compute[349548]: 2025-12-05 02:36:08.151 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:08 compute-0 relaxed_germain[488578]: {
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_id": 0,
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "type": "bluestore"
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     },
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_id": 1,
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "type": "bluestore"
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     },
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_id": 2,
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:         "type": "bluestore"
Dec 05 02:36:08 compute-0 relaxed_germain[488578]:     }
Dec 05 02:36:08 compute-0 relaxed_germain[488578]: }
Dec 05 02:36:08 compute-0 systemd[1]: libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Deactivated successfully.
Dec 05 02:36:08 compute-0 systemd[1]: libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Consumed 1.178s CPU time.
Dec 05 02:36:08 compute-0 conmon[488578]: conmon 19f596c617150c259455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope/container/memory.events
Dec 05 02:36:08 compute-0 podman[488562]: 2025-12-05 02:36:08.30483916 +0000 UTC m=+1.500030020 container died 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 05 02:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389-merged.mount: Deactivated successfully.
Dec 05 02:36:08 compute-0 ceph-mon[192914]: pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 05 02:36:08 compute-0 podman[488562]: 2025-12-05 02:36:08.409761294 +0000 UTC m=+1.604952164 container remove 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 05 02:36:08 compute-0 systemd[1]: libpod-conmon-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Deactivated successfully.
Dec 05 02:36:08 compute-0 sudo[488425]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:36:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a46209fb-bdae-41d2-8faf-6f2f0361daf2 does not exist
Dec 05 02:36:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 99404964-396b-4a3d-a0dc-b97ced1880d9 does not exist
Dec 05 02:36:08 compute-0 sudo[488622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:36:08 compute-0 sudo[488622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:08 compute-0 sudo[488622]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:08 compute-0 sudo[488647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:36:08 compute-0 sudo[488647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:36:08 compute-0 sudo[488647]: pam_unix(sudo:session): session closed for user root
Dec 05 02:36:08 compute-0 podman[488673]: 2025-12-05 02:36:08.964684504 +0000 UTC m=+0.121407453 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:36:08 compute-0 podman[488671]: 2025-12-05 02:36:08.975096026 +0000 UTC m=+0.141071484 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 05 02:36:08 compute-0 podman[488672]: 2025-12-05 02:36:08.990552184 +0000 UTC m=+0.150398354 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, config_id=edpm, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 05 02:36:09 compute-0 nova_compute[349548]: 2025-12-05 02:36:09.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:09 compute-0 nova_compute[349548]: 2025-12-05 02:36:09.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 05 02:36:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:36:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:36:10 compute-0 ceph-mon[192914]: pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 05 02:36:11 compute-0 nova_compute[349548]: 2025-12-05 02:36:11.001 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:11 compute-0 nova_compute[349548]: 2025-12-05 02:36:11.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec 05 02:36:12 compute-0 ceph-mon[192914]: pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec 05 02:36:13 compute-0 nova_compute[349548]: 2025-12-05 02:36:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:13 compute-0 nova_compute[349548]: 2025-12-05 02:36:13.157 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:14 compute-0 ceph-mon[192914]: pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.007 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.107 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:36:16
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.log', 'backups', 'volumes']
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec 05 02:36:16 compute-0 ceph-mon[192914]: pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:17 compute-0 ceph-mon[192914]: pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 05 02:36:18 compute-0 nova_compute[349548]: 2025-12-05 02:36:18.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:36:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:19 compute-0 nova_compute[349548]: 2025-12-05 02:36:19.079 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:19 compute-0 nova_compute[349548]: 2025-12-05 02:36:19.079 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 05 02:36:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec 05 02:36:20 compute-0 ceph-mon[192914]: pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec 05 02:36:21 compute-0 nova_compute[349548]: 2025-12-05 02:36:21.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec 05 02:36:21 compute-0 podman[488729]: 2025-12-05 02:36:21.720858197 +0000 UTC m=+0.116291325 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 05 02:36:21 compute-0 podman[488731]: 2025-12-05 02:36:21.73681925 +0000 UTC m=+0.116090489 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350)
Dec 05 02:36:21 compute-0 podman[488728]: 2025-12-05 02:36:21.745721928 +0000 UTC m=+0.144467192 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 05 02:36:21 compute-0 podman[488730]: 2025-12-05 02:36:21.777712976 +0000 UTC m=+0.162573327 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:36:22 compute-0 ceph-mon[192914]: pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec 05 02:36:23 compute-0 nova_compute[349548]: 2025-12-05 02:36:23.162 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec 05 02:36:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:24 compute-0 ceph-mon[192914]: pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec 05 02:36:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:26 compute-0 nova_compute[349548]: 2025-12-05 02:36:26.015 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:26 compute-0 ceph-mon[192914]: pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:36:28 compute-0 nova_compute[349548]: 2025-12-05 02:36:28.165 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:28 compute-0 ceph-mon[192914]: pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:29 compute-0 podman[158197]: time="2025-12-05T02:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8219 "" "Go-http-client/1.1"
Dec 05 02:36:30 compute-0 nova_compute[349548]: 2025-12-05 02:36:30.128 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:36:30 compute-0 ceph-mon[192914]: pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:31 compute-0 nova_compute[349548]: 2025-12-05 02:36:31.020 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:36:32 compute-0 ceph-mon[192914]: pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:33 compute-0 nova_compute[349548]: 2025-12-05 02:36:33.168 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:34 compute-0 ceph-mon[192914]: pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:36 compute-0 nova_compute[349548]: 2025-12-05 02:36:36.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:36 compute-0 ceph-mon[192914]: pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:36 compute-0 podman[488814]: 2025-12-05 02:36:36.70450585 +0000 UTC m=+0.112700591 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:36:36 compute-0 podman[488813]: 2025-12-05 02:36:36.740433832 +0000 UTC m=+0.141443565 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 05 02:36:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:38 compute-0 nova_compute[349548]: 2025-12-05 02:36:38.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:36:38 compute-0 ceph-mon[192914]: pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:39 compute-0 podman[488856]: 2025-12-05 02:36:39.689200703 +0000 UTC m=+0.097665045 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Dec 05 02:36:39 compute-0 podman[488857]: 2025-12-05 02:36:39.699362857 +0000 UTC m=+0.098310022 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4)
Dec 05 02:36:39 compute-0 podman[488858]: 2025-12-05 02:36:39.710637665 +0000 UTC m=+0.105611015 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:36:40 compute-0 ceph-mon[192914]: pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:41 compute-0 nova_compute[349548]: 2025-12-05 02:36:41.028 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:42 compute-0 ceph-mon[192914]: pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:43 compute-0 nova_compute[349548]: 2025-12-05 02:36:43.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:44 compute-0 ceph-mon[192914]: pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:36:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:36:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:36:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:36:46 compute-0 nova_compute[349548]: 2025-12-05 02:36:46.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:46 compute-0 ceph-mon[192914]: pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:36:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:36:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:48 compute-0 nova_compute[349548]: 2025-12-05 02:36:48.176 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:48 compute-0 ceph-mon[192914]: pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:50 compute-0 ceph-mon[192914]: pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:51 compute-0 nova_compute[349548]: 2025-12-05 02:36:51.038 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:52 compute-0 ceph-mon[192914]: pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:52 compute-0 podman[488912]: 2025-12-05 02:36:52.740496431 +0000 UTC m=+0.144145693 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 05 02:36:52 compute-0 podman[488915]: 2025-12-05 02:36:52.743352144 +0000 UTC m=+0.120732504 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 05 02:36:52 compute-0 podman[488913]: 2025-12-05 02:36:52.754034914 +0000 UTC m=+0.147851501 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 05 02:36:52 compute-0 podman[488914]: 2025-12-05 02:36:52.786397733 +0000 UTC m=+0.173738042 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 05 02:36:53 compute-0 nova_compute[349548]: 2025-12-05 02:36:53.179 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:54 compute-0 ceph-mon[192914]: pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:56 compute-0 nova_compute[349548]: 2025-12-05 02:36:56.041 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:36:56 compute-0 ceph-mon[192914]: pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:58 compute-0 nova_compute[349548]: 2025-12-05 02:36:58.183 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:36:58 compute-0 ceph-mon[192914]: pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:36:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:36:59 compute-0 podman[158197]: time="2025-12-05T02:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8223 "" "Go-http-client/1.1"
Dec 05 02:37:00 compute-0 nova_compute[349548]: 2025-12-05 02:37:00.082 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:00 compute-0 ceph-mon[192914]: pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:01 compute-0 nova_compute[349548]: 2025-12-05 02:37:01.045 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:37:02 compute-0 ceph-mon[192914]: pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.117 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.157 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.158 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.158 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.159 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.159 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.188 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:37:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736455509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.658 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:37:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.116 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3944MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.118 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.192 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.193 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.237 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:37:04 compute-0 ceph-mon[192914]: pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1736455509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:37:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:37:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63555516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.710 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.725 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.797 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.801 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.802 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:37:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:05 compute-0 podman[158197]: time="2025-12-05T02:37:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:37:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/63555516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:37:05 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43259 "" "Go-http-client/1.1"
Dec 05 02:37:06 compute-0 nova_compute[349548]: 2025-12-05 02:37:06.049 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:06 compute-0 ceph-mon[192914]: pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:06 compute-0 nova_compute[349548]: 2025-12-05 02:37:06.752 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:07 compute-0 ceph-mon[192914]: pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:07 compute-0 podman[489038]: 2025-12-05 02:37:07.714849408 +0000 UTC m=+0.116353507 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:37:07 compute-0 podman[489039]: 2025-12-05 02:37:07.739631547 +0000 UTC m=+0.134532645 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.187 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:08 compute-0 sudo[489080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:08 compute-0 sudo[489080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:08 compute-0 sudo[489080]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:09 compute-0 nova_compute[349548]: 2025-12-05 02:37:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:09 compute-0 sudo[489105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:37:09 compute-0 sudo[489105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:09 compute-0 sudo[489105]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:09 compute-0 sudo[489130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:09 compute-0 sudo[489130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:09 compute-0 sudo[489130]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:09 compute-0 sudo[489155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:37:09 compute-0 sudo[489155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:10 compute-0 sudo[489155]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:37:10 compute-0 nova_compute[349548]: 2025-12-05 02:37:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e63a8bb8-810e-44dd-af9c-05813104530b does not exist
Dec 05 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ae0cb3df-9921-444d-8929-654a21dc3c14 does not exist
Dec 05 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 03c98e2e-0c37-4f94-811c-9c26ca6cde86 does not exist
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:37:10 compute-0 sudo[489212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:10 compute-0 sudo[489212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:10 compute-0 sudo[489212]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:10 compute-0 ceph-mon[192914]: pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:37:10 compute-0 sudo[489255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:37:10 compute-0 sudo[489255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:10 compute-0 sudo[489255]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:10 compute-0 podman[489238]: 2025-12-05 02:37:10.416224421 +0000 UTC m=+0.133529305 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 05 02:37:10 compute-0 podman[489237]: 2025-12-05 02:37:10.420430233 +0000 UTC m=+0.147197982 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, version=9.4, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec 05 02:37:10 compute-0 podman[489236]: 2025-12-05 02:37:10.422594665 +0000 UTC m=+0.151629350 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 05 02:37:10 compute-0 sudo[489316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:10 compute-0 sudo[489316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:10 compute-0 sudo[489316]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:10 compute-0 sudo[489342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:37:10 compute-0 sudo[489342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.052 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.175085806 +0000 UTC m=+0.093889805 container create 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:37:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.140583735 +0000 UTC m=+0.059387874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:11 compute-0 systemd[1]: Started libpod-conmon-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope.
Dec 05 02:37:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.323863812 +0000 UTC m=+0.242667821 container init 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.34274215 +0000 UTC m=+0.261546139 container start 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.349439804 +0000 UTC m=+0.268243813 container attach 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:37:11 compute-0 eloquent_perlman[489421]: 167 167
Dec 05 02:37:11 compute-0 systemd[1]: libpod-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope: Deactivated successfully.
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.355036577 +0000 UTC m=+0.273840576 container died 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 05 02:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4bd43943694570aac397ed5225c14b2dba046edfcf459ed7a8a225be95a17ad-merged.mount: Deactivated successfully.
Dec 05 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.432790813 +0000 UTC m=+0.351594782 container remove 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 05 02:37:11 compute-0 systemd[1]: libpod-conmon-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope: Deactivated successfully.
Dec 05 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.740009366 +0000 UTC m=+0.086930023 container create b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.704590788 +0000 UTC m=+0.051511505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:11 compute-0 systemd[1]: Started libpod-conmon-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope.
Dec 05 02:37:11 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.935015673 +0000 UTC m=+0.281936410 container init b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.965098546 +0000 UTC m=+0.312019183 container start b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 05 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.971378938 +0000 UTC m=+0.318299605 container attach b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:37:12 compute-0 ceph-mon[192914]: pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:13 compute-0 nova_compute[349548]: 2025-12-05 02:37:13.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:37:13 compute-0 nova_compute[349548]: 2025-12-05 02:37:13.189 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:13 compute-0 frosty_fermat[489461]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:37:13 compute-0 frosty_fermat[489461]: --> relative data size: 1.0
Dec 05 02:37:13 compute-0 frosty_fermat[489461]: --> All data devices are unavailable
Dec 05 02:37:13 compute-0 systemd[1]: libpod-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Deactivated successfully.
Dec 05 02:37:13 compute-0 podman[489444]: 2025-12-05 02:37:13.298145671 +0000 UTC m=+1.645066328 container died b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 02:37:13 compute-0 systemd[1]: libpod-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Consumed 1.285s CPU time.
Dec 05 02:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2-merged.mount: Deactivated successfully.
Dec 05 02:37:13 compute-0 podman[489444]: 2025-12-05 02:37:13.384173307 +0000 UTC m=+1.731093934 container remove b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 05 02:37:13 compute-0 systemd[1]: libpod-conmon-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Deactivated successfully.
Dec 05 02:37:13 compute-0 sudo[489342]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:13 compute-0 sudo[489501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:13 compute-0 sudo[489501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:13 compute-0 sudo[489501]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:13 compute-0 sudo[489526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:37:13 compute-0 sudo[489526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:13 compute-0 sudo[489526]: pam_unix(sudo:session): session closed for user root
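The sudo triplets above (a pam session opened and closed around /bin/true, then around /bin/which python3) are cephadm's per-host probe sequence: confirm passwordless sudo works, locate the interpreter, then ship the real command, which follows shortly. A local re-creation of the two probes (illustrative only; cephadm drives these over SSH as the ceph-admin user):

    import subprocess

    # Replay the two probe commands seen in the journal.
    for cmd in (["sudo", "/bin/true"], ["sudo", "/bin/which", "python3"]):
        r = subprocess.run(cmd, capture_output=True, text=True)
        print(cmd, "->", r.returncode, r.stdout.strip())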
Dec 05 02:37:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.738074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233738134, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1321, "num_deletes": 250, "total_data_size": 2050765, "memory_usage": 2078336, "flush_reason": "Manual Compaction"}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233753272, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 1192564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53308, "largest_seqno": 54628, "table_properties": {"data_size": 1187829, "index_size": 2130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12280, "raw_average_key_size": 20, "raw_value_size": 1177581, "raw_average_value_size": 1982, "num_data_blocks": 97, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764902096, "oldest_key_time": 1764902096, "file_creation_time": 1764902233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15295 microseconds, and 8413 cpu microseconds.
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.753367) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 1192564 bytes OK
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.753394) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756033) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756062) EVENT_LOG_v1 {"time_micros": 1764902233756053, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2044868, prev total WAL file size 2044868, number of live WAL files 2.
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.757556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(1164KB)], [128(8930KB)]
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233757606, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10337330, "oldest_snapshot_seqno": -1}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6813 keys, 7865586 bytes, temperature: kUnknown
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233824068, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7865586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7824950, "index_size": 22475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 178050, "raw_average_key_size": 26, "raw_value_size": 7706698, "raw_average_value_size": 1131, "num_data_blocks": 890, "num_entries": 6813, "num_filter_entries": 6813, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764902233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.824409) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7865586 bytes
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.826785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 118.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(15.3) write-amplify(6.6) OK, records in: 7262, records dropped: 449 output_compression: NoCompression
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.826815) EVENT_LOG_v1 {"time_micros": 1764902233826801, "job": 78, "event": "compaction_finished", "compaction_time_micros": 66592, "compaction_time_cpu_micros": 36646, "output_level": 6, "num_output_files": 1, "total_output_size": 7865586, "num_input_records": 7262, "num_output_records": 6813, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233827459, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233832543, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.757304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
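The flush/compaction pair above (JOB 77 flushing the memtable to L0 table #130, JOB 78 manually compacting it together with L6 table #128 into table #131) reports read-write-amplify 15.3 and write-amplify 6.6. Both figures can be reproduced from the byte counts in the same EVENT_LOG_v1 records:

    # Byte counts taken verbatim from the EVENT_LOG_v1 entries above.
    l0_input = 1_192_564        # table #130, produced by the JOB 77 flush
    total_read = 10_337_330     # JOB 78 input_data_size (#130 + #128)
    total_written = 7_865_586   # JOB 78 output, table #131

    print(f"write-amplify      {total_written / l0_input:.1f}")                 # 6.6
    print(f"read-write-amplify {(total_read + total_written) / l0_input:.1f}")  # 15.3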
Dec 05 02:37:13 compute-0 sudo[489551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:13 compute-0 sudo[489551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:13 compute-0 sudo[489551]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:13 compute-0 sudo[489576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:37:13 compute-0 sudo[489576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:14 compute-0 ceph-mon[192914]: pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.544199572 +0000 UTC m=+0.066419218 container create bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.510746451 +0000 UTC m=+0.032966157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:14 compute-0 systemd[1]: Started libpod-conmon-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope.
Dec 05 02:37:14 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.696615443 +0000 UTC m=+0.218835149 container init bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.713140813 +0000 UTC m=+0.235360469 container start bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.720755634 +0000 UTC m=+0.242975360 container attach bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:37:14 compute-0 romantic_saha[489653]: 167 167
Dec 05 02:37:14 compute-0 systemd[1]: libpod-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope: Deactivated successfully.
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.726113849 +0000 UTC m=+0.248333535 container died bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-773c81cc43e355514af9ca0851138f229d481544da94f5f7b644875be3eaec27-merged.mount: Deactivated successfully.
Dec 05 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.80335945 +0000 UTC m=+0.325579106 container remove bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:37:14 compute-0 systemd[1]: libpod-conmon-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope: Deactivated successfully.
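romantic_saha ran for a few milliseconds and printed only "167 167", which is consistent with the uid and gid of the ceph user baked into the upstream Ceph image; cephadm uses throwaway containers like this to learn which ids should own the files it writes on the host. The probe's entrypoint is not visible in the journal; one plausible equivalent, as an assumption:

    import subprocess

    # Assumption: a stat-style probe produced the "167 167" line; the real
    # entrypoint is not captured in this log.
    image = ("quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f13"
             "36506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167"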
Dec 05 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.077954027 +0000 UTC m=+0.087478949 container create 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 05 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.046725151 +0000 UTC m=+0.056250113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:15 compute-0 systemd[1]: Started libpod-conmon-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope.
Dec 05 02:37:15 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
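The xfs messages above fire each time podman remounts an overlay for a container: these filesystems were created without the bigtime feature, so inode timestamps cap at 2038-01-19 (0x7fffffff seconds). That is harmless for short-lived containers, and checkable per mount (assuming an xfsprogs release new enough to report the flag):

    import subprocess

    # Prints True when the filesystem was made with bigtime=1 (y2038-safe).
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True).stdout
    print("bigtime=1" in info)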
Dec 05 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.256832937 +0000 UTC m=+0.266357939 container init 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.276214079 +0000 UTC m=+0.285739021 container start 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec 05 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.282655986 +0000 UTC m=+0.292180998 container attach 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:37:16 compute-0 nova_compute[349548]: 2025-12-05 02:37:16.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:16 compute-0 upbeat_kare[489692]: {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     "0": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "devices": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "/dev/loop3"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             ],
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_name": "ceph_lv0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_size": "21470642176",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "name": "ceph_lv0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "tags": {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_name": "ceph",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.crush_device_class": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.encrypted": "0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_id": "0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.vdo": "0"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             },
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "vg_name": "ceph_vg0"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         }
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     ],
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     "1": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "devices": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "/dev/loop4"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             ],
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_name": "ceph_lv1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_size": "21470642176",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "name": "ceph_lv1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "tags": {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_name": "ceph",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.crush_device_class": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.encrypted": "0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_id": "1",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.vdo": "0"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             },
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "vg_name": "ceph_vg1"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         }
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     ],
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     "2": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "devices": [
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "/dev/loop5"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             ],
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_name": "ceph_lv2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_size": "21470642176",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "name": "ceph_lv2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "tags": {
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.cluster_name": "ceph",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.crush_device_class": "",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.encrypted": "0",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osd_id": "2",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:                 "ceph.vdo": "0"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             },
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "type": "block",
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:             "vg_name": "ceph_vg2"
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:         }
Dec 05 02:37:16 compute-0 upbeat_kare[489692]:     ]
Dec 05 02:37:16 compute-0 upbeat_kare[489692]: }
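upbeat_kare is the `ceph-volume ... lvm list --format json` run issued under sudo above (pid 489576): a JSON object keyed by OSD id, one entry per logical volume, with the OSD metadata duplicated into the LV tags. A minimal consumer, abbreviated to one of the three OSDs shown:

    import json

    # Abbreviated copy of the payload printed above (OSD 0 only).
    payload = json.loads("""
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "devices": ["/dev/loop3"],
            "tags": {"ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
                     "ceph.type": "block"}}]}
    """)
    for osd_id, lvs in payload.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])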
Dec 05 02:37:16 compute-0 systemd[1]: libpod-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope: Deactivated successfully.
Dec 05 02:37:16 compute-0 podman[489676]: 2025-12-05 02:37:16.111960876 +0000 UTC m=+1.121485828 container died 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 05 02:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3-merged.mount: Deactivated successfully.
Dec 05 02:37:16 compute-0 podman[489676]: 2025-12-05 02:37:16.215653254 +0000 UTC m=+1.225178206 container remove 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 05 02:37:16 compute-0 systemd[1]: libpod-conmon-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope: Deactivated successfully.
Dec 05 02:37:16 compute-0 sudo[489576]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:16 compute-0 ceph-mon[192914]: pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:16 compute-0 sudo[489715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:16 compute-0 sudo[489715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:16 compute-0 sudo[489715]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:37:16
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'vms']
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
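One balancer tick: upmap mode, a 5% ceiling on misplaced objects, and 0 changes prepared out of a per-round cap of 10 across the eleven listed pools, i.e. the PG distribution already needs no correction. The same state can be read back on demand (assuming `ceph balancer status` emits JSON, as recent releases do):

    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status"],
        capture_output=True, text=True).stdout)
    print(status.get("mode"), status.get("active"))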
Dec 05 02:37:16 compute-0 sudo[489740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:37:16 compute-0 sudo[489740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:16 compute-0 sudo[489740]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:16 compute-0 sudo[489765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:16 compute-0 sudo[489765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:16 compute-0 sudo[489765]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:16 compute-0 sudo[489790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:37:16 compute-0 sudo[489790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.447111062 +0000 UTC m=+0.099285132 container create 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.417948006 +0000 UTC m=+0.070122156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:17 compute-0 systemd[1]: Started libpod-conmon-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope.
Dec 05 02:37:17 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.577491584 +0000 UTC m=+0.229665684 container init 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.593363575 +0000 UTC m=+0.245537645 container start 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.600254605 +0000 UTC m=+0.252428695 container attach 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:37:17 compute-0 charming_torvalds[489869]: 167 167
Dec 05 02:37:17 compute-0 systemd[1]: libpod-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope: Deactivated successfully.
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.604287592 +0000 UTC m=+0.256461712 container died 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 05 02:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f287d55666313ef9199d5e4780852ac981aa896278b1c990806919ec7652d5-merged.mount: Deactivated successfully.
Dec 05 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.69142065 +0000 UTC m=+0.343594750 container remove 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 05 02:37:17 compute-0 systemd[1]: libpod-conmon-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope: Deactivated successfully.
Dec 05 02:37:17 compute-0 podman[489892]: 2025-12-05 02:37:17.979208039 +0000 UTC m=+0.084825922 container create 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 05 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:17.944872763 +0000 UTC m=+0.050490696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:37:18 compute-0 systemd[1]: Started libpod-conmon-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope.
Dec 05 02:37:18 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.145252595 +0000 UTC m=+0.250870528 container init 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.178575822 +0000 UTC m=+0.284193695 container start 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 05 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.185109322 +0000 UTC m=+0.290727255 container attach 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 05 02:37:18 compute-0 nova_compute[349548]: 2025-12-05 02:37:18.192 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:18 compute-0 ceph-mon[192914]: pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
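The rbd_support module above reloads its mirror-snapshot and trash-purge schedules per RBD pool (vms, volumes, backups, images); the empty start_after means each scan begins from the start. The loaded schedules can be listed directly (commands available in Reef; empty output would match this log):

    import subprocess

    for args in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
                 ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
        out = subprocess.run(args, capture_output=True, text=True).stdout
        print(" ".join(args), "->", out.strip() or "(none)")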
Dec 05 02:37:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]: {
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_id": 0,
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "type": "bluestore"
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     },
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_id": 1,
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "type": "bluestore"
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     },
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_id": 2,
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:         "type": "bluestore"
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]:     }
Dec 05 02:37:19 compute-0 festive_hodgkin[489906]: }
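festive_hodgkin is the matching `ceph-volume ... raw list --format json` run from the sudo command above: keyed by osd_uuid rather than OSD id, and reporting the device-mapper path instead of the LV path. The two inventories should agree per OSD, and with the values printed in this log they do:

    # osd_id -> osd_fsid, from the lvm list payload above.
    lvm = {0: "8c4de221-4fda-4bb1-b794-fc4329742186",
           1: "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
           2: "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}
    # osd_uuid -> osd_id, from the raw list payload above.
    raw = {"8c4de221-4fda-4bb1-b794-fc4329742186": 0,
           "944e6457-e96a-45b2-ba7f-23ecd70be9f8": 1,
           "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": 2}
    assert all(raw[fsid] == osd_id for osd_id, fsid in lvm.items())
    print("lvm list and raw list agree for OSDs", sorted(lvm))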
Dec 05 02:37:19 compute-0 systemd[1]: libpod-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Deactivated successfully.
Dec 05 02:37:19 compute-0 podman[489892]: 2025-12-05 02:37:19.392362237 +0000 UTC m=+1.497980110 container died 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 05 02:37:19 compute-0 systemd[1]: libpod-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Consumed 1.209s CPU time.
Dec 05 02:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803-merged.mount: Deactivated successfully.
Dec 05 02:37:19 compute-0 podman[489892]: 2025-12-05 02:37:19.497654152 +0000 UTC m=+1.603272035 container remove 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:37:19 compute-0 systemd[1]: libpod-conmon-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Deactivated successfully.
Dec 05 02:37:19 compute-0 sudo[489790]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:37:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:37:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:37:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
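
[annotation] The two mon_commands above are the cephadm mgr module persisting the device inventory it just gathered under the config-key store. A sketch of reading one key back via the ceph CLI (key name copied from the log; the stored value's shape is not shown here, so it is just printed):

    import subprocess

    # Read back the device inventory cephadm just persisted.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out[:200], "...")
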
Dec 05 02:37:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ba6f8717-8ceb-4d72-b90f-7932a88da4c5 does not exist
Dec 05 02:37:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 58e26a78-d765-4458-999b-2f20f3982d37 does not exist
Dec 05 02:37:19 compute-0 sudo[489951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:37:19 compute-0 sudo[489951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:19 compute-0 sudo[489951]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:19 compute-0 sudo[489976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:37:19 compute-0 sudo[489976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:37:19 compute-0 sudo[489976]: pam_unix(sudo:session): session closed for user root
Dec 05 02:37:20 compute-0 ceph-mon[192914]: pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:37:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:37:21 compute-0 nova_compute[349548]: 2025-12-05 02:37:21.063 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:22 compute-0 ceph-mon[192914]: pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:23 compute-0 nova_compute[349548]: 2025-12-05 02:37:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:23 compute-0 ceph-mon[192914]: pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:23 compute-0 podman[490002]: 2025-12-05 02:37:23.726150051 +0000 UTC m=+0.120645881 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:37:23 compute-0 podman[490001]: 2025-12-05 02:37:23.737498911 +0000 UTC m=+0.131068444 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:37:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:23 compute-0 podman[490004]: 2025-12-05 02:37:23.763933738 +0000 UTC m=+0.146756849 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 05 02:37:23 compute-0 podman[490003]: 2025-12-05 02:37:23.769716645 +0000 UTC m=+0.160735624 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
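
[annotation] The health_status=healthy events above are podman's periodic healthchecks; each config_data blob wires a /openstack/healthcheck script into the container (see the 'healthcheck' keys). The same check can be triggered out of band with `podman healthcheck run`; a small sketch, container names taken from the log:

    import subprocess

    # Re-run the configured healthcheck outside podman's timer.
    for name in ("node_exporter", "multipathd", "ovn_controller"):
        result = subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True, text=True,
        )
        # Exit status 0 == healthy, matching health_status=healthy above.
        print(name, "healthy" if result.returncode == 0 else "unhealthy")
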
Dec 05 02:37:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:26 compute-0 nova_compute[349548]: 2025-12-05 02:37:26.068 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:26 compute-0 ceph-mon[192914]: pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
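
[annotation] The pg_autoscaler rows above follow a simple heuristic: pg target = (pool's share of raw space) x bias x a cluster-wide PG budget, then quantized toward a power of two. The logged numbers are consistent with a budget of 300, i.e. the default mon_target_pg_per_osd of 100 times 3 OSDs; that budget is inferred from the arithmetic, not read from configuration. Re-computing three of the rows:

    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    rows = [
        # (pool, used_ratio, bias) copied from the log lines above
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.0009191400908380543, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for pool, ratio, bias in rows:
        print(f"{pool}: pg target {ratio * bias * PG_BUDGET}")
    # -> ~0.0021557249951162337, ~0.2757420272514163, ~0.0006104707950771635,
    #    matching the logged targets (up to float repr) before quantization.
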
Dec 05 02:37:28 compute-0 nova_compute[349548]: 2025-12-05 02:37:28.198 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:28 compute-0 ceph-mon[192914]: pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:29 compute-0 podman[158197]: time="2025-12-05T02:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8217 "" "Go-http-client/1.1"
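
[annotation] The two GETs above are libpod REST calls arriving over the podman API socket (the podman_exporter container sets CONTAINER_HOST=unix:///run/podman/podman.sock, per its config_data later in the log). The same endpoint can be queried with nothing but the standard library; the socket path is assumed from that setting:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
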
Dec 05 02:37:30 compute-0 ceph-mon[192914]: pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:31 compute-0 nova_compute[349548]: 2025-12-05 02:37:31.071 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
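
[annotation] These exporter errors recur on every scrape: openstack_network_exporter looks for OVS/OVN control sockets (*.ctl) under its mounted /run/openvswitch and /run/ovn (volume mounts shown in its config_data above) and finds none for ovn-northd or the ovs db server, which suggests the sockets are not present at the paths the container sees; ovn-northd in particular runs on the control plane, not on a compute node. A quick host-side check, paths taken from those mounts:

    import glob

    # Host-side paths behind the container's /run/openvswitch and /run/ovn
    # mounts (taken from the openstack_network_exporter volumes logged above).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")
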
Dec 05 02:37:32 compute-0 ceph-mon[192914]: pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:33 compute-0 nova_compute[349548]: 2025-12-05 02:37:33.201 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:34 compute-0 ceph-mon[192914]: pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:36 compute-0 nova_compute[349548]: 2025-12-05 02:37:36.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:36 compute-0 ceph-mon[192914]: pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:38 compute-0 nova_compute[349548]: 2025-12-05 02:37:38.205 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:38 compute-0 ceph-mon[192914]: pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:38 compute-0 podman[490088]: 2025-12-05 02:37:38.689127329 +0000 UTC m=+0.105414299 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:37:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:38 compute-0 podman[490089]: 2025-12-05 02:37:38.746610557 +0000 UTC m=+0.147749287 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 05 02:37:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:40 compute-0 ceph-mon[192914]: pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:40 compute-0 podman[490130]: 2025-12-05 02:37:40.729108543 +0000 UTC m=+0.130580179 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:37:40 compute-0 podman[490132]: 2025-12-05 02:37:40.744207672 +0000 UTC m=+0.135221765 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 05 02:37:40 compute-0 podman[490131]: 2025-12-05 02:37:40.758500906 +0000 UTC m=+0.162385482 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=)
Dec 05 02:37:41 compute-0 nova_compute[349548]: 2025-12-05 02:37:41.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:42 compute-0 ceph-mon[192914]: pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:43 compute-0 nova_compute[349548]: 2025-12-05 02:37:43.208 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:44 compute-0 ceph-mon[192914]: pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:37:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:37:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:37:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:37:46 compute-0 nova_compute[349548]: 2025-12-05 02:37:46.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:46 compute-0 ceph-mon[192914]: pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:37:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:37:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:48 compute-0 nova_compute[349548]: 2025-12-05 02:37:48.212 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:48 compute-0 ceph-mon[192914]: pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:50 compute-0 ceph-mon[192914]: pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:51 compute-0 nova_compute[349548]: 2025-12-05 02:37:51.087 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:52 compute-0 ceph-mon[192914]: pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:53 compute-0 nova_compute[349548]: 2025-12-05 02:37:53.215 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:54 compute-0 ceph-mon[192914]: pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:54 compute-0 podman[490185]: 2025-12-05 02:37:54.719646067 +0000 UTC m=+0.113234216 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 05 02:37:54 compute-0 podman[490184]: 2025-12-05 02:37:54.728642978 +0000 UTC m=+0.129321963 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 05 02:37:54 compute-0 podman[490187]: 2025-12-05 02:37:54.733315834 +0000 UTC m=+0.114078611 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec 05 02:37:54 compute-0 podman[490186]: 2025-12-05 02:37:54.774802737 +0000 UTC m=+0.157792379 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 05 02:37:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:56 compute-0 nova_compute[349548]: 2025-12-05 02:37:56.091 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
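
[annotation] The Acquiring/acquired/released triplet above is oslo.concurrency's standard logging around a synchronized section; neutron's ProcessMonitor serializes _check_child_processes behind a named lock. The same pattern in miniature (the function body here is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs under the named lock; oslo emits the
        # "Acquiring lock ... acquired ... released" lines seen above.
        pass

    check_child_processes()
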
Dec 05 02:37:56 compute-0 ceph-mon[192914]: pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:58 compute-0 nova_compute[349548]: 2025-12-05 02:37:58.219 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:37:58 compute-0 ceph-mon[192914]: pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:37:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:37:59 compute-0 podman[158197]: time="2025-12-05T02:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec 05 02:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8217 "" "Go-http-client/1.1"
Dec 05 02:38:00 compute-0 ceph-mon[192914]: pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:01 compute-0 nova_compute[349548]: 2025-12-05 02:38:01.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:38:02 compute-0 ceph-mon[192914]: pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:03 compute-0 nova_compute[349548]: 2025-12-05 02:38:03.222 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.099 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.263 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:38:04 compute-0 ceph-mon[192914]: pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:38:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472407716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.792 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
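
[annotation] update_available_resource sizes RBD-backed storage by shelling out to the ceph CLI; the exact command is on the "Running cmd" line above. A sketch of the same call, parsing the JSON it returns (top-level field names as emitted by recent Ceph releases; adjust for your version):

    import json
    import subprocess

    # The exact command nova-compute logged above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out).get("stats", {})
    print("avail GiB:", stats.get("total_avail_bytes", 0) / 2**30)
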
Dec 05 02:38:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.414 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.416 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3908MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.417 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.418 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.487 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.488 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.504 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 05 02:38:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3472407716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:38:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 05 02:38:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949335426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.957 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.972 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.992 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
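The inventory payload above is what placement uses for admission control; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values just logged:

    # Schedulable capacity per resource class, computed from the inventory
    # just logged with the standard placement formula.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {cap:g}")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2

Which is why this 8-vCPU host advertises 32 schedulable VCPUs at the 4.0 ratio.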
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.995 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 05 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.996 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
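The whole audit from 02:38:05.418 to 02:38:05.996 runs under the in-process "compute_resources" lock, which is why the tracker logs an acquire, 0.578s of held time, and a release. A minimal sketch of the same oslo.concurrency pattern (the function body is a placeholder, not nova's actual tracker code):

    from oslo_concurrency import lockutils

    # Placeholder body; the point is the lock name and the decorator, which
    # produce exactly the acquire/release pairs visible in the log.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        ...  # recompute the resource view and push inventory to placement

    update_available_resource()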
Dec 05 02:38:06 compute-0 nova_compute[349548]: 2025-12-05 02:38:06.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:06 compute-0 ceph-mon[192914]: pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:06 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1949335426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 05 02:38:06 compute-0 nova_compute[349548]: 2025-12-05 02:38:06.964 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:07 compute-0 ceph-mon[192914]: pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 05 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:09 compute-0 nova_compute[349548]: 2025-12-05 02:38:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:09 compute-0 podman[490311]: 2025-12-05 02:38:09.72466403 +0000 UTC m=+0.117668697 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 05 02:38:09 compute-0 podman[490310]: 2025-12-05 02:38:09.75908822 +0000 UTC m=+0.156593954 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
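The periodic health_status entries come from podman's healthcheck timers running the probe declared in each container's healthcheck config. The same probe can be fired by hand; exit status 0 corresponds to the "healthy" status logged above (container name copied from the log):

    import subprocess

    # Fire the declared probe by hand; rc 0 == the "healthy" status above.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'podman_exporter']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')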
Dec 05 02:38:10 compute-0 ceph-mon[192914]: pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:11 compute-0 nova_compute[349548]: 2025-12-05 02:38:11.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:11 compute-0 podman[490352]: 2025-12-05 02:38:11.716518204 +0000 UTC m=+0.112484331 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec 05 02:38:11 compute-0 podman[490351]: 2025-12-05 02:38:11.721087233 +0000 UTC m=+0.120944379 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc.)
Dec 05 02:38:11 compute-0 podman[490350]: 2025-12-05 02:38:11.763810827 +0000 UTC m=+0.169326983 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 05 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:12 compute-0 ceph-mon[192914]: pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:13 compute-0 nova_compute[349548]: 2025-12-05 02:38:13.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:14 compute-0 ceph-mon[192914]: pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:15 compute-0 nova_compute[349548]: 2025-12-05 02:38:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:16 compute-0 nova_compute[349548]: 2025-12-05 02:38:16.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:16 compute-0 ceph-mon[192914]: pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:38:16
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr']
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
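This balancer pass evaluated all eleven pools in upmap mode and prepared 0 of a possible 10 changes, meaning the PGs were already even. The "max misplaced 0.050000" figure caps how much data a plan may set in motion; read against the 321 PGs in the pgmap, and assuming the cap applies as a plain ratio of PG count, that is roughly:

    # 321 PGs, max misplaced 0.05 => at most ~16 PGs in motion per plan
    # (assumption: the cap is applied as a plain ratio of the PG count).
    print(int(321 * 0.05))  # -> 16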
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:16 compute-0 sshd-session[490406]: Accepted publickey for zuul from 192.168.122.10 port 46354 ssh2: ECDSA SHA256:hwGZQQKn4dthinw64cUBuhjxWFkXfIx1t2ux3FT0yvk
Dec 05 02:38:16 compute-0 systemd-logind[792]: New session 67 of user zuul.
Dec 05 02:38:16 compute-0 systemd[1]: Started Session 67 of User zuul.
Dec 05 02:38:16 compute-0 sshd-session[490406]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 05 02:38:16 compute-0 sudo[490410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 05 02:38:16 compute-0 sudo[490410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
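This sudo line is the CI job collecting diagnostics: sos report in batch mode (no prompts), keeping rotated logs (--all-logs), writing under /var/tmp/sos-osp, restricted to the container, openstack_edpm, system, storage and virt profiles. The same invocation from Python, flags copied verbatim from the log:

    import subprocess

    # Flags copied verbatim from the sudo line above.
    subprocess.run(
        ['sos', 'report', '--batch', '--all-logs',
         '--tmp-dir=/var/tmp/sos-osp',
         '-p', 'container,openstack_edpm,system,storage,virt'],
        check=True)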
Dec 05 02:38:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:18 compute-0 nova_compute[349548]: 2025-12-05 02:38:18.231 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:18 compute-0 ceph-mon[192914]: pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 05 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
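The rbd_support handlers above just reloaded mirror-snapshot and trash-purge schedules for the vms, volumes, backups and images pools; the empty start_after= values indicate no schedules are defined. A hedged sketch of the equivalent CLI check, assuming an rbd client configured for this cluster (the --recursive flag is assumed available on this release):

    import subprocess

    # List both schedule types for each pool named in the log; empty output
    # matches the empty start_after= values above.
    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls',
                        '--pool', pool, '--recursive'], check=False)
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls',
                        '--pool', pool, '--recursive'], check=False)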
Dec 05 02:38:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:19 compute-0 ceph-mon[192914]: pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:20 compute-0 sudo[490543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:20 compute-0 sudo[490543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:20 compute-0 sudo[490543]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:20 compute-0 sudo[490583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:38:20 compute-0 sudo[490583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:20 compute-0 sudo[490583]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:20 compute-0 sudo[490611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:20 compute-0 sudo[490611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:20 compute-0 sudo[490611]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:20 compute-0 sudo[490655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 05 02:38:20 compute-0 sudo[490655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:20 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15879 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:21 compute-0 nova_compute[349548]: 2025-12-05 02:38:21.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 05 02:38:21 compute-0 sudo[490655]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:21 compute-0 nova_compute[349548]: 2025-12-05 02:38:21.112 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c6b284c2-2224-42e8-9ddd-61ceb1b93540 does not exist
Dec 05 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 18fc9123-a481-42fb-870a-3a053c9ba51f does not exist
Dec 05 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb29df09-c097-4a72-b6fb-aba2a8a62520 does not exist
Dec 05 02:38:21 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15881 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:21 compute-0 sudo[490739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:21 compute-0 sudo[490739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:21 compute-0 sudo[490739]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:21 compute-0 sudo[490768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:38:21 compute-0 sudo[490768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:21 compute-0 sudo[490768]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:21 compute-0 sudo[490811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:21 compute-0 sudo[490811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:21 compute-0 sudo[490811]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 05 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273979279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:38:21 compute-0 sudo[490836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Dec 05 02:38:21 compute-0 sudo[490836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:22 compute-0 ceph-mon[192914]: from='client.15879 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:22 compute-0 ceph-mon[192914]: from='client.15881 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:22 compute-0 ceph-mon[192914]: pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:22 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2273979279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.320236482 +0000 UTC m=+0.102704255 container create 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.273942117 +0000 UTC m=+0.056409900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:22 compute-0 systemd[1]: Started libpod-conmon-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope.
Dec 05 02:38:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.447398656 +0000 UTC m=+0.229866509 container init 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.465970259 +0000 UTC m=+0.248438072 container start 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.474275493 +0000 UTC m=+0.256743296 container attach 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 05 02:38:22 compute-0 frosty_hertz[490943]: 167 167
Dec 05 02:38:22 compute-0 systemd[1]: libpod-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope: Deactivated successfully.
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.479590183 +0000 UTC m=+0.262057986 container died 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c15fe9051b497c8e572e16586cb612808aae0f015f5ffdb2c620221228401d2d-merged.mount: Deactivated successfully.
Dec 05 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.542567038 +0000 UTC m=+0.325034801 container remove 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:22 compute-0 systemd[1]: libpod-conmon-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope: Deactivated successfully.
Dec 05 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.793329834 +0000 UTC m=+0.079913903 container create e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 05 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.76479269 +0000 UTC m=+0.051376789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:22 compute-0 systemd[1]: Started libpod-conmon-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope.
Dec 05 02:38:22 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.967042159 +0000 UTC m=+0.253626238 container init e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.979419718 +0000 UTC m=+0.266003777 container start e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 05 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.983784921 +0000 UTC m=+0.270368980 container attach e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:23 compute-0 nova_compute[349548]: 2025-12-05 02:38:23.232 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:23 compute-0 ceph-mon[192914]: pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:24 compute-0 funny_ganguly[490983]: --> passed data devices: 0 physical, 3 LVM
Dec 05 02:38:24 compute-0 funny_ganguly[490983]: --> relative data size: 1.0
Dec 05 02:38:24 compute-0 funny_ganguly[490983]: --> All data devices are unavailable
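The three lines above are ceph-volume's lvm batch report for the pre-created LVs ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2: 0 physical devices, 3 LVM devices, all rejected as unavailable, which is the expected outcome once OSDs already occupy those LVs. cephadm then verifies this with the lvm list call at 02:38:25. A sketch of that verification step, assuming a cephadm binary on PATH (the log instead runs a hash-named copy under /var/lib/ceph/<fsid>/):

    import json, subprocess

    # Verification step cephadm performs right after the batch report.
    fsid = 'cbd280d3-cbd8-528b-ace6-2b3a887cdcee'
    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--fsid', fsid, '--',
         'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    print(sorted(json.loads(out)))  # OSD ids keyed by their backing LVs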
Dec 05 02:38:24 compute-0 systemd[1]: libpod-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Deactivated successfully.
Dec 05 02:38:24 compute-0 systemd[1]: libpod-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Consumed 1.274s CPU time.
Dec 05 02:38:24 compute-0 podman[490967]: 2025-12-05 02:38:24.307824735 +0000 UTC m=+1.594408834 container died e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 05 02:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef-merged.mount: Deactivated successfully.
Dec 05 02:38:24 compute-0 podman[490967]: 2025-12-05 02:38:24.428340952 +0000 UTC m=+1.714925011 container remove e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 05 02:38:24 compute-0 systemd[1]: libpod-conmon-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Deactivated successfully.
Dec 05 02:38:24 compute-0 sudo[490836]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:24 compute-0 sudo[491031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:24 compute-0 sudo[491031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:24 compute-0 sudo[491031]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:24 compute-0 sudo[491068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:38:24 compute-0 sudo[491068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:24 compute-0 sudo[491068]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:24 compute-0 ovs-vsctl[491130]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
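The ovs-vsctl ERR above is noise from a probe reading other_config:dpdk-init on an Open vSwitch that never had DPDK enabled; the key simply does not exist. The same query can be made without tripping the error by using --if-exists:

    import subprocess

    # --if-exists turns the missing key into empty output instead of an ERR.
    r = subprocess.run(['ovs-vsctl', '--if-exists', 'get', 'Open_vSwitch',
                        '.', 'other_config:dpdk-init'],
                       capture_output=True, text=True)
    print(r.stdout.strip() or 'dpdk-init not set')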
Dec 05 02:38:24 compute-0 podman[491098]: 2025-12-05 02:38:24.908993457 +0000 UTC m=+0.121609458 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:38:24 compute-0 sudo[491118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:24 compute-0 sudo[491118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:24 compute-0 sudo[491118]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:24 compute-0 podman[491099]: 2025-12-05 02:38:24.934158767 +0000 UTC m=+0.133496814 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:38:24 compute-0 podman[491100]: 2025-12-05 02:38:24.938027946 +0000 UTC m=+0.136958841 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git)
Dec 05 02:38:25 compute-0 sudo[491198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- lvm list --format json
Dec 05 02:38:25 compute-0 sudo[491198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:25 compute-0 podman[491189]: 2025-12-05 02:38:25.05493472 +0000 UTC m=+0.131020243 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 05 02:38:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:25 compute-0 ceph-mon[192914]: pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.549867488 +0000 UTC m=+0.094826773 container create 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.512574107 +0000 UTC m=+0.057533452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:25 compute-0 systemd[1]: Started libpod-conmon-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope.
Dec 05 02:38:25 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.730292693 +0000 UTC m=+0.275251968 container init 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 05 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.750149773 +0000 UTC m=+0.295109068 container start 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 05 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.757548961 +0000 UTC m=+0.302508216 container attach 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:25 compute-0 interesting_jemison[491332]: 167 167
Dec 05 02:38:25 compute-0 systemd[1]: libpod-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope: Deactivated successfully.
Dec 05 02:38:25 compute-0 podman[491340]: 2025-12-05 02:38:25.838654667 +0000 UTC m=+0.059341724 container died 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2322a8c2eed5e7aa0d1e019ee0707e37f0ec60cf97c5bce9a7abeb29745ec2-merged.mount: Deactivated successfully.
Dec 05 02:38:25 compute-0 podman[491340]: 2025-12-05 02:38:25.915228475 +0000 UTC m=+0.135915522 container remove 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:38:25 compute-0 systemd[1]: libpod-conmon-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope: Deactivated successfully.
Dec 05 02:38:26 compute-0 nova_compute[349548]: 2025-12-05 02:38:26.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 05 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 05 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.21423023 +0000 UTC m=+0.089472471 container create da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 05 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.178460682 +0000 UTC m=+0.053703033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:26 compute-0 systemd[1]: Started libpod-conmon-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope.
Dec 05 02:38:26 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.373702444 +0000 UTC m=+0.248944775 container init da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.396768084 +0000 UTC m=+0.272010335 container start da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.400976393 +0000 UTC m=+0.276218694 container attach da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:38:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: cache status {prefix=cache status} (starting...)
Dec 05 02:38:27 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: client ls {prefix=client ls} (starting...)
Dec 05 02:38:27 compute-0 jolly_knuth[491471]: {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     "0": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "devices": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "/dev/loop3"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             ],
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_name": "ceph_lv0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_size": "21470642176",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "name": "ceph_lv0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "tags": {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_name": "ceph",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.crush_device_class": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.encrypted": "0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_id": "0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.vdo": "0"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             },
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "vg_name": "ceph_vg0"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         }
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     ],
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     "1": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "devices": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "/dev/loop4"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             ],
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_name": "ceph_lv1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_size": "21470642176",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "name": "ceph_lv1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "tags": {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_name": "ceph",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.crush_device_class": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.encrypted": "0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_id": "1",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.vdo": "0"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             },
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "vg_name": "ceph_vg1"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         }
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     ],
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     "2": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "devices": [
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "/dev/loop5"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             ],
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_name": "ceph_lv2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_size": "21470642176",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "name": "ceph_lv2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "tags": {
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cephx_lockbox_secret": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.cluster_name": "ceph",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.crush_device_class": "",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.encrypted": "0",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osd_id": "2",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:                 "ceph.vdo": "0"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             },
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "type": "block",
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:             "vg_name": "ceph_vg2"
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:         }
Dec 05 02:38:27 compute-0 jolly_knuth[491471]:     ]
Dec 05 02:38:27 compute-0 jolly_knuth[491471]: }
Dec 05 02:38:27 compute-0 systemd[1]: libpod-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope: Deactivated successfully.
Dec 05 02:38:27 compute-0 podman[491439]: 2025-12-05 02:38:27.206753631 +0000 UTC m=+1.081995882 container died da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 05 02:38:27 compute-0 lvm[491651]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 05 02:38:27 compute-0 lvm[491651]: VG ceph_vg0 finished
Dec 05 02:38:27 compute-0 lvm[491658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 05 02:38:27 compute-0 lvm[491658]: VG ceph_vg2 finished
Dec 05 02:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8-merged.mount: Deactivated successfully.
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:27 compute-0 podman[491439]: 2025-12-05 02:38:27.292097096 +0000 UTC m=+1.167339347 container remove da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 05 02:38:27 compute-0 systemd[1]: libpod-conmon-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope: Deactivated successfully.
Dec 05 02:38:27 compute-0 lvm[491686]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 05 02:38:27 compute-0 lvm[491686]: VG ceph_vg1 finished
Dec 05 02:38:27 compute-0 ceph-mon[192914]: pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:27 compute-0 sudo[491198]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:27 compute-0 sudo[491695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:27 compute-0 sudo[491695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:27 compute-0 sudo[491695]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:27 compute-0 sudo[491747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 05 02:38:27 compute-0 sudo[491747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:27 compute-0 sudo[491747]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:27 compute-0 sudo[491791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec 05 02:38:27 compute-0 sudo[491791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:38:27 compute-0 sudo[491791]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:27 compute-0 sudo[491845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -- raw list --format json
Dec 05 02:38:27 compute-0 sudo[491845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:27 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: damage ls {prefix=damage ls} (starting...)
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump loads {prefix=dump loads} (starting...)
Dec 05 02:38:28 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15885 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.206121645 +0000 UTC m=+0.122392680 container create bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.116201331 +0000 UTC m=+0.032472386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:28 compute-0 nova_compute[349548]: 2025-12-05 02:38:28.234 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:28 compute-0 systemd[1]: Started libpod-conmon-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope.
Dec 05 02:38:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.302970805 +0000 UTC m=+0.219241870 container init bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.31984787 +0000 UTC m=+0.236118905 container start bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.324148602 +0000 UTC m=+0.240419657 container attach bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec 05 02:38:28 compute-0 funny_brattain[492018]: 167 167
Dec 05 02:38:28 compute-0 systemd[1]: libpod-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope: Deactivated successfully.
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.326068666 +0000 UTC m=+0.242339701 container died bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 05 02:38:28 compute-0 ceph-mon[192914]: from='client.15885 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f05bc9066e0cfe1d0aff81f454e1e8744b47753fb81a2b52d4ecbeb024a6a58-merged.mount: Deactivated successfully.
Dec 05 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.386669323 +0000 UTC m=+0.302940358 container remove bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 05 02:38:28 compute-0 systemd[1]: libpod-conmon-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope: Deactivated successfully.
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 05 02:38:28 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.58845819 +0000 UTC m=+0.064969292 container create a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 05 02:38:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 05 02:38:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603919827' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.562867639 +0000 UTC m=+0.039378831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 05 02:38:28 compute-0 systemd[1]: Started libpod-conmon-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope.
Dec 05 02:38:28 compute-0 systemd[1]: Started libcrun container.
Dec 05 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 05 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.699090688 +0000 UTC m=+0.175601810 container init a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 05 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.713684049 +0000 UTC m=+0.190195151 container start a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 05 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.718733722 +0000 UTC m=+0.195244844 container attach a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 05 02:38:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 05 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371342701' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:29 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15895 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:29.258+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:38:29 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:38:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:29 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: ops {prefix=ops} (starting...)
Dec 05 02:38:29 compute-0 ceph-mon[192914]: from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1603919827' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 05 02:38:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/371342701' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 05 02:38:29 compute-0 ceph-mon[192914]: from='client.15895 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:29 compute-0 ceph-mon[192914]: pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741055999' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 02:38:29 compute-0 silly_merkle[492097]: {
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_id": 0,
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "type": "bluestore"
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     },
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_id": 1,
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "type": "bluestore"
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     },
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_id": 2,
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec 05 02:38:29 compute-0 silly_merkle[492097]:         "type": "bluestore"
Dec 05 02:38:29 compute-0 silly_merkle[492097]:     }
Dec 05 02:38:29 compute-0 silly_merkle[492097]: }
Dec 05 02:38:29 compute-0 podman[492064]: 2025-12-05 02:38:29.690752205 +0000 UTC m=+1.167263317 container died a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 05 02:38:29 compute-0 systemd[1]: libpod-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope: Deactivated successfully.
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703454689' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 02:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1-merged.mount: Deactivated successfully.
Dec 05 02:38:29 compute-0 podman[158197]: time="2025-12-05T02:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 05 02:38:29 compute-0 podman[492064]: 2025-12-05 02:38:29.765395567 +0000 UTC m=+1.241906669 container remove a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 05 02:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44140 "" "Go-http-client/1.1"
Dec 05 02:38:29 compute-0 systemd[1]: libpod-conmon-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope: Deactivated successfully.
Dec 05 02:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8205 "" "Go-http-client/1.1"
Dec 05 02:38:29 compute-0 sudo[491845]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cef351cb-e8f6-473b-bb51-15817bfed718 does not exist
Dec 05 02:38:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6fa266ab-8dc2-433d-8fc8-fe8a76628d74 does not exist
Dec 05 02:38:29 compute-0 sudo[492286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 05 02:38:29 compute-0 sudo[492286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:29 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: session ls {prefix=session ls} (starting...)
Dec 05 02:38:29 compute-0 sudo[492286]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 05 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659965562' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 02:38:30 compute-0 sudo[492330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 05 02:38:30 compute-0 sudo[492330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 05 02:38:30 compute-0 sudo[492330]: pam_unix(sudo:session): session closed for user root
Dec 05 02:38:30 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: status {prefix=status} (starting...)
Dec 05 02:38:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 05 02:38:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576470875' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15905 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3741055999' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1703454689' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1659965562' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3576470875' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 05 02:38:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086202521' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:38:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15909 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 05 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805997846' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:38:31 compute-0 nova_compute[349548]: 2025-12-05 02:38:31.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 05 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674282684' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:31 compute-0 ceph-mon[192914]: from='client.15905 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1086202521' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: from='client.15909 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1805997846' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3674282684' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 05 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 05 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 05 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 05 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768399922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 05 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465384984' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 05 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729641969' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:32 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:32.126+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 02:38:32 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 05 02:38:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3768399922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/465384984' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2729641969' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mon[192914]: from='client.15921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 05 02:38:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482079521' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 05 02:38:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586608825' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 02:38:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15927 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 05 02:38:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475494273' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15931 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:33 compute-0 nova_compute[349548]: 2025-12-05 02:38:33.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/482079521' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/586608825' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: from='client.15927 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1475494273' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: from='client.15931 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 05 02:38:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162378691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15935 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:18.746080+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:19.746336+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:20.746632+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:21.746853+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:22.747426+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:23.747756+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c43a4cf400 session 0x55c43a54e780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c4397da1e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.469711304s of 174.075759888s, submitted: 52
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c43986dc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:24.748172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:25.748575+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b3000 session 0x55c4399641e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:26.749077+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:27.749381+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:28.749809+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:29.750196+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:30.750583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:31.751064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.751573+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.751810+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.715733528s of 10.014011383s, submitted: 44
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 34160640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.752314+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.752705+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.753081+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.753430+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.753749+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.753994+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.754272+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.754557+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.754913+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.755195+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.755609+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2c00 session 0x55c43990bc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c437982c00 session 0x55c4398c2d20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4399f0c00 session 0x55c4398c2780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585576057s of 11.133249283s, submitted: 76
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.756105+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 37683200 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c4373285a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.756392+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.756630+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.756970+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.757175+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.757603+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.757856+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.758359+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.758831+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.759262+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.759772+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.760171+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.760599+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.761145+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.761575+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.761970+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.762370+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.762770+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.763156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.763391+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.763819+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.764209+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.764651+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.765164+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.765560+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.766160+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.766549+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.767194+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.767619+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.767840+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.768191+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.768557+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.769031+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.769246+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.769600+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.770033+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.770399+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.770746+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.771102+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.771493+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.771877+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.772170+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.772589+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.773111+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.773427+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.773811+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.774173+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.774596+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.775073+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.775566+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.775776+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.776165+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.776586+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.777046+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.777415+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.777713+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.778053+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.778460+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.778827+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.779032+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.779271+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.779601+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.779870+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.780300+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.780622+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.780961+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.781151+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.781503+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.781831+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.782183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.782425+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.782839+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.783231+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.783517+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.783759+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.795158+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.795541+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.796162+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.796535+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.797009+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.797217+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 80.609878540s of 80.641670227s, submitted: 13
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 37224448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.797395+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c439c743c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.797628+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061603 data_alloc: 218103808 data_used: 4386816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10c5c1b/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.798149+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b3000 session 0x55c4398a9860
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.798447+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.798993+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.799329+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.799732+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.800083+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.800353+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.800730+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.801154+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.801523+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.801772+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.802225+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.802424+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.802678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.803118+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.803592+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.804141+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.804502+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.805045+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.805419+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.805762+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.806165+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.806493+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.806813+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.807204+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.807576+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.808053+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.808417+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.808662+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.808933+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.809172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.809533+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.810045+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.810420+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.810794+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.811167+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.811500+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.811851+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.812240+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.812666+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.813095+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.813444+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.813798+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.814169+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.814582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.814956+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.815345+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.815694+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.816147+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.816409+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.816785+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.817173+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.817482+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.818061+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.818421+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.818674+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c437982c00 session 0x55c4373b0780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2000 session 0x55c43911be00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2800 session 0x55c43803e780
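
[annotation] The handle_auth_request / ms_handle_reset pairs above read as ordinary connection churn rather than errors: a peer opens a fresh connection (each one gets a cephx challenge) and the superseded session on the same con pointer is torn down with a reset. Pairing the two event types by pointer makes the pattern visible:

```python
import re
from collections import Counter

# Four messenger events copied from the log above.
events = [
    "monclient: handle_auth_request added challenge on 0x55c437982c00",
    "osd.2 134 ms_handle_reset con 0x55c437982c00 session 0x55c4373b0780",
    "monclient: handle_auth_request added challenge on 0x55c4398b2000",
    "osd.2 134 ms_handle_reset con 0x55c4398b2000 session 0x55c43911be00",
]

# The first hex pointer in each line is the connection; count events per con.
per_con = Counter(re.search(r"0x[0-9a-f]+", e).group() for e in events)
print(per_con)  # each con appears exactly twice: one challenge, one reset
```
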
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.818915+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.819086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399f0c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.751800537s of 59.914466858s, submitted: 18
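
[annotation] _kv_sync_thread utilization reports how busy BlueStore's key/value flush thread was over its measurement window: idle 59.75 s of 59.91 s with 18 transactions submitted means the thread spent well under 1% of the last minute committing. Later occurrences of this line in the log repeat the report over shorter, busier windows. The arithmetic:

```python
# Figures from the "_kv_sync_thread utilization" line above.
idle_s, window_s, submitted = 59.751800537, 59.914466858, 18

busy_s = window_s - idle_s
print(f"busy {busy_s:.3f}s of {window_s:.3f}s ({busy_s / window_s:.2%}), "
      f"{submitted / window_s:.2f} txn/s")   # ~0.27% busy, ~0.3 txn/s
```
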
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4cf400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
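
[annotation] "handle_osd_map epochs [135,135], i have 134, src has [1,135]" decodes as: the monitor pushed map epoch 135 (a one-element range), this OSD had applied up to 134, and the sender stores the full history 1-135, so exactly one incremental needs applying. The same pattern recurs below for epochs 136 and 137. A tiny sketch of that catch-up arithmetic (the function name is illustrative):

```python
def epochs_to_apply(have: int, sent: tuple[int, int], src: tuple[int, int]) -> range:
    """Epochs this OSD still needs, per a handle_osd_map log line."""
    first = max(have + 1, src[0])      # cannot start below what the source keeps
    return range(first, sent[1] + 1)   # ranges in the log are inclusive

# "epochs [135,135], i have 134, src has [1,135]" from the line above.
print(list(epochs_to_apply(134, (135, 135), (1, 135))))  # -> [135]
```
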
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4cf400 session 0x55c43911a1e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399f0c00 session 0x55c4378c32c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.819346+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 30171136 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9cd8000/0x0/0x4ffc00000, data 0x18c935b/0x1995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439165e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:07.819691+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c436fc4d20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 30654464 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c439164960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c437aa1c20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c43914a1e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210372 data_alloc: 218103808 data_used: 11231232
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439859860
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4ce800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a0800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398a0800 session 0x55c43914a000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4ce800 session 0x55c437319c20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4398a9a40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c43914bc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373312c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.819953+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438cd6800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 30384128 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438cd6800 session 0x55c437330000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c4399cd4a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4399cde00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.820602+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30343168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24f1f7b/0x25c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
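
[annotation] This is the first heartbeat here whose op hist field is non-empty. The field looks like a bucketed histogram of queued-op ages (empty brackets meaning nothing waiting), so [0,0,1] would be a single op sitting in the third bucket. The bucket boundaries are not visible in the log; the decoder below treats them as an explicit assumption (power-of-two seconds) purely to show the shape:

```python
def decode_op_hist(buckets: list[int]) -> dict[str, int]:
    """Decode an 'op hist' list, ASSUMING bucket i covers [2**i, 2**(i+1))
    seconds of queue age; only the bucketed-histogram shape is taken from
    the log, the unit and boundaries here are illustrative."""
    return {f"[{2**i},{2**(i+1)})s": n for i, n in enumerate(buckets) if n}

print(decode_op_hist([0, 0, 1]))  # -> {'[4,8)s': 1}: one op in the third bucket
print(decode_op_hist([]))         # -> {}: the usual empty "op hist []"
```
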
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4397dab40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.820956+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 30334976 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.821314+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 30056448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4399643c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43a4ce800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c43a4ce800 session 0x55c4398c3680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c437982c00 session 0x55c4398665a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2000 session 0x55c43980cd20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4398d4780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.821726+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304547 data_alloc: 218103808 data_used: 11243520
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.822111+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8b0c000/0x0/0x4ffc00000, data 0x2a94f58/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4398d45a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.822470+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b3800 session 0x55c437329860
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.822692+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.823165+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.823341+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370618 data_alloc: 234881024 data_used: 19755008
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508833885s of 12.396708488s, submitted: 129
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.823587+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 28868608 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.823754+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.824047+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.830207+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.830675+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43965e3c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43802a000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387780 data_alloc: 234881024 data_used: 21659648
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c437aa1e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.831017+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 29777920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c4399cd0e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.831328+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.832092+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.832515+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.833016+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302492 data_alloc: 234881024 data_used: 17076224
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.833477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30400512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.833872+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.834147+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.834511+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.835076+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.835326+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.835640+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.836079+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.836480+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.836839+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.837056+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.837436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.837629+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.837793+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.838002+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.838271+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4399cc5a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4399cd2c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.838471+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399cc000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43802ba40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.453773499s of 26.580440521s, submitted: 32
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43986d860
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c43965fc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4373b70e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43914bc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c4398a92c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.838711+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.839117+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.839332+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389229 data_alloc: 234881024 data_used: 21372928
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.839576+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a6ba1d/0x2b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 27361280 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.839789+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 27344896 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.840275+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 20815872 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.840480+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43980c3c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 22495232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399efc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.840741+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8f000/0x0/0x4ffc00000, data 0x370ea1d/0x37de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
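
[annotation] Comparing "data stored" across heartbeats shows real write traffic behind the background noise: it grows from 0x18c7798 near the top of this section to 0x370ea1d here, and shrinks again in later heartbeats as objects are removed. The delta:

```python
# "data stored" from an earlier heartbeat vs. the one above (hex bytes).
before = int("0x18c7798", 16)
after  = int("0x370ea1d", 16)

print(f"{(after - before) / 2**20:.1f} MiB of object data written "
      f"between the two heartbeats")  # ~30 MiB
```
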
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 22446080 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503030 data_alloc: 234881024 data_used: 22388736
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.840994+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 22896640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.841408+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 20946944 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.841673+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 20226048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.841857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 17096704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4398590e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.942190170s of 12.467167854s, submitted: 119
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c4399cc1e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.842061+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 19963904 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451822 data_alloc: 234881024 data_used: 22401024
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43967b4a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.842465+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.842791+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.843055+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.843288+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.844306+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.844667+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.845174+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.845490+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.845987+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.846445+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.669157028s of 11.117276192s, submitted: 70
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.846825+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.847254+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.847514+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.847805+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.848340+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.848577+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.849015+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.849234+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.849637+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.850077+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.850383+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.850562+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.851264+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.851543+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.852043+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.852264+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.852450+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.852810+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.853362+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.853674+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.854142+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.854577+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.855069+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.855489+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.855793+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.856138+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.856529+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.856955+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.857356+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.857555+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.523187637s of 29.533178329s, submitted: 1
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c439e163c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c439100f00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c439c74960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43990b2c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399ccf00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.857991+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.858337+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.858844+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.859223+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0b40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c4373183c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.859611+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43965e000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.860130+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.860359+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.860848+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c437c10960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3000 session 0x55c43965ef00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.860969+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c43717eb40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 22814720 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.861269+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.861452+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352478 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.861843+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.346131325s of 12.543250084s, submitted: 31
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.862074+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f89f5000/0x0/0x4ffc00000, data 0x2750949/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.862395+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.862802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.863156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355411 data_alloc: 234881024 data_used: 17719296
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.863497+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c437aa14a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef400 session 0x55c439101e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.863747+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4de00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.864153+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.864570+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.865024+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.865410+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.865777+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.866001+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.866346+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.866651+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.867011+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.867523+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.867968+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.868388+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.868796+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.869041+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.869222+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.869441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.869819+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.870182+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.870579+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.871001+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.871308+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.871698+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.871996+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.872442+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.978549957s of 29.158163071s, submitted: 17
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.872770+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.873121+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.873343+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91be000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.873697+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324439 data_alloc: 234881024 data_used: 17666048
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.874085+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.874461+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.874807+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.875018+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b9000/0x0/0x4ffc00000, data 0x23e8949/0x24b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.875321+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327689 data_alloc: 234881024 data_used: 17661952
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.875677+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.876022+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.876266+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.876534+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.963225365s of 13.028007507s, submitted: 11
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.877407+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b8000/0x0/0x4ffc00000, data 0x23e9949/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326709 data_alloc: 234881024 data_used: 17661952
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.877874+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.878206+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.878685+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.879058+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 32628736 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d47000/0x0/0x4ffc00000, data 0x3859959/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4c000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.879436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474162 data_alloc: 234881024 data_used: 17670144
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.879708+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.880159+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.880515+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.880851+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.881225+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473458 data_alloc: 234881024 data_used: 17670144
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.881641+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.882134+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.882506+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.882879+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.567854881s of 15.844666481s, submitted: 24
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437380960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.883263+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 24592384 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437aa0d20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1525586 data_alloc: 251658240 data_used: 37007360
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.883533+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 13565952 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439838000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c4398a90e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43806cd20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa05a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54e5a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.883762+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54ef00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43990b4a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x3e8a4d6/0x3f59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439101c20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.884222+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.884572+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.885061+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4373314a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443055 data_alloc: 234881024 data_used: 30597120
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.885421+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.885761+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.886127+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.886494+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.886835+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443187 data_alloc: 234881024 data_used: 30597120
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.887165+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.887542+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ef400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ef400 session 0x55c439101a40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439100f00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.887865+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398a92c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438015680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 20307968 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.293066978s of 13.490474701s, submitted: 33
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b0b40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee000 session 0x55c4373310e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437330d20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.888368+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aae3c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c437380960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.888629+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507941 data_alloc: 251658240 data_used: 35082240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.889150+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.889371+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.889693+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.890129+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437381e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.891209+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399eec00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524263 data_alloc: 251658240 data_used: 36999168
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.891384+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.891557+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 19685376 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.891729+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.891969+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.892214+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.892380+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.892760+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.893142+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.893486+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.894026+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.894467+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.894816+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.895075+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.895371+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.895595+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.895799+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.896024+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.896237+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.896440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.896646+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.896848+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.897029+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.897259+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.897503+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.897702+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 18366464 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.897954+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.743537903s of 32.836509705s, submitted: 6
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 12705792 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.898205+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 9158656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.898588+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 9125888 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.898874+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.899127+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631561 data_alloc: 251658240 data_used: 41697280
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.899321+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.899536+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 10166272 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.899775+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x470c4e6/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141541376 unmapped: 10018816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.900006+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.900204+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697183 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.900365+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.900548+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.900727+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.900980+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.901170+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211829185s of 13.867918968s, submitted: 144
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.901396+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.901594+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.901802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.902012+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.902212+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.902436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.902633+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.902848+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.903013+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.903201+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.903387+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692287 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.903583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437319680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437aaeb40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439164960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4371734a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.903755+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.983115196s of 13.000985146s, submitted: 2
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439964b40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b63c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b01e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 25296896 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43967b0e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398c2f00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.904006+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 25288704 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43ac4d680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438014d20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43806da40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.904232+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437aa0960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0b40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43912a5a0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b3000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437bdfc20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 25911296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398a9a40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439101e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.904412+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777183 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.904587+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.904767+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.905027+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.905230+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.905469+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777359 data_alloc: 251658240 data_used: 41992192
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.905683+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439892c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c4373303c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.907999+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.160284996s of 10.497438431s, submitted: 42
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.908234+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.908419+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5800 session 0x55c43a54e3c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 25706496 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439c88c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.908722+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807465 data_alloc: 251658240 data_used: 45797376
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 19619840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.908971+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 17293312 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.909155+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 16916480 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.909485+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.909866+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.910325+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.910687+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.911196+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.911607+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.911977+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.912309+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.912545+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.912867+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.913123+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 16621568 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.913527+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.913811+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.914074+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.914401+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.914771+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.915099+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.915449+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.915759+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.916097+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.916295+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.916526+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.395227432s of 26.412460327s, submitted: 3
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.916855+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.917254+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 16457728 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.917497+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.917801+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 16449536 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.918019+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.918187+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.918452+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.918764+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.918987+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.919190+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 151429120 unmapped: 14827520 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.783483505s of 10.946872711s, submitted: 24
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.919358+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 14172160 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889331 data_alloc: 268435456 data_used: 52891648
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.919665+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152961024 unmapped: 13295616 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5df9000/0x0/0x4ffc00000, data 0x57a3519/0x5875000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.920518+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 13172736 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.920751+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.921087+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5de9000/0x0/0x4ffc00000, data 0x57b3519/0x5885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.921355+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1911755 data_alloc: 268435456 data_used: 53923840
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.921499+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4d2c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398583c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.921950+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 13500416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398c32c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.922255+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.922652+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.927709+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1743569 data_alloc: 251658240 data_used: 43810816
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.928098+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c43a54ef00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c43802b0e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.928329+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.789140701s of 12.240109444s, submitted: 93
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437c112c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.928770+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.929199+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.930514+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.930997+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.933108+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.934980+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.935571+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.937506+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.938266+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.938448+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.938724+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.939101+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.939374+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.939739+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.940137+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.940618+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.940927+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.941355+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.941841+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.772724152s of 19.821142197s, submitted: 11
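[Note] The _kv_sync_thread utilization line above says the BlueStore commit thread was idle for 19.772724152 s of the last 19.821142197 s while handling 11 submitted transactions, i.e. busy for only about 48 ms. Plain arithmetic confirming that, nothing Ceph-specific:

    idle, total, submitted = 19.772724152, 19.821142197, 11
    busy = total - idle
    print(f"busy {busy*1000:.1f} ms ({busy/total:.2%}), "
          f"{submitted/total:.2f} txns/s")
    # -> busy 48.4 ms (0.24%), 0.55 txns/s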
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.942008+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.942367+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.942775+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.943232+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625872 data_alloc: 251658240 data_used: 39800832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.943636+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.944072+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437aaef00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c4398a8000
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.944289+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4c780
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.944660+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.945189+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.945489+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.945747+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.946110+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.946440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.946814+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.947181+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.947611+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.947831+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.948079+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.948462+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.948839+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.949304+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.949664+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.950105+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.950448+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.951004+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.951256+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.951639+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 05 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315993858' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
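[Note] The mon leader records each administrative command twice: once as handle_command and once on the audit channel with the caller's address, entity, and the JSON command vector. A hedged sketch extracting those fields from the audit form; the regex is tailored to this one line shape and is an assumption, not a general Ceph audit parser:

    import json
    import re

    AUDIT_RE = re.compile(
        r"from='(?P<caller>[^']*)' entity='(?P<entity>[^']*)' "
        r"cmd=(?P<cmd>\[.*\]): (?P<disposition>\w+)"
    )

    line = ("log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315993858' "
            "entity='client.admin' cmd=[{\"prefix\": \"mgr metadata\"}]: dispatch")
    m = AUDIT_RE.search(line)
    cmd = json.loads(m.group("cmd"))
    print(m.group("entity"), cmd[0]["prefix"], m.group("disposition"))
    # -> client.admin mgr metadata dispatch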
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.952067+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.952455+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.952871+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.953138+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.953610+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.954070+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.954489+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.955024+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.955360+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.955784+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.956173+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.956531+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.956743+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.957177+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.957441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.957699+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.958079+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s
                                            Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
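[Note] This periodic RocksDB stats dump pairs cumulative counters (3600.1 s uptime) with interval counters (600.0 s). The printed rates are rounded to 0.01 MB/s; recomputing from the raw interval figures is a trivial consistency check with no RocksDB internals assumed:

    interval_secs = 600.0
    interval_ingest_mb = 6.39       # "Interval writes: ... ingest: 6.39 MB"
    interval_wal_writes = 1653
    interval_wal_syncs = 687

    print(f"ingest rate {interval_ingest_mb/interval_secs:.3f} MB/s")       # ~0.011
    print(f"writes per sync {interval_wal_writes/interval_wal_syncs:.2f}")  # 2.41

Both results agree with the dump's rounded 0.01 MB/s and 2.41 writes per sync.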
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.958442+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.958796+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.959162+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.959556+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.959948+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.960352+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 22470656 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: mgrc ms_handle_reset ms_handle_reset con 0x55c437983800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:38:34 compute-0 ceph-osd[208828]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: get_auth_request con 0x55c4398b2000 auth_method 0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: mgrc handle_mgr_configure stats_period=5
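[Note] The burst above is one of the few non-periodic sequences in this stretch: the mgr client sees its connection reset, terminates the session with v2:192.168.122.100:6800, starts a new one, and is re-configured with stats_period=5. A journal this chatty is easiest to scan after dropping the per-tick background messages; a filtering sketch, where the NOISE prefixes were chosen by inspecting this log and are not an exhaustive list:

    import sys

    # Messages that repeat every tick in this journal; drop them, keep the rest.
    NOISE = (
        "monclient: tick",
        "monclient: _check_auth_tickets",
        "monclient: _check_auth_rotating",
        "prioritycache tune_memory",
        "rocksdb: commit_cache_size",
        "bluestore.MempoolThread",
        "osd.2 138 heartbeat",
    )

    def significant(line):
        """Keep anything that is not one of the periodic background messages."""
        _, sep, msg = line.partition("]: ")  # strip "Dec 05 ... ceph-osd[pid]: "
        if not sep:                          # continuation lines etc.: keep
            return True
        return not msg.startswith(NOISE)

    for line in sys.stdin:
        if significant(line):
            sys.stdout.write(line)

Fed the raw journal on stdin, this keeps the resets, reconnects, and audit lines while dropping the tick, tune_memory, and heartbeat cadence.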
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.960644+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.960941+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.961153+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.961365+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.961576+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.962080+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.962459+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.962865+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.963396+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.963835+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.964284+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.964654+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.964880+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.965124+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.965351+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.965684+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.966165+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.966583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.967088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.967335+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.967803+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.968281+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.968800+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.969198+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.969648+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.970204+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.970656+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.971105+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.971502+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.972013+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.972426+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.973017+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.973326+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.973810+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.974117+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.974477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.974748+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.975070+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.975340+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.975752+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.976088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.976436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.977064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.977411+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.977750+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.978172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.978386+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.978724+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.979049+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.979346+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.979686+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.267555237s of 100.389305115s, submitted: 23
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c43ac4de00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398a90e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4373292c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439e17680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398a5c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437319e00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.980106+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.980412+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.980656+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.981055+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681860 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.981288+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.981621+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.982052+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c439c88c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c43914a3c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399eec00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.982228+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.982527+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684438 data_alloc: 251658240 data_used: 37978112
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.982789+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.983079+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.983455+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 26755072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.983703+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 26517504 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.983863+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 24928256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.984302+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.984477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.984697+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.985047+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.985439+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.985674+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.985955+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.986294+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.986665+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.986997+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.987188+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee400 session 0x55c439867c20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c437982c00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.987582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.987777+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.988205+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.988436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.988860+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.989373+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.989734+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.556104660s of 31.715848923s, submitted: 28
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.989934+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 24788992 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.990442+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 24723456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.990939+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.991673+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.991959+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.992224+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.992614+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.993180+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.993582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.994152+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.994530+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.994983+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.810025215s of 12.461395264s, submitted: 108
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791816 data_alloc: 251658240 data_used: 49373184
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.995213+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.995534+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159358976 unmapped: 11100160 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5eab000/0x0/0x4ffc00000, data 0x56ed4f9/0x57bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.995955+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159612928 unmapped: 10846208 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.996143+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.996550+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.997399+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.997736+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.998156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.998629+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.999145+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.999477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.999798+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.857128143s of 12.298008919s, submitted: 144
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.000100+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.000353+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.000683+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.000910+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.001158+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.001378+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.001729+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.002148+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.002559+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.003096+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.003453+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.003645+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.004016+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159940608 unmapped: 10518528 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.004325+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.004616+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.005146+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.005372+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159956992 unmapped: 10502144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.005781+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.006149+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.006390+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.006717+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.007046+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.230848312s of 21.238258362s, submitted: 1
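The occasional _kv_sync_thread utilization line quantifies the same idleness directly: the kv sync thread spent 21.230848312 s of a 21.238258362 s window idle and submitted a single batch. As a worked check:

    idle, window, submitted = 21.230848312, 21.238258362, 1
    busy = 100 * (window - idle) / window
    print(f"kv sync thread busy {busy:.3f}% of the window, "
          f"{submitted} batch submitted")   # busy ~0.035%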
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.007231+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.007483+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.007944+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.008260+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.008602+0000)
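One oddity worth flagging: every entry carries the same journald stamp (Dec 05 02:38:34) while the time inside each _check_auth_rotating message advances by almost exactly one second per monclient tick, so the burst reads like buffered output flushed at once rather than lines logged in real time. The embedded value also looks like a freshness cutoff rather than the tick's own wall-clock time (in monclient the cutoff typically trails "now" by up to 30 s, but that offset is an assumption here, not something the log states). Recovering the tick cadence from the embedded stamps:

    from datetime import datetime, timedelta

    stamps = [
        "2025-12-05T02:16:23.007944+0000",
        "2025-12-05T02:16:24.008260+0000",
        "2025-12-05T02:16:25.008602+0000",
    ]
    ticks = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    for a, b in zip(ticks, ticks[1:]):
        print(f"tick interval: {(b - a).total_seconds():.3f}s")   # ~1.000s

    # Hypothetical: if the printed time really is a now-minus-30s cutoff,
    # the last tick above happened around:
    print(ticks[-1] + timedelta(seconds=30))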
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.008831+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.009104+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.009314+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.009497+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.009678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.010007+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.010213+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.010625+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.011125+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.011326+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.011723+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.012162+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.680427551s of 17.691595078s, submitted: 2
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.012436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.012752+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.013072+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.013427+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888652 data_alloc: 268435456 data_used: 51240960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.013678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.013920+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.014243+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.014648+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.014977+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888476 data_alloc: 268435456 data_used: 51240960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.426769+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.427190+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.427596+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.428041+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009906769s of 13.027759552s, submitted: 2
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.428374+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889356 data_alloc: 268435456 data_used: 51240960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.429082+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.429403+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.429574+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.429959+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.430235+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.430525+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.430968+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.431250+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.431481+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.431643+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.432098+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.432472+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.432788+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.433130+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.433482+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.433770+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.434002+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.434414+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.434731+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.434989+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.435344+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.435756+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.845460892s of 22.864625931s, submitted: 4
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.436007+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.436198+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.436568+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.436992+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.437373+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.437725+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.438056+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.438388+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.438797+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.439726+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.440080+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.440423+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.441605+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.442047+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.442440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.442735+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.443141+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.443456+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.443725+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.444108+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.444403+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.444623+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.444857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.446684+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.446940+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.449438+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.450822+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.451491+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.453679+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.455574+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.459798+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.460757+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.461272+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.465280+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.465963+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.467371+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.469640+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.471378+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.474160+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.474569+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.478127+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.478425+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.479407+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.479725+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.480284+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.483686+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.485282+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.485590+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.487863+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.489344+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.490299+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.493756+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.495226+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.496563+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.504544+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.506228+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.506450+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.508240+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.512328+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.513680+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.513974+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.517369+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.519981+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.522591+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.522829+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.525477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.527605+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.529785+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.530288+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.534193+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.535671+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.538522+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.540045+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.543743+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.545766+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.547413+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.548804+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.550658+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.552256+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.552967+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.553451+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.554232+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.554607+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.555118+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.555440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.555813+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.556244+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.556561+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.557089+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.557479+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.558005+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.558422+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.558939+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.559365+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.559606+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.559945+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.560282+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.560697+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.561103+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.561412+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.561789+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.562194+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.562592+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.563052+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.563398+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.563788+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.564457+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.565076+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.565878+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.566481+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.566880+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.567604+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.568128+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.568517+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.569069+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.569665+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.570090+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.570455+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.570799+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.571159+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.571583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.571964+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.572316+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.572683+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.573112+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.573473+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.574050+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.574368+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.574794+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.575302+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.575667+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.576111+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.576547+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.577006+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.577515+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.577852+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.578200+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.578549+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.579099+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.579679+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.580086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.580555+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.581005+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.581532+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.582137+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.595287+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.595760+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.596297+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.596692+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.597053+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.597318+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.597728+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.598187+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.598548+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.599070+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.599423+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.599844+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.600324+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.600574+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.601125+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.601348+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.601665+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.602308+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.602678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.603093+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.603329+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.603589+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.603977+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.604301+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.604574+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.604791+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158867456 unmapped: 11591680 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.605009+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.605203+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.605463+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.608252+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.608617+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.609172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.609482+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.609866+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.610349+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.610635+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.610910+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.611335+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.611766+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.612154+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.612540+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.612985+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.613410+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.613820+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.614312+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.614734+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.615230+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.615649+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.615979+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.616446+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.616834+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.617212+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.617626+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.618091+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.618522+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.619055+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.619432+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.619970+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.620432+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.620849+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.621250+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.621503+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.621837+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.622150+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.622419+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.622647+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.623094+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.623394+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.623717+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.624114+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.814193726s of 215.831085205s, submitted: 14
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159014912 unmapped: 11444224 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.624529+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.624843+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.625161+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.625467+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.625835+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1894956 data_alloc: 251658240 data_used: 51838976
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.626169+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.626516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.626866+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.627274+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.627631+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.628121+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.628496+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.629032+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.629426+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.629844+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.630374+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.630650+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.631238+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.631645+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.632143+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892660 data_alloc: 251658240 data_used: 51843072
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.632539+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.632778+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.156604767s of 22.175872803s, submitted: 2
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.633098+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.633416+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.633842+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.634114+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.634494+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 11403264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.634738+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.635072+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.635382+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.635832+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.636127+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.636516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.636802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.637119+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.637495+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.559044838s of 13.568158150s, submitted: 1
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.637999+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.638294+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.638472+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.638872+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.639357+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.639760+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.640141+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.640516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.640989+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.641322+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.641586+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.641859+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.642093+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.642433+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.642719+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.334384918s of 15.360384941s, submitted: 14
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.643116+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.643519+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.643758+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.644043+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.644416+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.644625+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.644940+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.645237+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.645643+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.646196+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.646663+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.647208+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.647525+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.648084+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.648511+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.648857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.649168+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.649470+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.650089+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.650415+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.651216+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.651591+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.652023+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.652497+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.653097+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.653518+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.653973+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.654468+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.654723+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.655124+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.655552+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.655850+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.656425+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.656870+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.657405+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.657761+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.658214+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.658642+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.658986+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.659690+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.660233+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.661030+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.661183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.661529+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.662034+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.662405+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.662829+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.663131+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.663515+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.663854+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.664304+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.664680+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.665023+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.665371+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.666189+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.666562+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.667068+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.667518+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.668088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.668410+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.668785+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.669007+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.669268+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.669614+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.670075+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.670375+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.670745+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.671174+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.671516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.671846+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.672161+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.672546+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.673005+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.673441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.673846+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.674623+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.675080+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.675323+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.675678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.675984+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.676398+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.676639+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.676835+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.677218+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.677619+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.678072+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.678442+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.678814+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.679163+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.679563+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.680051+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.680426+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.680770+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.681200+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.681429+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.682133+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.682389+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.682662+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.683192+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.683403+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.683711+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.684156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.684495+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.685045+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.685360+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.685653+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.686155+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.686501+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.686824+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.687118+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.687506+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.687734+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.687950+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.688331+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.688789+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.689140+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.689620+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.689861+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.690195+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.690575+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.690957+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.691342+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.691669+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.692057+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.692268+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.692658+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.693057+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.693352+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.693737+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.694303+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.694967+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.695221+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.695567+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.696055+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.696449+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.696813+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.697203+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.697637+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.698057+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.698259+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.698671+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.699032+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.699459+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.699737+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.700022+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
                                            Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.701035+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.701370+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.701590+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.702009+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.702265+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.702608+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.702849+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.703108+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.703360+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.703649+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.704084+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.704400+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.705402+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.705795+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 11214848 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.706215+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.706583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.707044+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.708021+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.708832+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.709226+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.709767+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.710240+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.710586+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.711086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.711430+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.711784+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.712443+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.712877+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.713340+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.713561+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.714009+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.714480+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.715129+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.715403+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.715822+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.716186+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.716537+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.716987+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.717416+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.717743+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.718097+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.718521+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.718952+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.719277+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.719651+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.720061+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.720479+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.720779+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.722421+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.722804+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.723008+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 196.116027832s of 196.124725342s, submitted: 1
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c43965fa40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.723720+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c4398c23c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.724264+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439c75c20
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.724761+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.725151+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.725457+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.725975+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.726381+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.726815+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.727183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.727376+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.727719+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.728063+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.728430+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.728756+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321928978s of 13.440460205s, submitted: 22
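
The _kv_sync_thread line reports BlueStore's kv commit thread, which batches transaction commits into RocksDB: 13.32 s idle out of a 13.44 s window with 22 transactions submitted. The derived busy fraction makes the (very light) load explicit; this is plain arithmetic on the printed values:

    idle, span, submitted = 13.321928978, 13.440460205, 22

    busy = span - idle
    print(f"kv_sync busy {busy / span:.2%} of the window, "
          f"~{busy / submitted * 1e3:.1f} ms per submitted transaction")
    # -> kv_sync busy 0.88% of the window, ~5.4 ms per submitted transaction
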
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c438015680
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
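
Here the messenger churns: ms_handle_reset tears down a session on a connection, and handle_auth_request issues a challenge as a peer (re)connects. Note that 0x55c438edbc00 appears earlier as a reset con, here as the challenged con, and shortly after as a reset con again with a different session pointer, so connection objects are evidently reused across sessions. A small tally of resets and challenges per con pointer, fed with lines copied from this section:

    import re
    from collections import Counter

    LINES = [
        "osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439c75c20",
        "monclient: handle_auth_request added challenge on 0x55c438edbc00",
        "osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4380143c0",
    ]

    events = Counter()
    for ln in LINES:
        ptr = re.search(r"0x[0-9a-f]+", ln).group(0)  # first pointer = con
        events[(ptr, "reset" if "ms_handle_reset" in ln else "challenge")] += 1

    for (ptr, kind), n in sorted(events.items()):
        print(ptr, kind, n)
    # -> 0x55c438edbc00 challenge 1
    # -> 0x55c438edbc00 reset 2
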
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1722960 data_alloc: 234881024 data_used: 44167168
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.729039+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4380143c0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.729396+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.729715+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.730132+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.730374+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.730668+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.731122+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.731496+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.731749+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.732141+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.732449+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c43989d400
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.228912354s of 11.431051254s, submitted: 36
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142688256 unmapped: 27770880 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.733210+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
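
This is the OSD catching up on the cluster map: _renew_subs re-arms the monitor subscription, _send_mon_message goes to mon.compute-0, and each handle_osd_map line names the epoch range carried by the message, the epoch the OSD currently has, and the range the sender holds. Over this section the OSD walks from epoch 138 to 142 one or two maps at a time. A line-parsing sketch whose regex simply mirrors the message format above:

    import re

    line = "osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]"

    first, last, have, src_lo, src_hi = map(int, re.search(
        r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
        line).groups())

    print(f"received maps {first}..{last}; "
          f"{src_hi - have} epoch(s) behind the source")
    # -> received maps 139..139; 1 epoch(s) behind the source
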
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 45506560 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 ms_handle_reset con 0x55c43989d400 session 0x55c439867860
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.733515+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4398b2800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 45465600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.734001+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x18d00a7/0x19a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _renew_subs
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 140 ms_handle_reset con 0x55c4398b2800 session 0x55c4373310e0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c4399ee800
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 62226432 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.735234+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262617 data_alloc: 218103808 data_used: 11313152
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.735661+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 ms_handle_reset con 0x55c4399ee800 session 0x55c437c10960
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.736016+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.736512+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.736963+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.737239+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259196 data_alloc: 218103808 data_used: 11313152
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.737720+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.738078+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.738433+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351562500s of 11.958790779s, submitted: 94
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.738985+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc5000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: get_auth_request con 0x55c437982000 auth_method 0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.739347+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.739668+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 62062592 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.740523+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 62038016 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.740965+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.741389+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.742033+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.742767+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.743183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.743464+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.744183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.744485+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.745220+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.745745+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.746313+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.746837+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.747189+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.747597+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.748099+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.748449+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.749065+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.749748+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.750139+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.750564+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.751002+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.751417+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.751802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.752149+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.752661+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.753053+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.753489+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.753737+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.754110+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.754492+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.754664+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.755188+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.755605+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.756129+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.756727+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.757183+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.757754+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.758166+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.758582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.759107+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.759487+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.759973+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.760384+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.760693+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.761185+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.761529+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.762085+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.762460+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.762858+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.763281+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.763596+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.764040+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.764528+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.765238+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.766116+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.766695+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.767500+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.768026+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.768734+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.769151+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.769520+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.769782+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:39.770477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.771020+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.771493+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.772055+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.772559+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.773077+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.773308+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.773646+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.774172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.774478+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.774864+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.775308+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.775692+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.776008+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.776321+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.776627+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.776981+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.777203+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 61841408 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.777457+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125493248 unmapped: 61751296 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.777652+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.777996+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 61849600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:00.778251+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf dump' '{prefix=perf dump}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 61915136 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf schema' '{prefix=perf schema}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:01.779971+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:02.780236+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:03.780527+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:04.780741+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:05.782028+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:06.782329+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:07.782542+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:08.782715+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:09.782937+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:10.783153+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:11.783370+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:12.783593+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:13.783798+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:14.784018+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:15.784185+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:16.784532+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:17.785162+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:18.785678+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:19.786086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:20.786653+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:21.787948+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:22.789105+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:23.789589+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:24.790280+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:25.791265+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:26.792778+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:27.793947+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:28.794704+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:29.795112+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:30.795477+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:31.795824+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:32.796088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:33.796331+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:34.796669+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:35.802339+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:36.802567+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:37.802962+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:38.803341+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:39.803690+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:40.804064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:41.804440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:42.804822+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:43.805230+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:44.805504+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:45.805751+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:46.806179+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:47.806549+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:48.806857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:49.807265+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:50.807683+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:51.808136+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:52.808552+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:53.809058+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:54.809429+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:55.809684+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:56.810088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:57.810380+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:58.810802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:59.811159+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:00.811530+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:01.811984+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:02.812410+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:03.812857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:04.813258+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:05.813695+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:06.814053+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:07.814325+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:08.814730+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:09.815007+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:10.815361+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:11.815699+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:12.816119+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:13.816507+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:14.816720+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:15.817143+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:16.817516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:17.818039+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:18.818419+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:19.818843+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:20.819138+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:21.819521+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:22.819859+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:23.820286+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:24.820685+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:25.821092+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:26.821420+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:27.822025+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:28.822342+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:29.822701+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:30.823115+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:31.823441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:32.823790+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:33.824209+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:34.824589+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:35.825000+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:36.825358+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:37.825693+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:38.826015+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:39.826460+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:40.826795+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:41.827216+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:42.827650+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:43.828143+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:44.828500+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:45.828726+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:46.829117+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:47.829482+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:48.829863+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:49.830339+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:50.830751+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:51.831164+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:52.831476+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:53.832044+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:54.832396+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:55.832814+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:56.833242+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:57.833622+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:58.834148+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:59.834541+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:00.834822+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:01.835062+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:02.835316+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:03.835623+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:04.836333+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:05.836804+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:06.837210+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:07.837609+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:08.838116+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:09.838547+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:10.839020+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:11.839429+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:12.840019+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:13.840327+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:14.840561+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:15.841059+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:16.841492+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:17.842025+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:18.842436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:19.842826+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:20.843236+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:21.843649+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:22.844086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:23.844532+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:24.845007+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:25.845362+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:26.845723+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:27.846045+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:28.846427+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:29.846638+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:30.847124+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:31.847509+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:32.847988+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:33.848434+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:34.848798+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:35.849161+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:36.849534+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:37.849989+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:38.850344+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:39.850752+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:40.851089+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:41.851440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:42.851762+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:43.852138+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:44.852436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:45.852766+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:46.853098+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:47.853436+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:48.853711+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:49.854084+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:50.854468+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:51.854818+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:52.855067+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:53.855453+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:54.855876+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:55.856386+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:56.856652+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:57.857137+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:58.857525+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:59.857984+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:00.858553+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:01.859106+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:02.859396+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:03.859728+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:04.860064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:05.860330+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:06.860749+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:07.861174+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:08.861587+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:09.862036+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:10.862426+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:11.862778+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:12.863148+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:13.863597+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:14.864021+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:15.864338+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:16.864837+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:17.865210+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:18.865559+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:19.865866+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:20.866325+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:21.866725+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:22.867044+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:23.867356+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:24.867789+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:25.868130+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:26.868468+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 ms_handle_reset con 0x55c437982c00 session 0x55c437bdfa40
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: handle_auth_request added challenge on 0x55c438edbc00
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:27.868784+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:28.869187+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:29.869629+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:30.869848+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:31.870198+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:32.870547+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:33.871026+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:34.871538+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:35.871957+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:36.872364+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:37.872779+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:38.873195+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:39.873652+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:40.874018+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:41.874392+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:42.874680+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:43.875979+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:44.876471+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:45.877046+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:46.877453+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:47.877993+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:48.878341+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:49.878845+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:50.879306+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:51.879763+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:52.880192+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:53.880646+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:54.881119+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:55.881605+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:56.882117+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:57.882366+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
[... 458 lines elided: the same 1 Hz cycle of "monclient: tick", "monclient: _check_auth_tickets", and "monclient: _check_auth_rotating have uptodate secrets" (expiry advancing one second per cycle, from 2025-12-05T02:30:58 through 2025-12-05T02:32:32) with identical "prioritycache tune_memory" lines, interleaved with unchanged osd.2 heartbeat osd_stat entries and rocksdb commit_cache_size / bluestore.MempoolThread _resize_shards entries at roughly 5-second intervals; all numeric field values identical to the cycle shown above ...]
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:33.919399+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:34.919754+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:35.920181+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:36.920534+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:37.920967+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:38.921369+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:39.921774+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:40.922134+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:41.922543+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:42.923185+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:43.923637+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:44.924083+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:45.924501+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:46.925784+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:47.926979+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:48.928659+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:49.930326+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:50.932037+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:51.933839+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:52.935836+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:53.938415+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:54.940034+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:55.941635+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:56.942964+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:57.944424+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:58.945193+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:59.945584+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:00.946136+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:01.946530+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:02.946991+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:03.947446+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:04.947869+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:05.949023+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:06.950398+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:07.951857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:08.953526+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:09.954494+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:10.955104+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:11.955361+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:12.955980+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:13.957156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:14.958717+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:15.959624+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:16.960094+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:17.960478+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:18.960824+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:19.961231+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:20.961627+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:21.962062+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:22.962430+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:23.963020+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:24.963456+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:25.963994+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:26.964399+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:27.964836+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:28.965511+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:29.967128+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:30.968869+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:31.970972+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:32.972660+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:33.974984+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:34.976273+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:35.978075+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:36.979763+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:37.981703+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:38.982837+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:39.983790+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:40.984300+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:41.984702+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:42.985116+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:43.985548+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:44.986004+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:45.986294+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:46.986691+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:47.987059+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:48.987506+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:49.987794+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:50.988144+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:51.988505+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:52.988867+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:53.989373+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:54.989749+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:55.990090+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:56.990592+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:57.991156+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:58.991571+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:59.992104+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:00.992540+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:01.993056+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:02.993502+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:03.994132+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:04.994535+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2729 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 481 writes, 1444 keys, 481 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                            Interval WAL: 481 writes, 225 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
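
The periodic DB Stats dump is the one place in this window with actual workload numbers: about 10K writes over the 4800 s uptime, ~0.03 GB ingested, and zero write stalls. The derived per-interval figures can be reproduced from the raw counters; a quick check, assuming the 600 s interval stated in the dump:

    # Sketch: re-derive the interval figures printed by RocksDB.
    writes, syncs, interval_s = 481, 225, 600.0
    print(f"{writes / syncs:.2f} writes per sync")  # 2.14, as reported

    ingest_mb = 0.40
    print(f"{ingest_mb / interval_s:.4f} MB/s")     # ~0.0007, rounds to 0.00

In other words, a near-idle OSD: roughly one commit group per WAL write and well under 1 KB/s of sustained ingest.
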
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:05.995014+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:06.995434+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:07.995875+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:08.996497+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:09.996737+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:10.997049+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:11.997453+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:12.997847+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:13.998441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:14.999002+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:15.999490+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:16.999826+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:18.000166+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:19.000479+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:20.000829+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:21.001330+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:22.001669+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:23.002037+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:24.002792+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:25.003157+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:26.003581+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:27.005701+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:28.006172+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:29.006423+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:30.006810+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:31.007291+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:32.007649+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:33.008064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:34.008471+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:35.008871+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
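
This is the only _send_mon_message entry in this stretch of the log; the address uses Ceph's msgr2 notation (protocol version, IP, port 3300 for the v2 messenger, and a nonce). A trivial split, purely illustrative:

    # Sketch: pull apart the msgr2 address from the log line.
    addr = "v2:192.168.122.100:3300/0"
    proto, host, rest = addr.split(":")
    port, nonce = rest.split("/")
    print(proto, host, port, nonce)  # v2 192.168.122.100 3300 0
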
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:36.009357+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:37.009744+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:38.010121+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:39.010518+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:40.010983+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:41.011398+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:42.011813+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:43.012131+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:44.012496+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:45.013009+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:46.013441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:47.013868+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:48.014456+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:49.015776+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:50.016217+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:51.016547+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:52.017064+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:53.017487+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:54.017997+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:55.018409+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:56.018787+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:57.019267+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:58.019712+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:59.020129+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:00.020564+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:01.021160+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:02.021706+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:03.022179+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:04.022709+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:05.023041+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:06.023377+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:07.023756+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:08.024108+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:09.024496+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:10.024879+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:11.025304+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:12.025646+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:13.025857+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:14.026134+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:15.027147+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:16.027473+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:17.027818+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:18.028199+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:19.028522+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:20.028862+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:21.029316+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:22.029661+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:23.030057+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:24.030352+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:25.030687+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:26.031071+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:27.031429+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:28.031828+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:29.032159+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:30.032579+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:31.033109+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:32.033521+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:33.033960+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:34.034338+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:35.034765+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 601.736877441s of 602.376464844s, submitted: 104
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:36.035052+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 62291968 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:37.035410+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 62251008 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:38.035806+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:39.036169+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:40.036516+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:41.036852+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:42.037244+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:43.037582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:44.038159+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:45.038441+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:46.038851+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:47.039268+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:48.039684+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:49.040073+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:50.040499+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:51.040836+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:52.041275+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:53.041668+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:54.042124+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:55.042571+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:56.043030+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:57.043443+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:58.044132+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:59.044555+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:00.044987+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:01.045378+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:02.045833+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:03.046232+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:04.046679+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:05.047122+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:06.047399+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:07.047800+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:08.048164+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:09.048554+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:10.049090+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:11.049506+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:12.050023+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:13.050512+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:14.050994+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:15.051489+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:16.052802+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:17.053541+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:18.054033+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:19.054361+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:20.054722+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:21.055148+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:22.055651+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:23.055996+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:24.056431+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:25.057086+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:26.057401+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:27.058471+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:28.058811+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:29.059287+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:30.059671+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:31.060205+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:32.060502+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:33.061027+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:34.061556+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:35.062321+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:36.062746+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:37.063255+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:38.063644+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:39.064048+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:40.064454+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:41.064791+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:42.065204+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:43.065630+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:44.066115+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:45.066493+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:46.066864+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:47.067359+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:48.067773+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:49.068141+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:50.068522+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:51.068984+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:52.069463+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:53.069849+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:54.070406+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:55.070846+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:56.071145+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:57.071550+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:58.072364+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:59.072825+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:00.073212+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:01.073546+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:02.073811+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:03.074247+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:04.074742+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:05.075113+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:06.075556+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:07.076123+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:08.076574+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:09.077122+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:10.077691+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:11.078088+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:12.081322+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:13.085260+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:14.085856+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:15.086361+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:16.086689+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:17.087252+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:18.087743+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:19.088307+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:20.088800+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:21.089177+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:22.090028+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:23.090301+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:24.090680+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:25.091106+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:26.092340+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:27.093314+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:28.094040+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:29.094582+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:30.095196+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:31.096344+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:32.096723+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:33.097073+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:34.097583+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:35.097972+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:36.098296+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:37.098708+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:38.099103+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:39.099467+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:40.099832+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:41.100228+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:42.100625+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:43.101251+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:44.102131+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:45.103027+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:46.103465+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:47.103946+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:48.104279+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:49.104708+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:50.105087+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:51.105440+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:52.105833+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:53.106014+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:54.106432+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:55.106845+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:56.107281+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:57.107492+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:58.107683+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:59.107875+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:00.108181+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:01.108482+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 61980672 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:02.108655+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 61980672 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: tick
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_tickets
Dec 05 02:38:34 compute-0 ceph-osd[208828]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:03.108988+0000)
Dec 05 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec 05 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 62414848 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2162378691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mon[192914]: from='client.15935 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mon[192914]: from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/315993858' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 05 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799941036' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15943 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:34 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 05 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 05 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859075767' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15947 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15951 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 05 02:38:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120978798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/799941036' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: from='client.15943 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3859075767' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: from='client.15947 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:35 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4120978798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15953 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 05 02:38:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016016635' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 02:38:36 compute-0 nova_compute[349548]: 2025-12-05 02:38:36.120 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:36 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15957 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:36 compute-0 ceph-mon[192914]: from='client.15951 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:36 compute-0 ceph-mon[192914]: from='client.15953 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:36 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3016016635' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 05 02:38:36 compute-0 ceph-mon[192914]: from='client.15957 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 05 02:38:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547954620' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15965 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:38:37 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:37.020+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 05 02:38:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 05 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215335810' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3547954620' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: from='client.15965 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3215335810' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 05 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514639259' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 05 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296937051' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 05 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173020999' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:38 compute-0 nova_compute[349548]: 2025-12-05 02:38:38.238 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 05 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738194113' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.333 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 05 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
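
The ceilometer_agent_compute block above is one complete polling cycle: each pollster plugin is registered against a shared ThreadPoolExecutor, its local_instances discovery runs, and because no instances are running on this host every meter is skipped before the cycle is marked finished. A minimal sketch of that load-discover-skip flow follows, assuming a hypothetical entry-point namespace and pollster plugins (nothing below is taken from ceilometer's actual manager.py beyond the log messages above):

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load pollster plugins registered under an (assumed) entry-point namespace.
    mgr = extension.ExtensionManager(
        namespace='example.poll.compute',  # hypothetical namespace
        invoke_on_load=True,               # instantiate each plugin on load
    )
    executor = ThreadPoolExecutor(max_workers=4)

    def run_pollster(ext):
        # A real agent runs the discovery method first; with no local
        # instances there is nothing to sample, so the pollster is skipped.
        resources = []  # discovery found nothing this cycle
        if not resources:
            print(f"Skip pollster {ext.name}, no resources found this cycle")
            return
        ext.obj.get_samples(resources)

    for ext in mgr:
        executor.submit(run_pollster, ext)
    executor.shutdown(wait=True)
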
Dec 05 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 05 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614369423' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3514639259' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1296937051' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3173020999' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1738194113' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2614369423' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 05 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405470282' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 05 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005288873' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
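
The ceph-mon lines show a remote client.admin session (most likely the cephadm/mgr refresh loop at 192.168.122.100) issuing read-only management commands; each one is dispatched and recorded on the audit channel at DBG. The same "mgr metadata" command seen in the audit trail can be sent programmatically through the librados Python binding; the conffile path and client name below are assumptions for a stock deployment:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.admin')  # same entity as the audit log
    cluster.connect()
    cmd = json.dumps({"prefix": "mgr metadata", "format": "json-pretty"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')  # empty input buffer
    print(outbuf.decode())
    cluster.shutdown()
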
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.934615+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.935064+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.749129295s of 10.003149033s, submitted: 39
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.935482+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 41992192 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.935962+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 41984000 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
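
In the heartbeat's store_statfs tuple, the first and third hex fields are available and total bytes (the middle one is internally reserved space), and the data pair is bytes stored versus bytes allocated. Decoding the values above, assuming that field layout:

    # store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, ...)
    GiB = 1 << 30
    avail, total = 0x4fa4f2000, 0x4ffc00000
    stored, allocated = 0x10b279c, 0x117c000
    print(f"free {avail / GiB:.2f} GiB of {total / GiB:.2f} GiB")  # ~19.91 of ~20.00
    print(f"data {stored / 1024:.0f} KiB stored, {allocated / 1024:.0f} KiB allocated")
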
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.936367+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.936732+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.937051+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.937405+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.937712+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.938140+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.938545+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.939010+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.765543938s of 10.422777176s, submitted: 87
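
The _kv_sync_thread utilization line is a duty-cycle report for BlueStore's key-value sync thread: over a roughly 10.42 s window it sat idle for about 9.77 s while committing 87 submitted transactions, i.e. it was busy only about 6% of the time. The arithmetic, taken directly from the line above:

    idle, window = 9.765543938, 10.422777176
    busy = 1 - idle / window
    print(f"kv_sync busy: {busy:.1%}")  # ~6.3%
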
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac72c00 session 0x56484af661e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484887c000 session 0x56484af674a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.939299+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 41943040 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054182 data_alloc: 218103808 data_used: 7081984
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.939616+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484abb5000 session 0x56484aeee3c0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.940101+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.940850+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.941277+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.941827+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.942232+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.942589+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.943115+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.943481+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.944083+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.944474+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.945000+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.945403+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.945697+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.946113+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.946529+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.947233+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.947643+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.948159+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.948655+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.949109+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.949495+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.950081+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.950482+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.951030+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.951402+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.951678+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.952290+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.952736+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.953256+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.953660+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.954210+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.954633+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.955088+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.955460+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.956040+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.956477+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.957060+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.957501+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.957987+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.958235+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.958587+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.959103+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.959547+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.960100+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.960545+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.960866+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.961371+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.961874+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.962527+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.962954+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.963402+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.963788+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.964178+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.964555+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.965004+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.965288+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.965679+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.966184+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.966827+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.967175+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.967548+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.968010+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.968382+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.968812+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.969151+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.969550+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.969734+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.970244+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.970714+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.971084+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.971389+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.971741+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.972132+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.972460+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.972647+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.973163+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.973526+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.973796+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.974229+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.974598+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.139320374s of 81.423446655s, submitted: 52
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051319 data_alloc: 218103808 data_used: 7057408
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.975057+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 45916160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.975444+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 45907968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa072000/0x0/0x4ffc00000, data 0x1531265/0x15fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.975840+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 45899776 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484aafe1e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.976350+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.976805+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.977124+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.977500+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.978053+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.978539+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.978879+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.979316+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.979627+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.979865+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.980287+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.980668+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.981071+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.981487+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.981865+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.982465+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.983070+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.983471+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.984034+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.984423+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.984750+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.985114+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.985869+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.986372+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.986810+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.987482+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.987875+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.988358+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.988717+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.989076+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.989818+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.990167+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.990590+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.991222+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.991604+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.992035+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.992365+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.992648+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.993150+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.993463+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.993838+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.994223+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.994629+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.995148+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.995471+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.995841+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.996228+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.996632+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.997054+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.997404+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.997695+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.998129+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.998552+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.998840+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.999008+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x5648493263c0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484ac72000 session 0x564847f9da40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484a5e8f00
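This run of handle_auth_request / ms_handle_reset pairs is the signature of short-lived inbound connections: each new peer is issued a cephx challenge, and when the peer drops the connection the OSD logs ms_handle_reset with the same connection pointer plus the session it tears down. Note the pointers are recycled (0x56484804a000 reappears below with different sessions), so any pairing only holds within one challenge/reset cycle. A sketch that pairs the two messages by pointer; the message shapes are taken from this log, the pairing logic itself is illustrative:

    import re

    CHALLENGE = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
    RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    def pair_challenges(lines):
        """Yield (con, session) for each challenge later reset on the same con."""
        pending = set()
        for line in lines:
            if m := CHALLENGE.search(line):
                pending.add(m.group(1))
            elif (m := RESET.search(line)) and m.group(1) in pending:
                pending.discard(m.group(1))
                yield m.group(1), m.group(2)

    log = [
        "monclient: handle_auth_request added challenge on 0x56484804a000",
        "osd.1 134 ms_handle_reset con 0x56484804a000 session 0x5648493263c0",
    ]
    print(list(pair_challenges(log)))
    # [('0x56484804a000', '0x5648493263c0')]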
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.999225+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 45842432 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x56484a5e9860
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.999571+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.769268036s of 59.916522980s, submitted: 11
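The _kv_sync_thread line is a utilization report for BlueStore's key-value commit thread: over the last ~60 s window it sat idle for 59.77 s and submitted 11 transactions, i.e. it was busy about 0.25% of the time. The arithmetic, in Python:

    idle, window, submitted = 59.769268036, 59.916522980, 11
    busy = window - idle
    print(f"busy {busy:.3f}s of {window:.1f}s ({busy / window:.2%}); "
          f"{submitted} txns, {busy / submitted * 1e3:.1f} ms per commit on average")
    # busy 0.147s of 59.9s (0.25%); 11 txns, 13.4 ms per commit on average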
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 42229760 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
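The three lines above are an osdmap refresh: the subscription renewal goes to mon.compute-0 over msgr2 (the v2: address on port 3300), and the monitor pushes epoch range [135,135] to an OSD that holds 134, while the sender keeps the full history [1,135]; the same pattern repeats below for epochs 136 and 137. An illustrative sketch of the range bookkeeping (not Ceph's actual code), assuming maps must be applied contiguously:

    def epochs_to_apply(have, first, last):
        """Contiguous epochs an OSD at `have` can apply from a pushed [first, last]."""
        if first > have + 1:
            return range(0)  # gap before `first`: missing maps must be fetched first
        return range(have + 1, last + 1)

    print(list(epochs_to_apply(134, 135, 135)))  # [135]
    print(list(epochs_to_apply(135, 136, 136)))  # [136]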
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564848a7a5a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198393 data_alloc: 218103808 data_used: 11743232
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.999955+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9bf9000/0x0/0x4ffc00000, data 0x19a4d92/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 42246144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848ec7c20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26b40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a5e0d20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeed680
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.001144+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484abb5000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564847ceed20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x56484a845680
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 41959424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeefa40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484aeee5a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72c00 session 0x56484a842960
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3bc00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564848d3bc00 session 0x56484a5ec5a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848005e00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.001452+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a78fa40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 40763392 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.001625+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x5648493261e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8dc1000/0x0/0x4ffc00000, data 0x27d99c5/0x28ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.001816+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x564847d114a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72c00 session 0x564847d11c20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x56484ab0e1e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0ef00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.002330+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370612 data_alloc: 218103808 data_used: 11743232
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e780
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x56484a845860
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76000 session 0x56484a5e0d20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564848e26b40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484a9fe000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x5648474ef4a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.002604+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x5648474ee5a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484aa26400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484aa26400 session 0x564847d10960
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564847d10f00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.002803+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e3c0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0eb40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 40009728 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f883f000/0x0/0x4ffc00000, data 0x2d5c5d5/0x2e2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.003060+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564849db2000 session 0x56484a845a40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac72400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x56484a9fe5a0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.003342+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8813000/0x0/0x4ffc00000, data 0x2d86618/0x2e5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.003729+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378599 data_alloc: 218103808 data_used: 11751424
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 40017920 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.003959+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.355422020s of 12.096014023s, submitted: 115
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 38879232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.004133+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 34832384 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.004404+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.004696+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484aa50780
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484813c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.005065+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1485893 data_alloc: 234881024 data_used: 26185728
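Compared with the _resize_shards lines earlier in this window, the allocations have finally moved: kv and meta each gave up 16 MiB while data gained 16 MiB, and the second rocksdb ratio shifted from 1/18 (0.0555556) to, numerically, 4/71 (0.056338). Here the autotuner moves memory in 16 MiB (2^24-byte) steps as data_used climbs. The deltas, with both sets of figures copied from this log:

    MIB = 1 << 20
    before = {"kv": 1207959552, "kv_onode": 234881024,
              "meta": 1140850688, "data": 218103808}
    after = {"kv": 1191182336, "kv_onode": 234881024,
             "meta": 1124073472, "data": 234881024}
    for name in before:
        print(f"{name:8s} {(after[name] - before[name]) / MIB:+4.0f} MiB")
    # kv -16, kv_onode +0, meta -16, data +16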
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.005415+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484ab0f0e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484aeee1e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848d3a400
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848d3a400 session 0x564849327c20
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564847f98b40
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.005701+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484a8450e0
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f87e6000/0x0/0x4ffc00000, data 0x2db208e/0x2e88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 33431552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.006113+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.006509+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.006721+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451549 data_alloc: 234881024 data_used: 24010752
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.006956+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 32473088 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.007178+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31965184 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.007380+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.007582+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:38 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.007842+0000)
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.008112+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.008303+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.008535+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.008823+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.009039+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.009229+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.009435+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.009660+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.009848+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.010124+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.010327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.010542+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.010766+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.486093521s of 26.861923218s, submitted: 69
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x564849c06960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af77000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484813d860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76400 session 0x56484aa512c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564848ec74a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x56484a8421e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.011246+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.011652+0000)
Dec 05 02:38:39 compute-0 crontab[493680]: (root) LIST (root)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550538 data_alloc: 234881024 data_used: 28581888
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.011833+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.012016+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.012549+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 23494656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.013086+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 22577152 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.013350+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849bfac00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x5648481c5c20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620314 data_alloc: 234881024 data_used: 29696000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af77000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 125485056 unmapped: 25165824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.013523+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 23896064 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.013756+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 23224320 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.014190+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 19136512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.014395+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.014584+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1686404 data_alloc: 251658240 data_used: 36605952
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.745903969s of 12.423884392s, submitted: 151
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484ab36960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477c00 session 0x56484a5ec1e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.014973+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564849e34f00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.015352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.015832+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f80a7000/0x0/0x4ffc00000, data 0x34f3ff9/0x35c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134373376 unmapped: 16277504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.016062+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 17661952 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.016403+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1658098 data_alloc: 234881024 data_used: 30609408
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 17498112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x3fd5ff9/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,3])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.016646+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.016828+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.017285+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.017631+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.018026+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665698 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.018391+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.018832+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.019379+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.019656+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.259255409s of 13.825000763s, submitted: 121
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.020033+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.020256+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.020672+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.021116+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.021414+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.021657+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.021979+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.022515+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.022983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.023319+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.023509+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.023861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.511526108s of 12.544201851s, submitted: 4
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.024101+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.024475+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.024832+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.025160+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1662094 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.025573+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.026007+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.026197+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.026525+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.026751+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661650 data_alloc: 234881024 data_used: 31031296
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.027201+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.027592+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 18022400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172998428s of 10.241044044s, submitted: 10
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.028153+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.028556+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.028967+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664114 data_alloc: 234881024 data_used: 31019008
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648481472c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564848021e00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484aeed860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeecb40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.029299+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeede00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484a9b8000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484a9b9860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847cef860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648488f0780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.029716+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.030054+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.030352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.030639+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484af3d680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484a9b85a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1719347 data_alloc: 234881024 data_used: 31019008
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.030945+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.031327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484887c000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.484535217s of 10.727886200s, submitted: 38
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.031678+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.032261+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x564847f9d680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x564848143680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.032460+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564847f9cb40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468530 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.032835+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.033227+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484a91c000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484804a000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x56484a91d2c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.033627+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564848143c20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849db2000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x5648481434a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.034299+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af76c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849477400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.034576+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473527 data_alloc: 234881024 data_used: 17731584
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.034846+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.035022+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922307968s of 10.159023285s, submitted: 48
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.035718+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x5648492f1860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847f983c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 27738112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8406000/0x0/0x4ffc00000, data 0x3194ff9/0x3268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.036103+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484aeec960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.036496+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.036873+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.037236+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.037596+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.037994+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.038445+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.038674+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.038931+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.039296+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.039718+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.040119+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.040440+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.040715+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.041173+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.041392+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.041625+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.041950+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.042140+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.042363+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.042536+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.042822+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.043225+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.043627+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 29589504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.571750641s of 24.842250824s, submitted: 46
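[annotation] The `_kv_sync_thread utilization` line is BlueStore's RocksDB sync thread reporting on its last accounting window: idle 24.57 s of a 24.84 s window with 46 batches submitted, i.e. the commit path was busy roughly 1% of the time. Quick arithmetic on the figures above:

```python
# Quick arithmetic on the _kv_sync_thread report above.
idle, window, submitted = 24.571750641, 24.842250824, 46

busy = window - idle
print(f"idle  {idle / window:.1%} of the window")
print(f"busy  {busy:.3f} s over {submitted} submits "
      f"-> {busy / submitted * 1e3:.2f} ms of sync work per submit on average")
```

That works out to ~98.9% idle and under 6 ms of sync work per submitted batch.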
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.044255+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 29532160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.044615+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 29433856 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.045006+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 29229056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435561 data_alloc: 234881024 data_used: 19324928
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.045203+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.045535+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.046140+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.046530+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.046945+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.047239+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.047576+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.047809+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.048171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.048417+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.048788+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.048996+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.049401+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.049683+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484d2ca000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.050160+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.050593+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.756362915s of 18.791091919s, submitted: 4
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484a845680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.051124+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.051494+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.052013+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.052379+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.052846+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.053194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.053700+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.054083+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.620291+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x56484aa51860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x56484aeee5a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847ff7000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff7000 session 0x56484af2c780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.620542+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848e20800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x564848e265a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847cf8c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.284442902s of 10.303551674s, submitted: 2
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847cf8c00 session 0x564849e341e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.620983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847fcc400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847fcc400 session 0x56484a8443c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484887c000 session 0x56484ab37a40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab370e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.621202+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a5ec3c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 33398784 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564847d114a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847cef860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.621609+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.621986+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69400 session 0x56484a9fef00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356384 data_alloc: 218103808 data_used: 11759616
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9fe780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.622337+0000)
Dec 05 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 05 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355060447' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
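[annotation] The two `ceph-mon` lines are the monitor leader's command dispatch and the audit-channel record of the same request: a `client.admin` session from 192.168.122.100 issued the equivalent of `ceph osd crush tree --show-shadow`, which prints the CRUSH hierarchy including the per-device-class shadow trees. A sketch fetching the same data as JSON from Python, under the same CLI/keyring assumptions as the epoch example above; the JSON layout (a top-level "nodes" list with "type" and "name" per entry) is my reading of the tree dump and should be verified:

```python
import json
import subprocess

# Same request the audit log shows, fetched as JSON. Assumes `ceph` CLI +
# keyring; the "nodes"/"type"/"name" schema is an assumption to verify.
out = subprocess.run(
    ["ceph", "osd", "crush", "tree", "--show-shadow", "--format", "json"],
    check=True, capture_output=True, text=True).stdout
tree = json.loads(out)

# Shadow buckets carry a '~class' suffix in their names.
for node in tree["nodes"]:
    print(f"{node['type']:>6s}  {node['name']}")
```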
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a9fe000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564848e272c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x26d9b95/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.622620+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.622968+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.623278+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.623750+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364170 data_alloc: 218103808 data_used: 11759616
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.624080+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.624433+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.624759+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.625077+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848ec7860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70800 session 0x56484a842960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab0f860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484813dc20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737582207s of 13.344229698s, submitted: 87
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.625421+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac421e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484ab36f00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 39936000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x56484a78ef00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9b8780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a8434a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535513 data_alloc: 234881024 data_used: 20418560
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.625865+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 36896768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.626287+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.626664+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564847d11860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.626876+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab36b40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.627253+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571513 data_alloc: 234881024 data_used: 25505792
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484aa50b40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aa51860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.632190+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.632464+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.632679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 35463168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.632866+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33005568 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.633108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 27271168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663144 data_alloc: 251658240 data_used: 37560320
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.633561+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.633972+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.634216+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.634451+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 23625728 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.634861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.635294+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.635697+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.636085+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.636471+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.637038+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.637420+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.637789+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.638189+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.638439+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.638786+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.639154+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.639505+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.639877+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.640220+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.640440+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.640670+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.641015+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136028160 unmapped: 23552000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.552471161s of 32.833507538s, submitted: 33
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.641367+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 20283392 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.641742+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139304960 unmapped: 20275200 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.642086+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7387000/0x0/0x4ffc00000, data 0x4212bc8/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791954 data_alloc: 251658240 data_used: 38576128
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.642461+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.642711+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.643011+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 15753216 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.643291+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 15073280 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.643506+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 16146432 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865102 data_alloc: 251658240 data_used: 38723584
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.643675+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.643919+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.644274+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.644497+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.644810+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865262 data_alloc: 251658240 data_used: 38727680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.645123+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.018227577s of 13.871615410s, submitted: 193
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.645408+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.645674+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.646176+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6672000/0x0/0x4ffc00000, data 0x4b17bc8/0x4bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.646400+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865686 data_alloc: 251658240 data_used: 38731776
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.646769+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.647151+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.647576+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.647868+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.648353+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1863118 data_alloc: 251658240 data_used: 38731776
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.648675+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.649048+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.649324+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab0fc20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.649697+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484813cd20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484ab374a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848048800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848048800 session 0x56484a9fe780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.988618851s of 13.008173943s, submitted: 2
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x564847d114a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143294464 unmapped: 17899520 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a845e00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.650037+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f5d46000/0x0/0x4ffc00000, data 0x5442bf1/0x5518000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143327232 unmapped: 17866752 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1934703 data_alloc: 251658240 data_used: 38731776
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.650393+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 14704640 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a844780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.650719+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab37a40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc53/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.651142+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.651561+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.652099+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2046908 data_alloc: 251658240 data_used: 38731776
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.652538+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc8c/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0400 session 0x56484aa51a40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.653072+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8425a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.653475+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 25378816 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a843680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.653758+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a843860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 25059328 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.653963+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.890540123s of 11.288828850s, submitted: 66
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144015360 unmapped: 25051136 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.654217+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2050670 data_alloc: 251658240 data_used: 38731776
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 24993792 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.654383+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 24813568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.654577+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 22355968 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.654773+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 15196160 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.654984+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 158777344 unmapped: 10289152 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.655131+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2184302 data_alloc: 268435456 data_used: 56541184
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.655407+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.655678+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.655943+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.656285+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.656510+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.656727+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.656991+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.657209+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161628160 unmapped: 7438336 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.657450+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.657667+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.657864+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.658165+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.658407+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.658627+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.659007+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.659251+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.659562+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.659785+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.660090+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.660420+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.660600+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.661021+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.661369+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.661786+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.662008+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.662436+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.662751+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.663004+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.663182+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.762050629s of 34.795322418s, submitted: 5
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165208064 unmapped: 3858432 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.663377+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2233138 data_alloc: 268435456 data_used: 58028032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a4e000/0x0/0x4ffc00000, data 0x6738c9c/0x6810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165330944 unmapped: 3735552 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.663580+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 6807552 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.663758+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165462016 unmapped: 6758400 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.663996+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.664204+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.664400+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2345136 data_alloc: 268435456 data_used: 58961920
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.664601+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.664986+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.665320+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a5e01e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484a9fe000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x5648489743c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.665570+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.665853+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2161675 data_alloc: 251658240 data_used: 49319936
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.666082+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.666645+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.678235054s of 13.497112274s, submitted: 173
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484aa505a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564848144d20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.667001+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161587200 unmapped: 10633216 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a42000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484af7c3c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.667238+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.667600+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.667997+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.668224+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.668557+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.669074+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.669834+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.670324+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.670664+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.671128+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.671442+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.671790+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.671998+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.672260+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.672571+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.672993+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.673223+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.673572+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.674124+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.674518+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.674713+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.953754425s of 21.209007263s, submitted: 50
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.675052+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1883210 data_alloc: 251658240 data_used: 36667392
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.675470+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.675863+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.676304+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1000 session 0x564847f98d20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6169000/0x0/0x4ffc00000, data 0x501ec2a/0x50f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.676487+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ac43e00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.676988+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.677346+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.679232+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.679472+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.679716+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.680056+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.680341+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.680647+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.681150+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.681495+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.682231+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.682605+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.682970+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.683449+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.683681+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.684064+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.684349+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.684806+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.685101+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.685490+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.686575+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.687059+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.687365+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.687679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.688151+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.688571+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.688874+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.689356+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.689666+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.690132+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.690391+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 2982 syncs, 3.80 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2397 writes, 9102 keys, 2397 commit groups, 1.0 writes per commit group, ingest: 9.64 MB, 0.02 MB/s
                                            Interval WAL: 2397 writes, 959 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.690789+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.691092+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.691500+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.691809+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.692213+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.692422+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.692774+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69000 session 0x564849e34000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: mgrc ms_handle_reset ms_handle_reset con 0x56484885e000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:38:39 compute-0 ceph-osd[207795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: get_auth_request con 0x56484a7f1000 auth_method 0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: mgrc handle_mgr_configure stats_period=5
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.693110+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847e8dc00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848045400 session 0x564848144960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6c00 session 0x564847cee1e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564848045400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.693649+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.694020+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.694343+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.694641+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.695071+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.695333+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.695714+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.695932+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.696384+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.699302+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.699531+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.699944+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.700334+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.700717+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.701051+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.701330+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.701536+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.701731+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.702036+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.702302+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.702628+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.703021+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.703341+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.703684+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.703914+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.704108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.704442+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.704663+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.705079+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.705510+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.706186+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.706484+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.706825+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.707238+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.707570+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.707878+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.708580+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.708863+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.709297+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.709684+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.710071+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.710346+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.710838+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.711207+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.711571+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.711788+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.712133+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.712368+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.712656+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.712959+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.713158+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.713478+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.713956+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.714299+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.714683+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac425a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484ac423c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x5648488f1e00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a842d20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.254913330s of 99.396072388s, submitted: 32
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8423c0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a842f00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 40951808 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x564848a7a5a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.714920+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848005c20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1400 session 0x56484a91d680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.715188+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.715583+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612692 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.716068+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.716496+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aeed0e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.716824+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484aeede00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.717259+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484aeec960
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af68400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484aeecb40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.717561+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614530 data_alloc: 234881024 data_used: 21114880
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484d2ca000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.717769+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.717984+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.718243+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.719407+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.719594+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614262 data_alloc: 234881024 data_used: 21233664
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.723052+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.723239+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.723446+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.723647+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.723838+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.724195+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.724532+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.724940+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.725265+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.726324+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.726658+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.726923+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6000 session 0x5648481421e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564849476400
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.727363+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.727579+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.727829+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.728098+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.728474+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.728858+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.729240+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.585376740s of 31.791507721s, submitted: 41
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.729430+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138772480 unmapped: 40796160 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.729863+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 40689664 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.730378+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138911744 unmapped: 40656896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.730782+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.731106+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.731302+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.731645+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.731973+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.732194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.732595+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.732836+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.733281+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734733582s of 12.462025642s, submitted: 110
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.733680+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.734043+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f71fb000/0x0/0x4ffc00000, data 0x3f83c4a/0x405b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.734554+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 40353792 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.734787+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.735171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.735487+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.735703+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.735986+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.736410+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.736824+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.737184+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.737416+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 39272448 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:58.737792+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.947365761s of 12.310205460s, submitted: 67
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.738021+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.738193+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.738389+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.738683+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.738870+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.739108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.739499+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.739831+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.740262+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.740511+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.740673+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.741018+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.741392+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.741670+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.741930+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.742105+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.742384+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.742733+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.743161+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.743553+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.743982+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.747040+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.747345+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.748274+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.748603+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.748823+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.749098+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.749409+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.749600+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.749781+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.749996+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.750165+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.750362+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.750564+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.750754+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.751157+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.751483+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.751751+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.752145+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.973342896s of 39.005554199s, submitted: 6
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.752448+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.752830+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704558 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.753279+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.753838+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.754265+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.754685+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.754961+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.755430+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.755947+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.756355+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.756706+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.757115+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.757466+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.757702+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.757962+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.758213+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.758552+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.758932+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.759129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.759431+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.759633+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.475843430s of 21.493835449s, submitted: 3
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.760246+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.760447+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.760771+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.761228+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.761640+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.761956+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.762419+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.762780+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.763108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.763296+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.763637+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.763914+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.764171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.764461+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.764714+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.765047+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.765328+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.765619+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.765850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.766041+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.766225+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.766414+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.766590+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.766791+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.767063+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.767279+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.767684+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.768108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.768301+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.768572+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.689928055s of 29.696563721s, submitted: 1
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.768983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.769418+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.769695+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.770111+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.770602+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.771062+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.771558+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.771950+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.772211+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.772493+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.773036+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.773237+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.773572+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.773732+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.774097+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.774600+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.774843+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.775024+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.775229+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.775647+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.776017+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.776270+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.776657+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.777055+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.777259+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.777642+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.777975+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.778353+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.778744+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.778995+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.779288+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.779634+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.780051+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.780404+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.780760+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.781129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.781606+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.781972+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.782363+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.782735+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.783083+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.783372+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.783745+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.784054+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.784385+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.784773+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.785441+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.785849+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.786186+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.786358+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.786874+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.787350+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.787678+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.788078+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.788377+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.788684+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.789112+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.789850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.790737+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.791290+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.798088+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.798646+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.799967+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.800581+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.801228+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.801493+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.802128+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.803243+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.803509+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.804044+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.804717+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.805136+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.805767+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.806026+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.806387+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.806640+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.807377+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.807741+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.808094+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.808286+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.808639+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.808999+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.809318+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.809503+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.809714+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.809938+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.810175+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.810375+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.810599+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.810804+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.811052+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.811254+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.811771+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.812019+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.812240+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.812456+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.812688+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.812938+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.813161+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.813341+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.813560+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.813770+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.814045+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.814250+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.814440+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.814655+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.814928+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.815118+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.816151+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.817789+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.819522+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.821214+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.822835+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.824050+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.824311+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.824722+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.825021+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.825352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.825744+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.826178+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.826711+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.827153+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.827618+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.828076+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.828629+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.829224+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.829703+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.830110+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.830531+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.831011+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.831345+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.831747+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.832128+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.832566+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.832980+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.833350+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.833711+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.834095+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.834444+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.834864+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.835174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.835513+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.835987+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.836333+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.836673+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.837036+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.837508+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.837786+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.838196+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.838545+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.838874+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.839182+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.839455+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.839699+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.839983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.840362+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 39018496 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.840605+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.841044+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.841405+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.841752+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.841952+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.842368+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.842774+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.843256+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.843554+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.844014+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.844355+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.844604+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.844983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.845223+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.845424+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.845740+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.846022+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.846300+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.846666+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.846954+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.847322+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.847623+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.848160+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.848510+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.848837+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.849300+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.849754+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.850112+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.850471+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.850820+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.851326+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.851735+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.852149+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.852470+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.852861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.853247+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.853653+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.854023+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.854397+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.854773+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.855314+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.855569+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.856009+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.856369+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.378692627s of 200.388336182s, submitted: 1
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.856746+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.857144+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.857622+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704586 data_alloc: 234881024 data_used: 25264128
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.858133+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.858505+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.858852+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.859342+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.859681+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.860220+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.860629+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.861181+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.861530+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.861811+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.862164+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.862485+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.862871+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.863497+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.863823+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705590 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.864288+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.864694+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.865175+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.865535+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.224842072s of 22.257825851s, submitted: 4
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.865846+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.866253+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.866603+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.866832+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140951552 unmapped: 38617088 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.867142+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.867319+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.867557+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.867794+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.868157+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.868482+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.868846+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.869270+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.869652+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.870167+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.544095993s of 13.567526817s, submitted: 2
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.870652+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.871198+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.871721+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.872328+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.872738+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.873105+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.873308+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.873719+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.874112+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.874370+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.874753+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.875215+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.875494+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.875973+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.876362+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.876751+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.877173+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.877568+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.878004+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.878410+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.878806+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.879159+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.879550+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.880120+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.880536+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.881023+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.881422+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.881837+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.882239+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.882614+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.883119+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.883560+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.884144+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.884524+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.884987+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.885220+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.885679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.886171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.886510+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.887008+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.887404+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.887824+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.888203+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.888482+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.888850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.889229+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.889673+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.891361+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.893318+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.894035+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.894561+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.895108+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.896219+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.896586+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.897152+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.897794+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.898255+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.898606+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.899043+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.899385+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.899807+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.899998+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.900208+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.900459+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.900805+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.901109+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.901356+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.901713+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.902123+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.902560+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.903058+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.903318+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.903657+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.904170+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.904539+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.905048+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.905352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.905761+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.906238+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.906575+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.907054+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.907423+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.907810+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.908171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.908615+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.909120+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.909464+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.909799+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.910160+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.910470+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.910857+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.911184+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.911455+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.911803+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.912191+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.912550+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.912760+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141008896 unmapped: 38559744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.913075+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.913464+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.913771+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.914288+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.914684+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.915005+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.915343+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.915987+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.916369+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.916682+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.917062+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.917380+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.917578+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 38551552 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.918006+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.918235+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.918599+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.918850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.919110+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.919458+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.919690+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.920172+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.920505+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.920797+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.921294+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.921648+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.922046+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.922410+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.922755+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.923045+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141025280 unmapped: 38543360 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.923347+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.923685+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.924107+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.924418+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.924773+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.925149+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.925557+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.926015+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.926338+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.926709+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.927074+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.927467+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.927794+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.928179+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.928615+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.929056+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 38535168 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.929455+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.929845+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.930286+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.930734+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.931216+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.931655+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.932073+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.932500+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.933090+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.933418+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.933801+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.934237+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.2 total, 600.0 interval
                                            Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 11K writes, 3184 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 439 writes, 1119 keys, 439 commit groups, 1.0 writes per commit group, ingest: 1.04 MB, 0.00 MB/s
                                            Interval WAL: 439 writes, 202 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.934596+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.935165+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.935569+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 38526976 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.936074+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.936412+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.936793+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.937131+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.937454+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.937861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.938246+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.938654+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.939214+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 38518784 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.939558+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.940062+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.940220+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.940541+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.941081+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.941435+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.941782+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.942099+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.942476+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.943168+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.943525+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.944385+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.944699+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.945385+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.945971+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.946636+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.947232+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.947574+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 38510592 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.948103+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.948538+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.948968+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.949356+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.949699+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.950086+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.950582+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.951055+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.951315+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.951827+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.952283+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.952675+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.953079+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.953427+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.953787+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141066240 unmapped: 38502400 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.954142+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.954458+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.954748+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.954985+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.955554+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.955738+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.956124+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.956363+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.956570+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 218103808 data_used: 25268224
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.956864+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.957268+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.958168+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 211.454177856s of 211.471817017s, submitted: 2
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69800 session 0x56484a9b85a0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847f9cb40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.958405+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 38494208 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x564849e35860
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86b5000/0x0/0x4ffc00000, data 0x261bbe8/0x26f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.958787+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.959187+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.959583+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.960011+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.960334+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.960629+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86df000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.961123+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.961505+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.961861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f86df000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.962268+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.962544+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438405 data_alloc: 218103808 data_used: 15904768
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.962826+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.963115+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053515434s of 13.407748222s, submitted: 59
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1800 session 0x56484aeec000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484ab36b40
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 44580864 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b98000/0x0/0x4ffc00000, data 0x25f1bb5/0x26c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,1])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.963451+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484ab0e000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.963790+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.964251+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315764 data_alloc: 218103808 data_used: 11771904
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.964586+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.965054+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.965490+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9619000/0x0/0x4ffc00000, data 0x19a9b33/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.965998+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.966417+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.966815+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315764 data_alloc: 218103808 data_used: 11771904
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.967227+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f9619000/0x0/0x4ffc00000, data 0x19a9b33/0x1a7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.967777+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f1800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.968190+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.907560349s of 12.213282585s, submitted: 45
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 139 ms_handle_reset con 0x56484a7f1800 session 0x564847f9c780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.968639+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.969068+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352751 data_alloc: 218103808 data_used: 11780096
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130973696 unmapped: 48594944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 140 ms_handle_reset con 0x56484af69800 session 0x564847f98d20
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.969439+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130908160 unmapped: 48660480 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _renew_subs
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 ms_handle_reset con 0x56484af69c00 session 0x5648488f1e00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.969850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.970254+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.970672+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.971164+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329473 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.971555+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.972021+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.972351+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.972719+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.973107+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329473 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 47579136 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac70c00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 ms_handle_reset con 0x56484ac70c00 session 0x564848ec6000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.952308655s of 12.327050209s, submitted: 54
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.973513+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19aee7e/0x1a84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131997696 unmapped: 47570944 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.974023+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 47505408 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.974295+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 47505408 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.974574+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.975085+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.975472+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.975862+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.976343+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.976768+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.977129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.977498+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.978070+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.978436+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.978803+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.979222+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.979563+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.979994+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.980346+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.980725+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.981194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.981562+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.982155+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.982538+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.983018+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.983326+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.983546+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.983722+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.984163+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.984483+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.984848+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.985158+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.985524+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.985875+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.986181+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.986581+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.987053+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.987426+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.987839+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.988200+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.988543+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.988966+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.989429+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.989820+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.990190+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.990600+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.991052+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.991459+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.991827+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.992266+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.992652+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.993100+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.993526+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.994013+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.994321+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.994678+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.995096+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.995534+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.996033+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.996439+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.997013+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.997433+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.997875+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.998455+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.999030+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.999428+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.000003+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.000375+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.000746+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.001251+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.001596+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.002033+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.002527+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.002796+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.003157+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.003426+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.003809+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.004026+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.004377+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.004728+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.004991+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.005289+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.005483+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.005769+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.006163+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.006337+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:00.006526+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 47480832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:01.006709+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:02.007126+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 47489024 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:03.007435+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132055040 unmapped: 47513600 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:04.007610+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 47685632 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:05.007814+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 47325184 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:06.008047+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 58368000 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf dump' '{prefix=perf dump}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf schema' '{prefix=perf schema}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
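[editor's note] The do_command entries here show the OSD's admin socket serving a sweep of introspection commands (config diff/show, counter dump/schema, log dump, perf dump/histogram/schema), each logged with a 0-byte result. These are the same commands reachable from the host with 'ceph daemon osd.1 <cmd>'; a sketch of issuing one, assuming the ceph CLI is installed on compute-0 and osd.1's admin socket is in its default location:

import json
import subprocess

# Issue one of the admin-socket commands logged above from the host.
result = subprocess.run(["ceph", "daemon", "osd.1", "perf", "dump"],
                        capture_output=True, text=True, check=True)
counters = json.loads(result.stdout)
print(sorted(counters)[:5])  # first few perf counter sections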
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:07.008212+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:08.008378+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:09.008544+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:10.008776+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:11.008993+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:12.009194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:13.009443+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:14.009660+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:15.009948+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:16.010192+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:17.010342+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:18.010682+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:19.011075+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
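
The two rocksdb messages that precede each _resize_shards set the high-priority pool ratio for two caches in turn, and the values never vary anywhere in this section. As decimals they look arbitrary, but a quick check shows they are simple fixed fractions re-applied on every resize:

    from fractions import Fraction

    for ratio in (0.285714, 0.0555556):
        f = Fraction(ratio).limit_denominator(100)
        print(f"{ratio} ~= {f} = {float(f):.6f}")
    # 0.285714 ~= 2/7 and 0.0555556 ~= 1/18: constant ratios, re-logged each
    # time the cache tuner resizes the commit cache, hence the repetition.
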
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:20.011428+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:21.011591+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:22.011785+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:23.012100+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:24.012435+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:25.012679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:26.012997+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:27.013321+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:28.013533+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
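
Every tune_memory line in this section carries the same five figures: the 4 GiB budget (matching Ceph's default osd_memory_target), the heap statistics, and the cache size before and after tuning. The heap figure is the sum of its mapped and unmapped parts, and old mem equals new mem throughout, i.e. the tuner is in steady state. A short parsing sketch that verifies both points from one logged line:

    import re

    # One tune_memory line, copied verbatim from the log above.
    line = ("prioritycache tune_memory target: 4294967296 mapped: 131768320 "
            "unmapped: 58843136 heap: 190611456 old mem: 2845415832 "
            "new mem: 2845415832")

    fields = {k: int(v) for k, v in
              re.findall(r"(target|unmapped|mapped|heap|old mem|new mem): (\d+)", line)}

    # The heap statistic is self-consistent: mapped + unmapped == heap.
    assert fields["mapped"] + fields["unmapped"] == fields["heap"]
    print(f"target {fields['target'] / 2**30:.0f} GiB, "
          f"mapped heap {fields['mapped'] / 2**20:.0f} MiB, "
          f"cache size unchanged: {fields['old mem'] == fields['new mem']}")

One further observation, visible across the whole run: the journald timestamp is fixed at 02:38:39 while the embedded expiry times advance second by second from 02:27 onward, so this burst was evidently buffered and flushed to the journal at once rather than written in real time.
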
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:29.013741+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:30.014048+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:31.014205+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:32.014536+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:33.014717+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:34.014941+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:35.015264+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:36.015576+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:37.015732+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:38.016023+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:39.017077+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:40.017369+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:41.017712+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:42.018109+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:43.018479+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:44.018974+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:45.019275+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:46.019702+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:47.019932+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:48.020345+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:49.020839+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:50.021280+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:51.021662+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:52.022181+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:53.022352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:54.022766+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:55.023263+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:56.023670+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:57.024116+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:58.024458+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:59.024989+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:00.025348+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:01.025698+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:02.026451+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:03.026967+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:04.027300+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:05.027671+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:06.027939+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:07.028298+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:08.028679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:09.029098+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:10.031791+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:11.032173+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:12.032498+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:13.032874+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:14.033148+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:15.033556+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:16.034104+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:17.034518+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:18.035015+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:19.035385+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:20.035672+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:21.035930+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:22.036111+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:23.036457+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:24.036737+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:25.037081+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:26.037431+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:27.037755+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:28.038121+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:29.038456+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:30.038817+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:31.039174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:32.039457+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:33.039842+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:34.040394+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:35.040778+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:36.041198+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:37.041631+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:38.042011+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:39.042327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:40.042675+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:41.043118+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:42.043418+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:43.043843+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:44.044225+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:45.044605+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:46.045129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:47.045456+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:48.045694+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:49.046159+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:50.046650+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:51.047128+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:52.047589+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:53.048090+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:54.048519+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:55.049468+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:56.049867+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:57.050288+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:58.050568+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:59.050968+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:00.051346+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:01.051723+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:02.052054+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:03.052465+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:04.052866+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:05.053376+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 ms_handle_reset con 0x564849476800 session 0x56484a9fe780
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484a7f0800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:06.053706+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:07.054135+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 ms_handle_reset con 0x564847e8dc00 session 0x56484a78ef00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484ac71000
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 ms_handle_reset con 0x564848045400 session 0x56484ac43680
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x564847e8dc00
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:08.054603+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:09.055076+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:10.055545+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:11.056024+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:12.056397+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:13.056836+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:14.057201+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:15.057624+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:16.058050+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:17.058414+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:18.058849+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:19.059300+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:20.059740+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:21.060162+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:22.060521+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:23.061045+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:24.061460+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:25.061798+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:26.062208+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:27.062662+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:28.063145+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:29.063545+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:30.063807+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:31.064186+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:32.064513+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:33.064747+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:34.065207+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:35.065495+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:36.065839+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:37.066264+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:38.066658+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:39.067066+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:40.067509+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:41.068077+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:42.068518+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:43.068877+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:44.069367+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:45.069778+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:46.070613+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:47.071098+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:48.071512+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:49.071992+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131768320 unmapped: 58843136 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:50.072408+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:51.072829+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:52.073194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:53.073502+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:54.073781+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:55.074138+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:56.074524+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:57.074993+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:58.075372+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:59.075780+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:00.076142+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:01.076569+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:02.077137+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:03.077547+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:04.078280+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:05.078876+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:06.079421+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:07.079624+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:08.080102+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:09.080455+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131776512 unmapped: 58834944 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:10.080647+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:11.081102+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:12.081427+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:13.081718+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:14.082093+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:15.082471+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:16.082997+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:17.083506+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:18.083861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:19.084253+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:20.084610+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:21.085061+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:22.085412+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:23.085745+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:24.086107+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:25.086448+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:26.087076+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 ms_handle_reset con 0x564849476400 session 0x56484aafe1e0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: handle_auth_request added challenge on 0x56484af69800
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:27.087397+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131784704 unmapped: 58826752 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:28.087732+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:29.088144+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:30.088491+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:31.089118+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:32.089518+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:33.090034+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:34.090453+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:35.090864+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:36.091327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:37.091777+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:38.092204+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 58818560 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:39.092589+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:40.093097+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:41.093428+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:42.093720+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:43.094014+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:44.094327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:45.094688+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:46.095259+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:47.095610+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:48.095856+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:49.096214+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:50.096538+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:51.096845+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131801088 unmapped: 58810368 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:52.097174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:53.097558+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:54.098364+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:55.099267+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:56.099985+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:57.100736+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:58.101208+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:59.101797+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:00.102132+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:01.102612+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:02.103162+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:03.103713+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:04.104243+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:05.104721+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:06.105187+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:07.105601+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131809280 unmapped: 58802176 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:08.105940+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:09.106771+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:10.107123+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:11.107544+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:12.108135+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:13.108501+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:14.110267+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:15.110760+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:16.111267+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:17.111702+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:18.112178+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:19.112612+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:20.113283+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:21.113669+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:22.114118+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:23.114475+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131817472 unmapped: 58793984 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:24.114841+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:25.115311+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:26.115806+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:27.116163+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:28.116563+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:29.117063+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:30.117441+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:31.117828+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:32.118105+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:33.118461+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131825664 unmapped: 58785792 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:34.118782+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:35.119195+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:36.119652+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:37.120113+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:38.120560+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:39.121081+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:40.121402+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:41.121741+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:42.122146+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:43.122503+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:44.123019+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:45.123413+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:46.123865+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:47.124262+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:48.124696+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:49.125093+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:50.125450+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:51.125830+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:52.126267+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:53.126531+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:54.126797+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131833856 unmapped: 58777600 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:55.127109+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:56.127485+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:57.127754+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:58.128136+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:59.128389+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:00.128764+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:01.129361+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:02.129657+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:03.130110+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:04.130469+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:05.131030+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131842048 unmapped: 58769408 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:06.131994+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:07.132354+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:08.132728+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:09.133125+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:10.133572+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:11.134055+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:12.134394+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:13.134776+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:14.135236+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:15.135789+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:16.136438+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:17.136999+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:18.137394+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:19.138005+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 58761216 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:20.138477+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:21.139024+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:22.139451+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:23.140028+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:24.140427+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:25.140801+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:26.141326+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:27.141692+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:28.142129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:29.142478+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:30.143139+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:31.143520+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:32.143858+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:33.144329+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:34.144756+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:35.144984+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131858432 unmapped: 58753024 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:36.145472+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:37.145987+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:38.146389+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:39.146775+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:40.147051+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:41.147473+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:42.148234+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:43.148737+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:44.149143+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:45.149491+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:46.150024+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:47.150412+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:48.150870+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:49.151400+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:50.152776+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:51.153022+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:52.153336+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:53.153707+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:54.154139+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:55.154547+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:56.154964+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:57.155345+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:58.155853+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:59.156280+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:00.156590+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:01.156995+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:02.157327+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:03.157768+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:04.158182+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:05.158597+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:06.158929+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:07.159294+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 58736640 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:08.159638+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:09.159998+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:10.160384+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:11.160821+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:12.161174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:13.161520+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:14.161858+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:15.162159+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:16.162558+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:17.162962+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:18.163330+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:19.163674+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:20.164180+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:21.164577+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:22.164770+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:23.165171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 58728448 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:24.165554+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:25.165980+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:26.166466+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:27.166827+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:28.167451+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:29.167821+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:30.168305+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:31.168671+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:32.169014+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:33.169371+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:34.169854+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:35.170025+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:36.170557+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:37.171063+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:38.171502+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:39.171993+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131891200 unmapped: 58720256 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:40.172412+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:41.173038+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:42.173225+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:43.173559+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:44.173996+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:45.174374+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:46.174853+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:47.175224+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:48.175658+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:49.176238+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131899392 unmapped: 58712064 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:50.176628+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:51.177070+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:52.177295+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:53.177693+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:54.178149+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:55.178575+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:56.179106+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:57.179468+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:58.179815+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.2 total, 600.0 interval
                                            Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                            Cumulative WAL: 12K writes, 3420 syncs, 3.58 writes per sync, written: 0.04 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 497 writes, 1263 keys, 497 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                            Interval WAL: 497 writes, 236 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:59.180183+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:00.180500+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:01.180824+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:02.181161+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:03.181479+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:04.181794+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 58703872 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:05.182162+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:06.182585+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:07.183050+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:08.183384+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:09.183637+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:10.184059+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:11.184436+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:12.184789+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:13.185246+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:14.185662+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:15.186122+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:16.186589+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:17.186805+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:18.187128+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:19.187321+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131915776 unmapped: 58695680 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:20.187650+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:21.187876+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:22.188268+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:23.188528+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:24.188949+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:25.189174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:26.189590+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:27.190161+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:28.190556+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131923968 unmapped: 58687488 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:29.190965+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:30.191326+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:31.193206+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:32.193646+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:33.194115+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:34.194501+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 58679296 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:35.195313+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:36.195752+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:37.196136+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:38.196548+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:39.198012+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:40.199709+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:41.201690+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:42.203069+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:43.204564+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:44.206178+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:45.207696+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:46.209380+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:47.211002+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:48.212567+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:49.214289+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 58671104 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:50.215842+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:51.217469+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:52.219092+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:53.219516+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:54.220129+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:55.220477+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:56.220823+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 58662912 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:57.221194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:58.222781+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:59.223139+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:00.223509+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:01.223766+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:02.224027+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:03.224389+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:04.224878+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:05.225430+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:06.226005+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:07.226388+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 58654720 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:08.226748+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:09.227157+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:10.227506+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:11.228098+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:12.228463+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:13.228658+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:14.228825+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:15.229262+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:16.229693+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:17.230169+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:18.230564+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:19.230981+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:20.231357+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:21.231720+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:22.232121+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:23.232526+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131964928 unmapped: 58646528 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:24.232965+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:25.233342+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:26.233768+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:27.234151+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:28.235121+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:29.235380+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:30.235726+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:31.236076+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 58638336 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:32.237086+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131981312 unmapped: 58630144 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:33.237435+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 58621952 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:34.237671+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 58621952 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:35.238085+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131997696 unmapped: 58613760 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 600.671325684s of 601.309997559s, submitted: 103
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:36.239352+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132022272 unmapped: 58589184 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:37.239525+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132046848 unmapped: 58564608 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:38.240088+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:39.240529+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:40.241002+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:41.241406+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:42.241861+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:43.242253+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:44.242683+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:45.243072+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:46.243500+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:47.244037+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:48.244550+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:49.245075+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:50.245441+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:51.245838+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:52.246279+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:53.246681+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:54.247120+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:55.247469+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:56.247879+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:57.248350+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:58.248751+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:59.249138+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:00.249545+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:01.250043+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:02.250414+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:03.250804+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:04.251154+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:05.251515+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:06.251856+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:07.252167+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:08.252564+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:09.253143+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:10.253518+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:11.254016+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:12.254422+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:13.254784+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:14.255171+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:15.255637+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:16.256109+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:17.256704+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:18.257189+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:19.257587+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:20.258050+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:21.258486+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:22.258837+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:23.259250+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:24.259679+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:25.260127+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:26.260604+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:27.261059+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:28.261461+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:29.261835+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:30.262232+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:31.262599+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:32.263110+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:33.263390+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:34.263809+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 58548224 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:35.264227+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:36.264691+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:37.265057+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:38.265533+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:39.265851+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:40.266193+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:41.266510+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:42.266983+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:43.267360+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:44.267596+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:45.268094+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:46.268616+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:47.269150+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:48.269517+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:49.269870+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 58540032 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:50.270293+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:51.270663+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:52.271114+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:53.271489+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:54.272005+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:55.272273+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:56.272867+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:57.273373+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:58.273781+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:59.274297+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:00.274727+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:01.275019+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132079616 unmapped: 58531840 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:02.275351+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:03.275696+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:04.276122+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:05.276452+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:06.277007+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:07.277370+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:08.277723+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:09.278105+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:10.278466+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:11.278831+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:12.279258+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:13.280850+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:14.281570+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:15.282255+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:16.283338+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:17.284036+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132087808 unmapped: 58523648 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:18.284709+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:19.285380+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:20.285738+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:21.286294+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:22.286843+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:23.287339+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:24.288045+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:25.288555+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:26.289153+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:27.289505+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:28.290074+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:29.290482+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:30.290974+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:31.291411+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132096000 unmapped: 58515456 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:32.291853+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:33.292298+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:34.292788+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:35.293276+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:36.293846+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:37.294203+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:38.294616+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:39.295146+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:40.295531+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:41.296071+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:42.296557+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:43.297051+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:44.297454+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:45.298174+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:46.298866+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:47.299316+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:48.299611+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132112384 unmapped: 58499072 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:49.299869+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 58490880 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:50.300194+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:51.300384+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:52.300627+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:53.301069+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:54.301610+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:55.301961+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:56.302305+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132128768 unmapped: 58482688 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:57.302620+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:58.302784+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:59.302989+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:00.303177+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:01.303499+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:02.303685+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:03.304078+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 58474496 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:04.305212+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329396 data_alloc: 218103808 data_used: 11792384
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132145152 unmapped: 58466304 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:05.305415+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132145152 unmapped: 58466304 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:06.309449+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132325376 unmapped: 58286080 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:07.309647+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132358144 unmapped: 58253312 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f97d7000/0x0/0x4ffc00000, data 0x19b0901/0x1a87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: tick
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_tickets
Dec 05 02:38:39 compute-0 ceph-osd[207795]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:08.310266+0000)
Dec 05 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131866624 unmapped: 58744832 heap: 190611456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:39 compute-0 ceph-osd[207795]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:39 compute-0 rsyslogd[188644]: imjournal from <compute-0:ceph-osd>: begin to drop messages due to rate-limiting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 05 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872313172' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 05 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106024776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1405470282' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1005288873' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/355060447' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: pgmap v2693: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3872313172' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2106024776' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.543404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319543459, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1016, "num_deletes": 251, "total_data_size": 1371458, "memory_usage": 1394480, "flush_reason": "Manual Compaction"}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319553207, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1347008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54629, "largest_seqno": 55644, "table_properties": {"data_size": 1341958, "index_size": 2509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11621, "raw_average_key_size": 20, "raw_value_size": 1331610, "raw_average_value_size": 2315, "num_data_blocks": 112, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764902234, "oldest_key_time": 1764902234, "file_creation_time": 1764902319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 9846 microseconds, and 4037 cpu microseconds.
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.553259) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1347008 bytes OK
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.553276) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.554794) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.554807) EVENT_LOG_v1 {"time_micros": 1764902319554802, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.554822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1366519, prev total WAL file size 1366519, number of live WAL files 2.
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.555482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1315KB)], [131(7681KB)]
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319555617, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 9212594, "oldest_snapshot_seqno": -1}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 6874 keys, 7488825 bytes, temperature: kUnknown
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319602452, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 7488825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7448359, "index_size": 22177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 180244, "raw_average_key_size": 26, "raw_value_size": 7329453, "raw_average_value_size": 1066, "num_data_blocks": 871, "num_entries": 6874, "num_filter_entries": 6874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764902319, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.602643) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 7488825 bytes
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.605456) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.5 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.5 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(12.4) write-amplify(5.6) OK, records in: 7388, records dropped: 514 output_compression: NoCompression
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.605471) EVENT_LOG_v1 {"time_micros": 1764902319605463, "job": 80, "event": "compaction_finished", "compaction_time_micros": 46891, "compaction_time_cpu_micros": 18127, "output_level": 6, "num_output_files": 1, "total_output_size": 7488825, "num_input_records": 7388, "num_output_records": 6874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319605762, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902319606931, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.555376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.607347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.607353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.607356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.607359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:38:39.607362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 05 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 05 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744710921' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 05 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/436085226' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 05 02:38:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629443624' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 05 02:38:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163897241' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/744710921' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/436085226' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/629443624' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/163897241' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 05 02:38:40 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15999 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:40 compute-0 podman[493921]: 2025-12-05 02:38:40.698117937 +0000 UTC m=+0.092368934 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 05 02:38:40 compute-0 podman[493920]: 2025-12-05 02:38:40.72943527 +0000 UTC m=+0.125651642 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 05 02:38:40 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15997 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16001 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:41 compute-0 nova_compute[349548]: 2025-12-05 02:38:41.124 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:41 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16003 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:41 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mon[192914]: from='client.15999 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mon[192914]: from='client.15997 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mon[192914]: from='client.16001 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mon[192914]: from='client.16003 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:41 compute-0 ceph-mon[192914]: pgmap v2694: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:41 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16009 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 05 02:38:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256681494' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16013 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mon[192914]: from='client.16005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mon[192914]: from='client.16009 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3256681494' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 05 02:38:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 05 02:38:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818520246' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 02:38:42 compute-0 podman[494195]: 2025-12-05 02:38:42.670230185 +0000 UTC m=+0.081613981 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 05 02:38:42 compute-0 podman[494196]: 2025-12-05 02:38:42.706995931 +0000 UTC m=+0.117399899 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 05 02:38:42 compute-0 podman[494197]: 2025-12-05 02:38:42.709670727 +0000 UTC m=+0.104851146 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 05 02:38:42 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16017 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917823849' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 nova_compute[349548]: 2025-12-05 02:38:43.240 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:43 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16021 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611016716' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: from='client.16013 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2818520246' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: from='client.16017 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1917823849' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: pgmap v2695: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:43 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2611016716' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 02:38:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202981 data_alloc: 218103808 data_used: 10743808
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:25.485028+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e98f2000 session 0x5630e9036780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:26.485509+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:27.485956+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:28.486349+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:29.486750+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050933 data_alloc: 218103808 data_used: 3756032
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:30.487133+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:31.487677+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:32.488055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:33.488423+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 17260544 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb27a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:34.488754+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.792471886s of 10.155592918s, submitted: 55
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92446720 unmapped: 17743872 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051005 data_alloc: 218103808 data_used: 3756032
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:35.489150+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92479488 unmapped: 17711104 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:36.490078+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:37.490474+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:38.491107+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:39.491590+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fae6a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050933 data_alloc: 218103808 data_used: 3756032
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:40.492096+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:41.492555+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:42.493164+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:43.493426+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:44.493802+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.582604408s of 10.326370239s, submitted: 106
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9032000 session 0x5630e6761e00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9032c00 session 0x5630e8cb83c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e9033000 session 0x5630e64c7e00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fae6a000/0x0/0x4ffc00000, data 0x73bad3/0x804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 17620992 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050353 data_alloc: 218103808 data_used: 3756032
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:45.494060+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89628672 unmapped: 20561920 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 ms_handle_reset con 0x5630e8dcd000 session 0x5630e814b860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:46.494438+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:47.494767+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:48.495047+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:49.495386+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:50.495783+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:51.496211+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:52.496607+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:53.497029+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:54.497399+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:55.497804+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:56.498176+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:57.498518+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:58.498871+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:05:59.499298+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:00.499673+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:01.500558+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:02.501010+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:03.501444+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:04.501963+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:05.502323+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:06.502696+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:07.503139+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:08.503554+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:09.504179+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:10.504560+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:11.505378+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:12.505610+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:13.505875+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:14.506175+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:15.506428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:16.506786+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:17.507183+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:18.507601+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:19.508016+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:20.508430+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:21.508755+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:22.509168+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:23.509464+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:24.509780+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:25.510189+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:26.510584+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:27.511051+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:28.511418+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:29.511982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:30.512398+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:31.512824+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:32.513057+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:33.513462+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:34.513872+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:35.514286+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:36.514751+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:37.515130+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:38.515495+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:39.516052+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:40.516468+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:41.516685+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:42.517155+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:43.517389+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:44.517784+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:45.518016+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:46.518387+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:47.518753+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:48.519128+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:49.519541+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:50.519873+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:51.520365+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:52.520556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:53.520977+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:54.521299+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:55.521666+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:56.522018+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:57.522252+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:58.522614+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:06:59.523070+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:00.523431+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:01.523725+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:02.524077+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:03.524438+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:04.524815+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:05.525200+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 20553728 heap: 110190592 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974169 data_alloc: 218103808 data_used: 258048
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 81.124771118s of 81.432746887s, submitted: 56
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fb4e3000/0x0/0x4ffc00000, data 0xc3ac3/0x18b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:06.525452+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89784320 unmapped: 28803072 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 132 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 133 ms_handle_reset con 0x5630e9032000 session 0x5630e86d0b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:07.525766+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89817088 unmapped: 28770304 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:08.526179+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89841664 unmapped: 28745728 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 ms_handle_reset con 0x5630ea005000 session 0x5630e8c4be00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:09.526574+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89882624 unmapped: 28704768 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:10.527065+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:11.527413+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:12.527720+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:13.528188+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:14.528533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:15.528980+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:16.529298+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:17.529869+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:18.530341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:19.530668+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:20.531195+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:21.531523+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:22.532133+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:23.532502+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:24.532837+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:25.533246+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:26.533608+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:27.534154+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:28.534614+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:29.535341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:30.535718+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041480 data_alloc: 218103808 data_used: 274432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:31.536115+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:32.536449+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:33.536806+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:34.537055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:35.537498+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:36.537875+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:37.538395+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:38.538764+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:39.539206+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:40.539566+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:41.540112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:42.540507+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:43.541020+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:44.541384+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:45.541754+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:46.542092+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:47.542437+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:48.542832+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:49.543309+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:50.543660+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:51.543874+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:52.544305+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:53.544556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:54.544982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 28688384 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:55.545292+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:56.545688+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:57.546136+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:58.546536+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:07:59.546869+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89907200 unmapped: 28680192 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:00.547357+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041640 data_alloc: 218103808 data_used: 278528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:01.547780+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:02.548171+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:03.548353+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 heartbeat osd_stat(store_statfs(0x4facda000/0x0/0x4ffc00000, data 0x8c7203/0x993000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:04.548677+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 89915392 unmapped: 28672000 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:05.549067+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.437492371s of 59.699771881s, submitted: 34
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 ms_handle_reset con 0x5630e8b23c00 session 0x5630e84970e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e837f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 22036480 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061116 data_alloc: 218103808 data_used: 7094272
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e837f800 session 0x5630e64c6000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:06.549408+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 96550912 unmapped: 22036480 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:07.549642+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 135 heartbeat osd_stat(store_statfs(0x4facd7000/0x0/0x4ffc00000, data 0x8c8d80/0x996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e8b23c00 session 0x5630e849c960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97255424 unmapped: 21331968 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:08.549873+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 135 ms_handle_reset con 0x5630e8dcd000 session 0x5630e64e94a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97353728 unmapped: 21233664 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:09.550241+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e90ef860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 21389312 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9cb5000/0x0/0x4ffc00000, data 0x18ead80/0x19b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:10.550452+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18ec951/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 97214464 unmapped: 21372928 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e848f4a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e7464c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e7464c00 session 0x5630e848e960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198548 data_alloc: 218103808 data_used: 7102464
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e679bc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:11.550795+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8dcd000 session 0x5630e679b0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e74bb4a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99090432 unmapped: 19496960 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e8a2d0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:12.551006+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e98f3400 session 0x5630e62b5860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8b23c00 session 0x5630e73805a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99090432 unmapped: 19496960 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:13.551462+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e8dcd000 session 0x5630e8d38d20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630e9032000 session 0x5630e8492780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 99098624 unmapped: 19488768 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:14.553522+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 19832832 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 ms_handle_reset con 0x5630ea005000 session 0x5630e64e65a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:15.554000+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 19824640 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278747 data_alloc: 218103808 data_used: 7106560
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:16.554216+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 heartbeat osd_stat(store_statfs(0x4f9299000/0x0/0x4ffc00000, data 0x23049d6/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 19824640 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:17.554619+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.525561333s of 12.098001480s, submitted: 91
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 98852864 unmapped: 19734528 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:18.554862+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 10297344 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:19.556302+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 9019392 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:20.556653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4ad20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 9019392 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401641 data_alloc: 234881024 data_used: 23805952
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:21.557077+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9295000/0x0/0x4ffc00000, data 0x2306439/0x23d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d38b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 8986624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:22.557438+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e8cc12c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 8986624 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e6b52960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:23.557614+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e848fe00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9032000 session 0x5630e62b5680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 14827520 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:24.557858+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:25.558207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226068 data_alloc: 234881024 data_used: 11317248
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:26.558426+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103768064 unmapped: 14819328 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:27.558998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103809024 unmapped: 14778368 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:28.559250+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 103833600 unmapped: 14753792 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:29.559657+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:30.560090+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:31.560361+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:32.560581+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:33.561130+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:34.561502+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:35.561868+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:36.562123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:37.562385+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:38.564222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:39.566120+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:40.568264+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:41.569063+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1299988 data_alloc: 234881024 data_used: 21700608
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:42.571343+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f9ab2000/0x0/0x4ffc00000, data 0x16e9449/0x17bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:43.573494+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 10502144 heap: 118587392 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:44.574450+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e672cb40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e8497e00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e73f2f00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e6760780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.943056107s of 27.035001755s, submitted: 27
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916000 session 0x5630e64e63c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e900dc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e8d392c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:45.576035+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e8c4c780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e79ead20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:46.576372+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356478 data_alloc: 234881024 data_used: 21700608
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:47.576626+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:48.576850+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f97d2000/0x0/0x4ffc00000, data 0x1dc8459/0x1e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108699648 unmapped: 14090240 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:49.577259+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916400 session 0x5630ea57d2c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 11485184 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:50.577525+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e79e3e00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 11444224 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:51.577770+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380426 data_alloc: 234881024 data_used: 21864448
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8b23c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8b23c00 session 0x5630e700de00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e8dcd000 session 0x5630e6b52d20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111190016 unmapped: 11599872 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:52.577962+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9916800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94e3000/0x0/0x4ffc00000, data 0x20b7459/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111230976 unmapped: 11558912 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:53.578191+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20bc48c/0x2192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94d2000/0x0/0x4ffc00000, data 0x20c648c/0x219c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 11255808 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:54.578419+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112910336 unmapped: 9879552 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:55.578819+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 7479296 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:56.579039+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1433605 data_alloc: 251658240 data_used: 27631616
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116080640 unmapped: 6709248 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3800 session 0x5630e73832c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:57.579422+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9916800 session 0x5630e6456f00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.604548454s of 12.920574188s, submitted: 49
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f94d2000/0x0/0x4ffc00000, data 0x20c648c/0x219c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 10223616 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:58.579725+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c86000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 10215424 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:08:59.580073+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112582656 unmapped: 10207232 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:00.580295+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 6463488 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:01.580465+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399972 data_alloc: 234881024 data_used: 21270528
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f93b0000/0x0/0x4ffc00000, data 0x21e5449/0x22b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 6299648 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:02.580839+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:03.581129+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:04.581482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92eb000/0x0/0x4ffc00000, data 0x22a2449/0x2375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:05.581861+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92eb000/0x0/0x4ffc00000, data 0x22a2449/0x2375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:06.582608+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424694 data_alloc: 234881024 data_used: 21520384
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:07.583072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115965952 unmapped: 6823936 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:08.583355+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115974144 unmapped: 6815744 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.178137779s of 10.709356308s, submitted: 124
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:09.583608+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:10.583948+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:11.584275+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415802 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:12.584568+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:13.585061+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:14.585308+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:15.585640+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 7782400 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:16.586072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415802 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:17.586423+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:18.586673+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:19.586984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:20.587381+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.986150742s of 12.017604828s, submitted: 4
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d6000/0x0/0x4ffc00000, data 0x22c5449/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:21.587655+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415670 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:22.588029+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:23.588378+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 7774208 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:24.588716+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 7766016 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:25.589064+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:26.589503+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415978 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:27.589809+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:28.590221+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:29.590748+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 7757824 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:30.591115+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:31.591454+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415978 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:32.591683+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x22c9449/0x239c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:33.592160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.032866478s of 13.056352615s, submitted: 3
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:34.592533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:35.593027+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:36.593392+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 7749632 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416246 data_alloc: 234881024 data_used: 21524480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:37.593814+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e64c7860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917400 session 0x5630e86dbe00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917c00 session 0x5630e64e7c20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 7766016 heap: 122789888 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9033c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9033c00 session 0x5630e73f2780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f92d0000/0x0/0x4ffc00000, data 0x22ca449/0x239d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8cc0000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:38.594056+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e8a2d0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:39.594537+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:40.595213+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:41.596138+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1505841 data_alloc: 234881024 data_used: 21512192
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9032000 session 0x5630e86dbc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630ea57cb40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:42.596351+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 23126016 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917400 session 0x5630e6b53c20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:43.596635+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 23126016 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:44.596856+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 23117824 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.861010551s of 11.164711952s, submitted: 59
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:45.597124+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 23117824 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3000 session 0x5630e64c6b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f2000 session 0x5630e79e3860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:46.597428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 25010176 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1413893 data_alloc: 234881024 data_used: 17195008
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e8c86780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f882b000/0x0/0x4ffc00000, data 0x2d6f4ab/0x2e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:47.597792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 25534464 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:48.598128+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 25526272 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630e90ef680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:49.598563+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917c00 session 0x5630e90ef860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112574464 unmapped: 25526272 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:50.598945+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f2000 session 0x5630e90ee780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e98f3000 session 0x5630e90efe00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:51.599131+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414291 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:52.599328+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 25509888 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f8f2f000/0x0/0x4ffc00000, data 0x266b498/0x273f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:53.599687+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25468928 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:54.600036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 25468928 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630ea005000 session 0x5630e6776780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9917000 session 0x5630e90ee1e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9033000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.266552925s of 10.460735321s, submitted: 37
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:55.600244+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 ms_handle_reset con 0x5630e9033000 session 0x5630e90eed20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:56.600560+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:57.601036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:58.601406+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:09:59.602030+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:00.602287+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:01.602739+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:02.603235+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:03.603635+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:04.604093+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:05.604480+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:06.605090+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:07.605501+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:08.605933+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:09.609404+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 25436160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:10.609639+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:11.609853+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:12.610077+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:13.610349+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:14.610722+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:15.611060+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:16.611424+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335384 data_alloc: 234881024 data_used: 18087936
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:17.611789+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:18.612223+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 25427968 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:19.612632+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.385606766s of 24.469984055s, submitted: 18
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99cf000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,0,1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:20.612978+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:21.613386+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351520 data_alloc: 234881024 data_used: 18911232
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:22.613694+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:23.614183+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:24.614519+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:25.614968+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:26.615238+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351680 data_alloc: 234881024 data_used: 18915328
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:27.615562+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:28.617254+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:29.617568+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 25272320 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:30.617983+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:31.618228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351680 data_alloc: 234881024 data_used: 18915328
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:32.618588+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:33.619016+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:34.619379+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:35.619666+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 heartbeat osd_stat(store_statfs(0x4f99ce000/0x0/0x4ffc00000, data 0x1bcc426/0x1c9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:36.620092+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.105096817s of 17.149562836s, submitted: 19
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1350429 data_alloc: 234881024 data_used: 18915328
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:37.620432+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 25264128 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:38.620855+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
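[editor's note] This is the one real state change in the section: the OSD receives osdmap epoch 138, renews its monitor subscriptions, and from here on the message prefix switches from "osd.0 137" to "osd.0 138". A small sketch for spotting such transitions (pattern fitted to these lines, helper hypothetical):

    import re

    # Report osdmap epoch advances seen in handle_osd_map lines,
    # e.g. the 137 -> 138 step above.
    OSDMAP = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

    def epoch_steps(lines):
        for line in lines:
            m = OSDMAP.search(line)
            if m:
                _, last, have = (int(g) for g in m.groups())
                if have < last:
                    yield f"advancing {have} -> {last}"

    log = ["osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]",
           "osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]"]
    print(list(epoch_steps(log)))  # ['advancing 137 -> 138']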
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e84923c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:39.621351+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:40.621729+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:41.622159+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356379 data_alloc: 234881024 data_used: 18923520
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:42.622560+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:43.623376+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:44.623835+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:45.624205+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:46.624600+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 25239552 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356379 data_alloc: 234881024 data_used: 18923520
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:47.624924+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f3000 session 0x5630e8a2dc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917000 session 0x5630e8a2c5a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ea005000 session 0x5630e90ef4a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:48.625264+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99ca000/0x0/0x4ffc00000, data 0x1bcdfe9/0x1ca3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e62b43c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e90ee5a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 25255936 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:49.625555+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e73f3680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.139289856s of 13.231978416s, submitted: 12
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e900c000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f3000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e700da40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112369664 unmapped: 25731072 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:50.625763+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f3000 session 0x5630e6760960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4cd20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e672d4a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e8496960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e679b860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917000 session 0x5630e64e63c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:51.626109+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146508 data_alloc: 218103808 data_used: 7118848
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:52.626455+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e6777860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e617a3c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:53.626835+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e62a0960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e8497680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105979904 unmapped: 32120832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:54.627045+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:55.627400+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:56.627762+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146508 data_alloc: 218103808 data_used: 7118848
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:57.628215+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:58.628588+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:10:59.629113+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106061824 unmapped: 32038912 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:00.629939+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4faaa5000/0x0/0x4ffc00000, data 0xaacf87/0xb81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 105996288 unmapped: 32104448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:01.630256+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8cf4400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.733372688s of 12.021333694s, submitted: 56
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 107225088 unmapped: 30875648 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:02.630815+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8cf4400 session 0x5630e64e7c20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185235 data_alloc: 218103808 data_used: 7671808
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa98a000/0x0/0x4ffc00000, data 0xc0efb0/0xce4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8c4cf00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa98a000/0x0/0x4ffc00000, data 0xc0efb0/0xce4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:03.631204+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:04.631597+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:05.632002+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e8c4c5a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:06.632354+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa780000/0x0/0x4ffc00000, data 0xe18fe9/0xeee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e679b860
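handle_auth_request / ms_handle_reset pairs like the two above record peer connections being challenged and then torn down; connection pointers such as 0x5630e9032800 recur later in the log, most likely because the allocator reuses freed connection objects. A small tally over a captured snippet (the sample strings are trimmed copies of nearby lines):

    import re
    from collections import Counter

    sample = """\
    monclient: handle_auth_request added challenge on 0x5630e9032800
    osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e679b860
    monclient: handle_auth_request added challenge on 0x5630e9032c00
    osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e679bc20
    """

    resets = Counter(re.findall(r"ms_handle_reset con (0x[0-9a-f]+)", sample))
    for con, n in resets.most_common():
        print(con, n)
    # 0x5630e9032800 1
    # 0x5630e9032c00 1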
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 31358976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:07.632703+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185571 data_alloc: 218103808 data_used: 7684096
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e679bc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106774528 unmapped: 31326208 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032000 session 0x5630e679be00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:08.633081+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:09.633431+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 31293440 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:10.633717+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 31285248 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:11.634084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106815488 unmapped: 31285248 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:12.634315+0000)
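Across consecutive _check_auth_rotating lines, the expiry horizon slides forward by almost exactly one second per message, matching the once-per-second monclient tick cadence. A two-line check on a pair of the timestamps above (assuming the +0000 suffix parses as a UTC offset):

    from datetime import datetime

    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    a = datetime.strptime("2025-12-05T02:11:11.634084+0000", fmt)
    b = datetime.strptime("2025-12-05T02:11:12.634315+0000", fmt)
    print((b - a).total_seconds())   # ~1.0002 s: one tick per second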
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192468 data_alloc: 218103808 data_used: 8331264
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:13.634511+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:14.634780+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:15.635033+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:16.635277+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:17.635589+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:18.635982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:19.636390+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:20.636722+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:21.636985+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:22.637227+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:23.637550+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:24.637959+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:25.638279+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:26.638987+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:27.639407+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:28.639797+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:29.640317+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:30.640517+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:31.640821+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:32.641193+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203828 data_alloc: 234881024 data_used: 9945088
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:33.641441+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa77e000/0x0/0x4ffc00000, data 0xe1901b/0xef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:34.641739+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 106823680 unmapped: 31277056 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.723632812s of 32.954315186s, submitted: 40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:35.642022+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30097408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:36.643492+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30097408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:37.644143+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232924 data_alloc: 234881024 data_used: 10022912
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:38.644533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa57d000/0x0/0x4ffc00000, data 0x101801b/0x10ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:39.645298+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 29065216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:40.645595+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 30564352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:41.645766+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111738880 unmapped: 26361856 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:42.646044+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 26312704 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304974 data_alloc: 234881024 data_used: 10981376
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:43.646237+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ca9000/0x0/0x4ffc00000, data 0x18ee01b/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:44.646545+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:45.646953+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:46.647252+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ca9000/0x0/0x4ffc00000, data 0x18ee01b/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:47.647388+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312330 data_alloc: 234881024 data_used: 10944512
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:48.647734+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 111861760 unmapped: 26238976 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.991629601s of 13.776041031s, submitted: 141
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:49.648046+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:50.648394+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:51.648779+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 26001408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c8c000/0x0/0x4ffc00000, data 0x190b01b/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:52.649021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313406 data_alloc: 234881024 data_used: 11014144
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:53.649331+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:54.649784+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 25985024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c8c000/0x0/0x4ffc00000, data 0x190b01b/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:55.650231+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:56.650634+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:57.650950+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 25927680 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314082 data_alloc: 234881024 data_used: 11014144
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x191c01b/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:58.651271+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:11:59.651689+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:00.652110+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.975809097s of 12.017213821s, submitted: 6
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x191c01b/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:01.652825+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112181248 unmapped: 25919488 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8cc0780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:02.653060+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 24625152 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346354 data_alloc: 234881024 data_used: 11014144
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:03.653436+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e86daf00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e900d0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e90363c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 24625152 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e67612c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f99f6000/0x0/0x4ffc00000, data 0x1ba101b/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e8c4c1e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e6b53e00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e6b52960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:04.653763+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e900c3c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e73805a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:05.654110+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:06.654531+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:07.654787+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401728 data_alloc: 234881024 data_used: 11014144
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:08.655101+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e98f2000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e98f2000 session 0x5630e700c5a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:09.655535+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e7383c20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:10.656028+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032c00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032c00 session 0x5630e64e7680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.383359909s of 10.606524467s, submitted: 36
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:11.656236+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238000 session 0x5630e9024b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 24051712 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e8cc0d20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9238400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:12.656497+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114057216 unmapped: 24043520 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8a2c780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401728 data_alloc: 234881024 data_used: 11014144
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:13.656719+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9459000/0x0/0x4ffc00000, data 0x213d02b/0x2215000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113524736 unmapped: 24576000 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e97cac00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e97cac00 session 0x5630e79ea1e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9456000/0x0/0x4ffc00000, data 0x213e05e/0x2218000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1,0,0,2])
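Unlike every other heartbeat in this stretch, the one above carries a non-empty op history, [1,0,0,2]. The message does not document the bucket semantics, so the sketch below only extracts and flags non-empty histograms:

    import re

    def op_hist(line):
        """Return the op-history buckets from a heartbeat line, else None."""
        m = re.search(r"op hist \[([0-9,]*)\]", line)
        if m is None:
            return None
        return [int(x) for x in m[1].split(",")] if m[1] else []

    line = "osd.0 138 heartbeat osd_stat(..., peers [1,2] op hist [1,0,0,2])"
    buckets = op_hist(line)
    if buckets:
        print(f"non-idle heartbeat: {sum(buckets)} ops in {len(buckets)} buckets")
    # -> non-idle heartbeat: 3 ops in 4 buckets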
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9229400 session 0x5630e64e6b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:14.656952+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcdc00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 24420352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:15.657168+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 24420352 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:16.657439+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114851840 unmapped: 23248896 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:17.657693+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 23060480 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1438654 data_alloc: 234881024 data_used: 15085568
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:18.658078+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:19.658444+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:20.658865+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 21782528 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:21.659348+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 21774336 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:22.659587+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 21774336 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450014 data_alloc: 234881024 data_used: 16699392
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:23.660000+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.337281227s of 12.433724403s, submitted: 17
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:24.660371+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:25.660704+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:26.661084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:27.661442+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:28.661653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:29.662064+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:30.662348+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:31.662671+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:32.663032+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:33.663312+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:34.663516+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 21708800 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:35.663933+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 21700608 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:36.664197+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 21700608 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:37.664417+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:38.664659+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:39.664917+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:40.665081+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:41.665251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:42.665456+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 21692416 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452254 data_alloc: 234881024 data_used: 16691200
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:43.665746+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f942c000/0x0/0x4ffc00000, data 0x216805e/0x2242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:44.665951+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:45.666267+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 21684224 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:46.666485+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116424704 unmapped: 21676032 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:47.666818+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.989109039s of 24.041637421s, submitted: 13
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 19341312 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493354 data_alloc: 234881024 data_used: 16728064
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:48.667016+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 118226944 unmapped: 19873792 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:49.667736+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8308000/0x0/0x4ffc00000, data 0x327e05e/0x3358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,4,3,2,2])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 15941632 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f79a4000/0x0/0x4ffc00000, data 0x3be005e/0x3cba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:50.667940+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 16220160 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:51.668194+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 16531456 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:52.668352+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 16498688 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1691062 data_alloc: 234881024 data_used: 18505728
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:53.668699+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3ca105e/0x3d7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 16498688 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:54.669112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f78ea000/0x0/0x4ffc00000, data 0x3ca105e/0x3d7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 16457728 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:55.669443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 16531456 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:56.670055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9238400 session 0x5630e7301860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9917400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 119947264 unmapped: 18153472 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9917400 session 0x5630e64e65a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:57.670223+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f8090000/0x0/0x4ffc00000, data 0x285d05e/0x2937000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518456 data_alloc: 234881024 data_used: 16740352
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:58.670538+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:12:59.671165+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:00.671540+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 17883136 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.288977623s of 13.481291771s, submitted: 293
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e79eb4a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e90372c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:01.671970+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 21291008 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630ea57dc20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:02.672207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f996f000/0x0/0x4ffc00000, data 0x1c1bffc/0x1cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391140 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:03.672528+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:04.674746+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:05.675154+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:06.675501+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x1c1bfca/0x1cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116817920 unmapped: 21282816 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:07.675994+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391140 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9970000/0x0/0x4ffc00000, data 0x1c1bfca/0x1cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:08.676621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:09.677408+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:10.678043+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:11.678326+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:12.678564+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391184 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:13.679155+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 21274624 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:14.679501+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:15.679785+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:16.680206+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:17.680578+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391184 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:18.681057+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:19.681481+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:20.681965+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:21.682326+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 21266432 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:22.682699+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.034488678s of 21.216693878s, submitted: 34
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391680 data_alloc: 234881024 data_used: 13504512
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 21258240 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:23.682858+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:24.683223+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:25.683541+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 21250048 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:26.683998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9978000/0x0/0x4ffc00000, data 0x1c1efca/0x1cf6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8dcd800 session 0x5630e8cc0960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8dcdc00 session 0x5630e8c4b0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 21184512 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:27.684211+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d381e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:28.684423+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:29.685055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:30.685382+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:31.685828+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:32.686160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:33.686536+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:34.687027+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:35.687314+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:36.687651+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:37.688084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:38.688486+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:39.688997+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:40.689171+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:41.689397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:42.689633+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:43.689863+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:44.690214+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:45.690440+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:46.690665+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:47.690971+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:48.691212+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:49.691496+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:50.691706+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:51.692067+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:52.692340+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 3600.1 total, 600.0 interval
                                            Cumulative writes: 9676 writes, 36K keys, 9676 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 9676 writes, 2587 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 2307 writes, 8716 keys, 2307 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
                                            Interval WAL: 2307 writes, 929 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:53.692844+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:54.693182+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 24068096 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:55.693514+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:56.693979+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:57.694313+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:58.694548+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 24059904 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:13:59.694750+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc ms_handle_reset ms_handle_reset con 0x5630e8b22800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: get_auth_request con 0x5630e9917400 auth_method 0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc handle_mgr_configure stats_period=5
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:00.695038+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:01.695421+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:02.695645+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:03.696095+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 23928832 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:04.696381+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:05.696735+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:06.697759+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e8867000 session 0x5630e64c7a40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e8dcd800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:07.698184+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:08.698550+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:09.698973+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:10.699164+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 23920640 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:11.699511+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:12.699826+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:13.700043+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:14.700229+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:15.700482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:16.700828+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:17.701054+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:18.701383+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:19.701777+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:20.702026+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:21.702425+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:22.702947+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:23.703381+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:24.703754+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:25.704204+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:26.704567+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:27.705089+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:28.705384+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:29.705718+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:30.706116+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:31.706453+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 23912448 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:32.706827+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 23904256 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:33.707117+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:34.707786+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:35.708239+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:36.708621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:37.709037+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:38.709353+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:39.709766+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 23953408 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:40.710386+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:41.710775+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:42.710994+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:43.711495+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:44.711759+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:45.712123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:46.712526+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:47.712716+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:48.712961+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:49.713282+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:50.713735+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:51.714006+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:52.714247+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:53.714558+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:54.714874+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:55.715341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:56.715711+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:57.716083+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:58.716436+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215415 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:14:59.716814+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 23945216 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8df000/0x0/0x4ffc00000, data 0xcb9f87/0xd8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:00.717174+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 23937024 heap: 138100736 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:01.717559+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.126289368s of 99.387001038s, submitted: 48
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 21585920 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e849cd20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:02.717768+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:03.717992+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa2bc000/0x0/0x4ffc00000, data 0x12ddf87/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261787 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:04.718290+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:05.718604+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:06.718841+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e849c960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:07.719141+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9229400 session 0x5630e64e8d20
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114360320 unmapped: 26894336 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:08.719494+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ebdfa000 session 0x5630e9025860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e8d38b40
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264741 data_alloc: 218103808 data_used: 7757824
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:09.719796+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa2bc000/0x0/0x4ffc00000, data 0x12ddf87/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:10.720045+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:11.720337+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:12.720541+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 26533888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:13.720811+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292901 data_alloc: 234881024 data_used: 11685888
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:14.721014+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:15.721232+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:16.721507+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:17.721664+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:18.721912+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:19.722198+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:20.722503+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:21.722729+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:22.722984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:23.723222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:24.723445+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:25.723780+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:26.724021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9033400 session 0x5630ea57d0e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9229400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:27.724370+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:28.724596+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302501 data_alloc: 234881024 data_used: 13074432
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:29.724864+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:30.725152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:31.725521+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:32.725992+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 26591232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:33.726329+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.751306534s of 31.864397049s, submitted: 10
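Note on the _kv_sync_thread utilization lines: each gives idle time out of a window plus a submitted count, so a busy fraction and a rough per-transaction cost fall out directly, assuming "submitted" counts kv transactions committed in that window. Using this line and a second sample that appears further down the log:

    # busy fraction from "idle X of Y, submitted: N"; 'submitted' is assumed to
    # count kv transactions committed in the window
    samples = [(31.751306534, 31.864397049, 10),     # this line
               (11.707974434, 12.498046875, 132)]    # a later sample in the log
    for idle, window, submitted in samples:
        busy = window - idle
        print(f"busy {busy:.3f} s of {window:.3f} s ({busy / window:.1%}), "
              f"{busy / submitted * 1000:.1f} ms per submitted txn")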
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304101 data_alloc: 234881024 data_used: 13115392
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114688000 unmapped: 26566656 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:34.726510+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 26542080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:35.726856+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114769920 unmapped: 26484736 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:36.727270+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,0,1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:37.727616+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:38.728026+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304581 data_alloc: 234881024 data_used: 13127680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:39.728289+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:40.728615+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:41.729012+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:42.729247+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:43.729700+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304581 data_alloc: 234881024 data_used: 13127680
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:44.730019+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa292000/0x0/0x4ffc00000, data 0x1307f87/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 114819072 unmapped: 26435584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:45.730426+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.707974434s of 12.498046875s, submitted: 132
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 25247744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:46.730940+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25567232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:47.731260+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25567232 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:48.731983+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:49.732246+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa018000/0x0/0x4ffc00000, data 0x1580f87/0x1655000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
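Note: this heartbeat is the first in the section whose store_statfs values differ from the earlier ones. Stored and allocated data each grow by roughly 2.5 MiB while available shrinks by about the same amount, consistent with a small write burst. Diffing the two snapshots under the same field interpretation as before:

    # diff the two store_statfs snapshots (first heartbeat in this section vs. this one)
    before = {"available": 0x4fa292000, "stored": 0x1307f87, "allocated": 0x13dc000}
    after  = {"available": 0x4fa018000, "stored": 0x1580f87, "allocated": 0x1655000}
    for k in before:
        delta = after[k] - before[k]
        print(f"{k:9s} {delta:+13,d} bytes ({delta / 2**20:+.2f} MiB)")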
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:50.732653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:51.733037+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:52.733253+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:53.733506+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:54.733985+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:55.734382+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:56.734735+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:57.735053+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:58.735429+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:15:59.735764+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116744192 unmapped: 24510464 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:00.736058+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:01.736278+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:02.736517+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:03.736713+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 24502272 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:04.736998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:05.737348+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:06.737702+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:07.738172+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:08.738458+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:09.738715+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:10.739147+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:11.739577+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:12.739926+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:13.740184+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:14.740419+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:15.740795+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:16.741130+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:17.741539+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:18.742043+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:19.742522+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:20.742793+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:21.743219+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116760576 unmapped: 24494080 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:22.743443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:23.743813+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:24.744461+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:25.745011+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:26.745333+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:27.745537+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:28.746733+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:29.749040+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:30.749693+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:31.749933+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:32.750229+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:33.750497+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:34.750690+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341687 data_alloc: 234881024 data_used: 13488128
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:35.751121+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:36.751482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:37.751709+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:38.752087+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:39.752499+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341847 data_alloc: 234881024 data_used: 13492224
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:40.752732+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:41.753099+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:42.753325+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 24485888 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.702072144s of 56.839138031s, submitted: 25
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa00a000/0x0/0x4ffc00000, data 0x158ef87/0x1663000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:43.753617+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:44.754056+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340915 data_alloc: 234881024 data_used: 13492224
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:45.754414+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:46.754806+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:47.755036+0000)
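One thing worth noticing in this burst: every line carries the same journal wall-clock (Dec 05 02:38:44) while the expiry stamps inside the _check_auth_rotating messages advance by almost exactly one second per tick, from 02:16:41 onward. That pattern is consistent with a backlog of buffered debug output being flushed to the journal in one go, rather than with all of these ticks actually firing within a single second. The interval is easy to confirm from two consecutive expiry stamps (copied from the lines above; a colon is inserted into the UTC offset so datetime.fromisoformat accepts it on Pythons older than 3.11):

    from datetime import datetime

    # Consecutive expiry stamps from the two _check_auth_rotating lines above.
    a = datetime.fromisoformat("2025-12-05T02:16:46.754806+00:00")
    b = datetime.fromisoformat("2025-12-05T02:16:47.755036+00:00")

    print("tick interval:", (b - a).total_seconds(), "s")  # ~1.0002 s

The same one-second cadence holds across the whole burst: one tick / _check_auth_tickets / _check_auth_rotating triple per second of internal time.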
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:48.755389+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:49.755840+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340915 data_alloc: 234881024 data_used: 13492224
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:50.756271+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:51.756466+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:52.756734+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:53.757100+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:54.757458+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341075 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:55.757790+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:56.758195+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:57.758632+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:58.759129+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 24387584 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9ff4000/0x0/0x4ffc00000, data 0x15a5f87/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.855074883s of 15.871548653s, submitted: 2
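The _kv_sync_thread utilization line above reports idle time out of a total window; the busy fraction is just the complement, and with only 2 transactions submitted in ~15.9 s the kv sync thread is essentially idle. A one-off check of the arithmetic:

    # Figures from the _kv_sync_thread line above.
    idle, total, submitted = 15.855074883, 15.871548653, 2

    busy = 1 - idle / total
    print(f"busy: {100 * busy:.3f}% over {total:.1f} s, {submitted} txns submitted")
    # -> busy: 0.104% over 15.9 s, 2 txns submitted

The earlier 56.8 s window shows the same picture (25 submissions, about 0.24% busy), so the store is seeing only trickle writes during this stretch.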
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:16:59.759556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:00.759781+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:01.760012+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:02.760326+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:03.760640+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:04.761178+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:05.761557+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:06.761973+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:07.762404+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:08.762740+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:09.763140+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341495 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:10.763524+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdf000/0x0/0x4ffc00000, data 0x15baf87/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:11.764007+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:12.764389+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.313895226s of 14.336582184s, submitted: 2
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:13.764682+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:14.765051+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:15.765645+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:16.766116+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:17.766564+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:18.766843+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:19.767311+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:20.767592+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:21.768028+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:22.768271+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:23.768852+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:24.769281+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:25.769713+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:26.770194+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:27.770671+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:28.771180+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:29.771606+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:30.772104+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:31.772596+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:32.773203+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:33.773642+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:34.774065+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:35.774377+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:36.774584+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:37.774799+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:38.775078+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:39.775430+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:40.775676+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:41.775977+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:42.776281+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:43.776493+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:44.776729+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:45.777071+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:46.777421+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:47.777872+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:48.778251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:49.778556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:50.778844+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:51.779267+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:52.779803+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:53.780021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:54.780288+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:55.780606+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:56.780942+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:57.781128+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:58.781302+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:17:59.781643+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:00.782082+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:01.782442+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:02.782715+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:03.782984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:04.783207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:05.783441+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:06.783711+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:07.784164+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:08.784500+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:09.784706+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:10.784989+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:11.785214+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:12.785587+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:13.786044+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:14.786449+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:15.787131+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:16.787508+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:17.787975+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:18.788371+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:19.788749+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:20.789076+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:21.789428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:22.789747+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:23.790076+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:24.790290+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:25.790529+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:26.790966+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:27.791297+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:28.791809+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:29.792313+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:30.793082+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:31.793291+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:32.793564+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:33.793812+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:34.794054+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:35.794325+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:36.794632+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:37.796080+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:38.796604+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:39.797126+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:40.797562+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:41.797998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:42.798292+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:43.798626+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:44.798875+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:45.799401+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:46.799755+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:47.800066+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:48.800361+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:49.800954+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:50.801330+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:51.801774+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:52.802135+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:53.802333+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:54.802692+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:55.802916+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:56.803113+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:57.803368+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:58.803566+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:18:59.804011+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:00.804307+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:01.805079+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:02.805345+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:03.805573+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:04.806015+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:05.806418+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:06.806680+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:07.807290+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:08.807508+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:09.807829+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:10.808119+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:11.808474+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:12.808822+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:13.808977+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:14.809257+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:15.809489+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:16.809803+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:17.810151+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:18.810623+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:19.811060+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:20.811500+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:21.811877+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:22.812359+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:23.812680+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:24.813049+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:25.813224+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:26.813611+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:27.814061+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:28.814496+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:29.814950+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:30.815334+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:31.815852+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:32.816232+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:33.816688+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:34.817099+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:35.817435+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:36.817853+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:37.818308+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:38.818673+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:39.819133+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:40.819481+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:41.819736+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:42.820080+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:43.820428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:44.820755+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:45.821080+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:46.821466+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:47.821862+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:48.822375+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:49.822972+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:50.823396+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:51.823769+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:52.824119+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:53.824346+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:54.824714+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:55.825046+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:56.825395+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:57.825651+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:58.826023+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:19:59.826354+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:00.826552+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:01.826761+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:02.827091+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 24141824 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:03.827532+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:04.827739+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:05.828048+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 234881024 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:06.828335+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:07.828624+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:08.828856+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:09.829159+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:10.829507+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:11.829759+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:12.830123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:13.830463+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:14.830808+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:15.831142+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:16.831547+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:17.831975+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:18.832378+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:19.832804+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:20.833208+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:21.833636+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:22.834033+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:23.834495+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:24.834754+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:25.835038+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:26.835419+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:27.835621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:28.835998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:29.836303+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:30.836575+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:31.836813+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:32.837150+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:33.837486+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:34.837794+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:35.838047+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:36.838275+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:37.838676+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:38.838982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:39.839291+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:40.839752+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:41.840021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:42.840348+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:43.840698+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 24117248 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:44.840950+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:45.841272+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:46.841666+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:47.841986+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:48.842369+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:49.842628+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:50.843005+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:51.843393+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:52.843711+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:53.844123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:54.844447+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:55.844844+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:56.845192+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:57.845563+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:58.845880+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 24100864 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:20:59.846358+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:00.846789+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:01.847148+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:02.847547+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:03.847772+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:04.848304+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:05.848514+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:06.848983+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:07.849329+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:08.849748+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:09.850143+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 24092672 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:10.850506+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:11.850866+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:12.851149+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:13.851520+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:14.851871+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:15.852265+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:16.853084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:17.853321+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:18.853603+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 24084480 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:19.854221+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:20.854647+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:21.854964+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341979 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:22.855205+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 24076288 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0x15bbf87/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:23.855591+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:24.856072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 251.589416504s of 251.610870361s, submitted: 3
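[annotation] This _kv_sync_thread line is the only non-periodic entry in the stretch: BlueStore's KV sync thread reports it spent essentially the whole 251.6 s window idle, with just 3 transactions submitted. The implied utilization is a one-line computation:

    # Utilization implied by the _kv_sync_thread report above.
    idle, window, submitted = 251.589416504, 251.610870361, 3
    busy = window - idle
    print(f"busy {busy * 1e3:.1f} ms of {window:.1f} s "
          f"({busy / window:.4%} utilization), {submitted} txns submitted")

That works out to about 21.5 ms busy, or 0.0085% utilization.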
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:25.856393+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:26.856707+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:27.856982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:28.857341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:29.857661+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:30.858033+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:31.858362+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:32.858724+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:33.859106+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:34.859385+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 24068096 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:35.859759+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:36.860159+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:37.860397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:38.860828+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 24059904 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:39.861272+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:40.861649+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:41.862029+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:42.862410+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:43.862748+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:44.863321+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:45.863696+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:46.864118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 24051712 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:47.864640+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:48.865072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:49.865463+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:50.865997+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:51.866405+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:52.866998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:53.867361+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:54.867765+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:55.868189+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:56.868620+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:57.869058+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:58.869415+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:21:59.869985+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:00.870392+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:01.870684+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:02.871191+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 24043520 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:03.871567+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 24035328 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:04.872041+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:05.872475+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:06.873582+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:07.873817+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:08.874181+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:09.874653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:10.875116+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:11.875553+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:12.876237+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 24305664 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:13.876803+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:14.877144+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:15.877627+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:16.878490+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:17.878800+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:18.879241+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:19.879667+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:20.880084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:21.880593+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:22.881055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:23.881388+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:24.881736+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 24297472 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:25.882129+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:26.882360+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:27.882679+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:28.882930+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:29.883232+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:30.883548+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 24289280 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:31.883828+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:32.884013+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:33.884287+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:34.884611+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 24281088 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:35.885273+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:36.885610+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:37.886187+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:38.886546+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:39.887072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:40.887468+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:41.887981+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:42.888457+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:43.888800+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:44.889870+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:45.890251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:46.890463+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 24272896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:47.890842+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:48.891167+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:49.891592+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:50.892146+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116989952 unmapped: 24264704 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:51.892533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 24256512 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:52.893053+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 116998144 unmapped: 24256512 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:53.893422+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:54.893802+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:55.894118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:56.894430+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:57.894783+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:58.895160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:22:59.895768+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:00.896151+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:01.896590+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:02.897064+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:03.897358+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:04.897712+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:05.898177+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:06.898662+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:07.899036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:08.899394+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 24248320 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:09.899734+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:10.900084+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:11.900441+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:12.900792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:13.901122+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:14.901500+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117014528 unmapped: 24240128 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:15.902081+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 24231936 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:16.902483+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:17.902848+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:18.903238+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:19.903602+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:20.904048+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:21.904420+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:22.904685+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:23.905140+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 24223744 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:24.905468+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:25.905827+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:26.906180+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117039104 unmapped: 24215552 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:27.906566+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:28.907005+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:29.907407+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:30.907806+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:31.908226+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:32.908575+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:33.908998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:34.909618+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:35.909993+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:36.910224+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:37.911245+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:38.912465+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 24207360 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:39.912970+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:40.913454+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:41.914226+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:42.914556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:43.914998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:44.915409+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:45.915780+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:46.916159+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 24199168 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:47.917021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:48.917349+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:49.918149+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:50.918640+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:51.919044+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 24190976 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:52.919510+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4200.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2749 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 337 writes, 730 keys, 337 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s
                                            Interval WAL: 337 writes, 162 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:53.920036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:54.920442+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 24182784 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:55.920829+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:56.921252+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:57.921585+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:58.922117+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:23:59.922511+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:00.922845+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:01.923237+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:02.923569+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 24174592 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:03.924038+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:04.924369+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:05.924801+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 24166400 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:06.925443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:07.925840+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:08.926198+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:09.926549+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:10.926993+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:11.927529+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:12.927865+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:13.928341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:14.928826+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:15.929161+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:16.929525+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:17.929982+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:18.930481+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 24158208 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:19.931014+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:20.931376+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:21.931806+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:22.932162+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:23.932533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:24.932732+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:25.933055+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:26.933443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 24150016 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:27.933984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:28.934343+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:29.935143+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:30.935492+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:31.936603+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:32.937149+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:33.938083+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:34.938491+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 24133632 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:35.939118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:36.939448+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:37.939828+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:38.940328+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:39.940746+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:40.941141+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:41.941547+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:42.942072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:43.942397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:44.942839+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:45.943222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:46.943604+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:47.944003+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:48.944299+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:49.944714+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:50.945089+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 24125440 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:51.945301+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:52.945758+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1342287 data_alloc: 218103808 data_used: 13496320
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:53.946199+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15bcf87/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:54.946473+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:55.946724+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ea005000 session 0x5630e90eef00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 24109056 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:56.946967+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 211.850372314s of 211.858688354s, submitted: 1
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630ebdfa000 session 0x5630e8d390e0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:57.947379+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:58.947683+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:24:59.948060+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:00.948527+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:01.949428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:02.949822+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:03.950176+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:04.950420+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:05.950807+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:06.951196+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:07.951679+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa3f7000/0x0/0x4ffc00000, data 0x11a3f77/0x1277000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296649 data_alloc: 218103808 data_used: 12849152
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:08.952112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 25296896 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:09.952681+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032400 session 0x5630e73825a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e9032800 session 0x5630e8cc0000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.471276283s of 13.519330025s, submitted: 11
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:10.953298+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 ms_handle_reset con 0x5630e6e6f800 session 0x5630e7381860
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa893000/0x0/0x4ffc00000, data 0x8f7f77/0x9cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:11.953662+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:12.954152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183069 data_alloc: 218103808 data_used: 7172096
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:13.954460+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:14.954827+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:15.955247+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:16.955735+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:17.956145+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183069 data_alloc: 218103808 data_used: 7172096
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf77/0x9a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:18.956570+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:19.957134+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:20.957528+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.972305298s of 11.028164864s, submitted: 11
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 28729344 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:21.958064+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 139 ms_handle_reset con 0x5630e9032400 session 0x5630e8d38f00
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fa8bd000/0x0/0x4ffc00000, data 0x8cdf54/0x9a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112590848 unmapped: 28663808 heap: 141254656 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:22.958472+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ea005000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240857 data_alloc: 218103808 data_used: 7180288
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:23.958764+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112607232 unmapped: 37044224 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 139 ms_handle_reset con 0x5630ea005000 session 0x5630e900d2c0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630ebdfa000
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:24.959172+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112615424 unmapped: 37036032 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _renew_subs
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 141 ms_handle_reset con 0x5630ebdfa000 session 0x5630e8c4c5a0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:25.959529+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:26.959984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:27.960346+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193709 data_alloc: 218103808 data_used: 7180288
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:28.960785+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:29.961328+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:30.961721+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:31.962173+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa8b4000/0x0/0x4ffc00000, data 0x8d327c/0x9a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:32.962662+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.521178246s of 11.814188957s, submitted: 43
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196011 data_alloc: 218103808 data_used: 7180288
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:33.963073+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:34.963503+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 37634048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:35.963847+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112066560 unmapped: 37584896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:36.964251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 37527552 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:37.964729+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195131 data_alloc: 218103808 data_used: 7180288
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:38.965262+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:39.965762+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:40.966211+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:41.966650+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:42.967106+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195131 data_alloc: 218103808 data_used: 7180288
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:43.967483+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:44.967988+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 37494784 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:45.968427+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:46.968774+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:47.969188+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:48.969524+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:49.970062+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:50.970438+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:51.970825+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:52.971292+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:53.971688+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:54.972118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:55.972621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:56.973159+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:57.973494+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:58.973799+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:25:59.974016+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:00.974197+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:01.974572+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:02.975015+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:03.975415+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:04.975595+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:05.976021+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:06.976384+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:07.976676+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:08.977152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:09.977455+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:10.977852+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:11.978243+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:12.978659+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:13.979174+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:14.979541+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:15.980040+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:16.980390+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:17.980744+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:18.981250+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:19.981694+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:20.982123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:21.982545+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:22.983030+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:23.983417+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:24.983768+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:25.984118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:26.984599+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:27.985036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:28.985371+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:29.985979+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:30.986449+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:31.986950+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:32.987338+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:33.987817+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:34.988262+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:35.988668+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:36.989190+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:37.989665+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:38.990169+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:39.990658+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:40.991089+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:41.991466+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:42.991817+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:43.992228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:44.992629+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:45.993072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:46.993428+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:47.993809+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:48.994066+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:49.994456+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:50.994767+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:51.995228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:52.995706+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:53.996069+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:54.996251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:55.996603+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:56.996824+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:57.997101+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:58.997312+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:26:59.998123+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:00.998341+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:01.998662+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:02.998853+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:03.999095+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:04.999281+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:05.999472+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 37486592 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:06.999960+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112279552 unmapped: 37371904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:08.000271+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 37429248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:09.000444+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 37085184 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:10.000679+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112566272 unmapped: 37085184 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:11.000949+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf dump' '{prefix=perf dump}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf schema' '{prefix=perf schema}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:12.001248+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:13.001564+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:14.001975+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:15.002158+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:16.002344+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:17.002537+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:18.002832+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:19.003552+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:20.003833+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:21.004217+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:22.004517+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:23.005002+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:24.005183+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:25.005358+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:26.005616+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
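[editor's note] rsyslogd's imjournal module noticed the journal files rotated and reopened them. It is also striking that every ceph-osd line in this stretch shares the single Dec 05 02:38:44 receive stamp while the embedded expiry times advance one second per tick (02:27:08 on past 02:28:46) — well over a minute of daemon output ingested in one sweep, consistent with buffered or replayed journal entries rather than a wall-clock-paced stream, though that reading is an inference. A rough triage sketch for spotting such bursts:

```python
# An assumption-laden triage sketch, not ground truth: if many lines
# share one receive timestamp while their embedded payload times keep
# advancing, the journal was likely (re)read in a burst, as the
# imjournal reload above suggests.
import sys
from collections import Counter

def receive_bursts(lines, threshold=100):
    """Count lines per syslog receive timestamp (first 15 characters)."""
    counts = Counter(line[:15] for line in lines if len(line) >= 15)
    return {ts: n for ts, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    print(receive_bursts(sys.stdin))   # e.g. {'Dec 05 02:38:44': ...}
```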
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:27.005999+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:28.006226+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:29.006458+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:30.007201+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:31.007803+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:32.008122+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:33.008325+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:34.008507+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:35.008839+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:36.009185+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:37.009403+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:38.009770+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:39.010135+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:40.010537+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:41.010979+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:42.011402+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:43.011796+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:44.012181+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 36872192 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:45.012465+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:46.012849+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:47.013000+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:48.013415+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:49.013787+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:50.014182+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:51.014565+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:52.014970+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:53.015153+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:54.015463+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:55.016075+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:56.016528+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:57.017005+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:58.017432+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:27:59.017857+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:00.018405+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:01.018783+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:02.019238+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:03.019711+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:04.020118+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:05.020628+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:06.020824+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:07.021209+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:08.021585+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:09.023194+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:10.023657+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:11.024120+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:12.024514+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:13.024962+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:14.025254+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:15.025636+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:16.026200+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:17.026671+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:18.027138+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:19.027528+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:20.027961+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:21.028199+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:22.028390+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:23.028763+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:24.029147+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:25.029443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:26.029783+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:27.030179+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:28.030716+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:29.031042+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:30.031493+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:31.031860+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:32.032228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:33.032634+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:34.033294+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:35.033705+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:36.034148+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:37.034567+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:38.035129+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:39.035491+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:40.036007+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:41.036364+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:42.036779+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:43.037342+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:44.037766+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:45.038183+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:46.038585+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:47.039025+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:48.039470+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:49.040023+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:50.040606+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:51.041115+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:52.041515+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:53.042058+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:54.042526+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:55.043106+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:56.043496+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:57.043980+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:58.044387+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:28:59.044857+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc ms_handle_reset ms_handle_reset con 0x5630e9917400
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: get_auth_request con 0x5630e9032800 auth_method 0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: mgrc handle_mgr_configure stats_period=5
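The five mgrc lines above record a mgr session reset followed by an immediate reconnect to 192.168.122.100 and a fresh stats_period=5 configuration. A small, illustrative filter for surfacing such session events in a saved journal excerpt; the pattern list mirrors only the message prefixes seen in this log, not an exhaustive catalogue of mgrc events:

    import re
    import sys

    # Match the mgrc session-lifecycle messages visible in this excerpt.
    EVENTS = re.compile(r"mgrc (ms_handle_reset|reconnect|handle_mgr_configure)")

    for line in sys.stdin:
        if EVENTS.search(line):
            print(line.rstrip())

Piping a saved excerpt through this script (the script name and invocation are hypothetical) prints just the reset-then-reconnect sequence, making it easy to spot among the surrounding tick and tune_memory noise.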
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:00.045461+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:01.046262+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:02.047001+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:03.047482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:04.048138+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:05.048555+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:06.049026+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:07.049552+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 ms_handle_reset con 0x5630e8dcd800 session 0x5630e8a2c960
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e6e6f800
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:08.050111+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:09.050592+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:10.051138+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:11.051634+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:12.052354+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:13.053131+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:14.053555+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:15.054094+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:16.054636+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:17.055875+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:18.057831+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:19.059612+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:20.061551+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:21.063456+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:22.065465+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:23.067176+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:24.067969+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:25.068379+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:26.068773+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:27.069175+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:28.069582+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:29.069959+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:30.070356+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:31.070717+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 36864000 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:32.071095+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:33.071476+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:34.071843+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:35.072112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:36.072491+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:37.072739+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:38.073089+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:39.073523+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:40.074062+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:41.074517+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:42.075379+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:43.075788+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112795648 unmapped: 36855808 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:44.076162+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:45.076515+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:46.077053+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:47.077424+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:48.077842+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:49.078260+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:50.078653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:51.079081+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:52.079430+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:53.079858+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:54.080347+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:55.080762+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112803840 unmapped: 36847616 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:56.081335+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:57.081714+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:58.082330+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:29:59.082674+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:00.083222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:01.083596+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:02.084045+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:03.084489+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:04.084857+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:05.085251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:06.085621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:07.085835+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:08.086210+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:09.086594+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:10.086865+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:11.087222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:12.087525+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 36839424 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:13.087792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 36831232 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:14.088179+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 36831232 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:15.088554+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112820224 unmapped: 36831232 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:16.089022+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:17.089391+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:18.089723+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:19.090096+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:20.090526+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:21.090980+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 36823040 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:22.091328+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:23.091693+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:24.091981+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:25.092349+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:26.092793+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 ms_handle_reset con 0x5630e9229400 session 0x5630e6760780
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: handle_auth_request added challenge on 0x5630e9032400
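
Annotation: these two lines, a messenger session reset on osd.0 followed by a fresh cephx challenge for the next incoming connection, are the only non-periodic events in this window; they read as a routine peer reconnect rather than an authentication failure. When reviewing a window like this it helps to suppress the periodic chatter and keep only the outliers. A filtering sketch, where the pattern list is hand-built from the recurring messages in this section and filter.py is a hypothetical script name:

import re
import sys

PERIODIC = [
    re.compile(p) for p in (
        r"monclient: tick$",
        r"monclient: _check_auth_tickets$",
        r"monclient: _check_auth_rotating have uptodate secrets",
        r"prioritycache tune_memory target:",
        r"rocksdb: commit_cache_size High Pri Pool Ratio set to",
        r"bluestore\.MempoolThread.*_resize_shards",
        r"heartbeat osd_stat\(store_statfs",
    )
]

def outliers(stream):
    """Yield only lines that match none of the known periodic patterns."""
    for line in stream:
        if not any(p.search(line) for p in PERIODIC):
            yield line

# usage: pipe the journal or a saved log through it,
# e.g.  cat osd.log | python3 filter.py
for line in outliers(sys.stdin):
    sys.stdout.write(line)
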
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:27.093197+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:28.093656+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:29.094207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:30.094707+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:31.095147+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:32.095551+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:33.095997+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:34.096446+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:35.096857+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 36814848 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:36.097171+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:37.097521+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:38.098026+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:39.098431+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:40.098832+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:41.099142+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:42.099416+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:43.099800+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112844800 unmapped: 36806656 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:44.100212+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:45.100538+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:46.101003+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:47.101359+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:48.101588+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:49.102120+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:50.102409+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:51.102707+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:52.103170+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:53.103557+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:54.104040+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:55.104424+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112852992 unmapped: 36798464 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:56.104844+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:57.105185+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:58.105849+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:30:59.106518+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:00.107035+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:01.107443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:02.107858+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:03.108397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:04.108790+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:05.109250+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:06.109675+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:07.110059+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 36790272 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:08.110787+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112869376 unmapped: 36782080 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:09.111337+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112869376 unmapped: 36782080 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:10.111728+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:11.112085+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:12.112463+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:13.112798+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:14.113197+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:15.113623+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:16.114007+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:17.114405+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:18.114793+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:19.115156+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:20.115491+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:21.115857+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:22.116203+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:23.116576+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:24.117044+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:25.117396+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112877568 unmapped: 36773888 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:26.117766+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:27.118067+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:28.118410+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:29.118793+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:30.119272+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:31.119673+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112885760 unmapped: 36765696 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:32.120104+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:33.120450+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:34.120819+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:35.121327+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:36.121754+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:37.122244+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:38.122623+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:39.123095+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 36757504 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:40.123433+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:41.123784+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:42.124112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:43.124450+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:44.124805+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:45.125153+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:46.125473+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:47.125786+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:48.126167+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:49.126500+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:50.126870+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:51.127268+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:52.127591+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:53.128072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:54.128478+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:55.128842+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 36749312 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:56.129110+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:57.129846+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:58.130191+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:31:59.130422+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:00.130811+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:01.131152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:02.131368+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:03.131588+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:04.131864+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:05.132197+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:06.132518+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112918528 unmapped: 36732928 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:07.132775+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 36724736 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:08.133112+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 36724736 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:09.133468+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 36724736 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:10.133989+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 36724736 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:11.134403+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112926720 unmapped: 36724736 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:12.134710+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:13.135133+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:14.135505+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:15.135980+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:16.136430+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:17.136812+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:18.137241+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:19.137592+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:20.138102+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:21.138382+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 36716544 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:22.138792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:23.139208+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:24.139620+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:25.140119+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:26.140482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:27.140803+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:28.141267+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:29.141629+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:30.142214+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:31.142482+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:32.143625+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:33.144068+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:34.144454+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:35.144792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 36708352 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:36.145311+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:37.145719+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:38.146163+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:39.146578+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:40.146974+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:41.147429+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:42.147824+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:43.148228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112951296 unmapped: 36700160 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:44.148613+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:45.149120+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:46.149529+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:47.150072+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:48.150351+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:49.150838+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:50.151630+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:51.152205+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112959488 unmapped: 36691968 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:52.152582+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:53.153139+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:54.153554+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:55.154058+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:56.154433+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:57.154867+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:58.155387+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:32:59.155795+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:00.156236+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:01.156518+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:02.156998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:03.157363+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 36683776 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:04.157691+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 36675584 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:05.158117+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 36675584 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:06.158331+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 36675584 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:07.158704+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 36675584 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:08.159083+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:09.159356+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:10.159700+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:11.160038+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:12.160303+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:13.160653+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:14.161109+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:15.161350+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:16.161627+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:17.162029+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:18.162396+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:19.162769+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:20.163212+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:21.163597+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:22.163777+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:23.164151+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 36667392 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:24.164501+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112992256 unmapped: 36659200 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:25.164862+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 112992256 unmapped: 36659200 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:26.165271+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:27.165855+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:28.166462+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:29.166980+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:30.167491+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:31.167966+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:32.168465+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:33.168867+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:34.169438+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:35.169834+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:36.170259+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:37.170621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:38.171093+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:39.171470+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113000448 unmapped: 36651008 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:40.171986+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:41.172433+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:42.172820+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:43.173222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:44.173680+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:45.174134+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:46.174499+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:47.174856+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:48.175242+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:49.175547+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 36642816 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:50.176107+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:51.176536+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:52.177017+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                            ** DB Stats **
                                            Uptime(secs): 4800.1 total, 600.0 interval
                                            Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                            Cumulative WAL: 10K writes, 2909 syncs, 3.56 writes per sync, written: 0.03 GB, 0.01 MB/s
                                            Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                            Interval writes: 340 writes, 834 keys, 340 commit groups, 1.0 writes per commit group, ingest: 0.30 MB, 0.00 MB/s
                                            Interval WAL: 340 writes, 160 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                            Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:53.177397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:54.177813+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:55.178207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113016832 unmapped: 36634624 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:56.178534+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:57.178817+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:58.179204+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:33:59.179567+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:00.180116+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:01.180441+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:02.180667+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:03.181094+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:04.181443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:05.181781+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:06.182168+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:07.182556+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:08.182970+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:09.183342+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:10.183674+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:11.184222+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 36626432 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:12.184648+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:13.185092+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:14.185476+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:15.185959+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:16.186261+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:17.186667+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:18.187036+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:19.187235+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 36618240 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:20.187638+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:21.187998+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:22.188350+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:23.188615+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:24.189231+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:25.190022+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:26.190381+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:27.190597+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 36610048 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:28.190951+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:29.191258+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:30.191664+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:31.193405+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:32.193808+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:33.194296+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:34.194620+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:35.195012+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:36.195371+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:37.195704+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:38.196041+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:39.196366+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:40.196812+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:41.197149+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:42.197531+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113049600 unmapped: 36601856 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:43.197983+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:44.198251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:45.198649+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:46.199128+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:47.199511+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:48.199984+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:49.200382+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:50.200808+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:51.201160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:52.201525+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:53.202067+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:54.202455+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:55.202826+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:56.203248+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:57.203609+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:58.204440+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:34:59.204724+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 36585472 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:00.205138+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:01.205615+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:02.206725+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:03.207119+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:04.207533+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:05.207957+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:06.208397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:07.208758+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113074176 unmapped: 36577280 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:08.209107+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113082368 unmapped: 36569088 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:09.209552+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113082368 unmapped: 36569088 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:10.210278+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113082368 unmapped: 36569088 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:11.210673+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:12.210870+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:13.211298+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:14.211692+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:15.212154+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:16.212523+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:17.213156+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:18.213548+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:19.214111+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:20.214530+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:21.215039+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:22.215398+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:23.215748+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 36560896 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:24.216097+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:25.216449+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:26.216821+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:27.217251+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:28.217779+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:29.218208+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:30.218630+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:31.218855+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 36552704 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:32.219033+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 36544512 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:33.219360+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 36544512 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:34.219623+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 36544512 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:35.220037+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 36544512 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 601.762023926s of 602.491333008s, submitted: 115
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:36.220397+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 36544512 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:37.220762+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113172480 unmapped: 36478976 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:38.221154+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:39.221451+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:40.221815+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:41.222167+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:42.222443+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:43.222747+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:44.223086+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:45.223445+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:46.223807+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:47.224135+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:48.224349+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:49.224669+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:50.225122+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:51.225457+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:52.225830+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:53.226207+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:54.226664+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:55.227117+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:56.227485+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:57.227836+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:58.228191+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:35:59.228583+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:00.229081+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:01.229318+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:02.229697+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:03.230143+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:04.230479+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:05.230839+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:06.231219+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:07.231437+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:08.231872+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:09.232451+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 36429824 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:10.233151+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:11.233598+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:12.234053+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:13.234337+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:14.234605+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:15.234849+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:16.235365+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:17.235582+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:18.236077+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:19.236475+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:20.237011+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:21.237414+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:22.237792+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:23.238152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:24.238500+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:25.238826+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:26.239203+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:27.239611+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:28.240238+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113229824 unmapped: 36421632 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:29.240662+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:30.241141+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:31.241557+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:32.242037+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:33.242427+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:34.242809+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:35.243247+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:36.243629+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:37.244160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:38.244576+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:39.244993+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:40.245347+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:41.246228+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:42.246565+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:43.247139+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:44.247515+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 36413440 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:45.248377+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:46.248736+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:47.249057+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:48.249457+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:49.249796+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:50.250246+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:51.250580+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:52.251052+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:53.251436+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:54.251801+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:55.252121+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:56.252542+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:57.253059+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:58.253445+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:36:59.253830+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:00.254406+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:01.254782+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:02.255160+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 36405248 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:03.255509+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:04.255860+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:05.256046+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:06.256433+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:07.256787+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:08.257203+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:09.257616+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:10.258196+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:11.258603+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:12.260052+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:13.261280+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:14.262286+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:15.263073+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:16.263702+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:17.264210+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:18.264584+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:19.265502+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:20.266440+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:21.266870+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:22.267748+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:23.268470+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113254400 unmapped: 36397056 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:24.269153+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:25.269489+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:26.270125+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:27.270609+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:28.271152+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:29.271621+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:30.272217+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:31.272736+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:32.273311+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:33.273686+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113270784 unmapped: 36380672 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:34.274063+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
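Over the whole segment, heap stays fixed at 149651456 bytes while mapped creeps upward and unmapped shrinks by the same amount (the two always sum to heap), i.e. the allocator is gradually touching pages of an already-reserved heap rather than growing it. A sketch over the distinct mapped readings appearing in this segment:

```python
# Sketch: track the mapped-heap drift across successive tune_memory samples.
# The list holds the distinct "mapped" readings from this log segment;
# mapped + unmapped stays equal to heap (149651456) throughout.
mapped = [113221632, 113229824, 113238016, 113246208,
          113254400, 113270784, 113278976]
deltas = [b - a for a, b in zip(mapped, mapped[1:])]
print(deltas)                       # [8192, 8192, 8192, 8192, 16384, 8192]
print({d // 4096 for d in deltas})  # growth in 4 KiB page multiples: {2, 4}
```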
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:35.274376+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:36.274811+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:37.275279+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:38.275745+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:39.276205+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:40.276654+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:41.277097+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:42.277477+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:43.278030+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:44.278567+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:45.279392+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:46.280187+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:47.280791+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:48.281269+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:49.282180+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:50.282580+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:51.283012+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:52.283231+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:53.283589+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:54.283809+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:55.284059+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:56.284241+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:57.284435+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:58.284635+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 36356096 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:37:59.284819+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:00.284986+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:01.285188+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:02.285447+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:03.285629+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:04.285815+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:05.286192+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:06.286393+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:07.286580+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 36347904 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:08.286771+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 36339712 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 05 02:38:44 compute-0 ceph-osd[206647]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 05 02:38:44 compute-0 ceph-osd[206647]: bluestore.MempoolThread(0x5630e4d6fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195451 data_alloc: 218103808 data_used: 7188480
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:09.286971+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 36339712 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:10.287177+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 36339712 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:11.287418+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fa8b3000/0x0/0x4ffc00000, data 0x8d4cff/0x9ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}'
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 36257792 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:12.287628+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113278976 unmapped: 36372480 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: tick
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_tickets
Dec 05 02:38:44 compute-0 ceph-osd[206647]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-05T02:38:13.287798+0000)
Dec 05 02:38:44 compute-0 ceph-osd[206647]: prioritycache tune_memory target: 4294967296 mapped: 113311744 unmapped: 36339712 heap: 149651456 old mem: 2845415832 new mem: 2845415832
Dec 05 02:38:44 compute-0 ceph-osd[206647]: do_command 'log dump' '{prefix=log dump}'
Dec 05 02:38:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 05 02:38:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175095609' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='client.16021 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 05 02:38:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1175095609' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 05 02:38:44 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16035 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 05 02:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343804078' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 05 02:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1572905284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 05 02:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1572905284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: from='client.16035 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343804078' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: pgmap v2696: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1572905284' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: from='client.? 192.168.122.10:0/1572905284' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 05 02:38:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 05 02:38:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663368569' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 02:38:46 compute-0 nova_compute[349548]: 2025-12-05 02:38:46.127 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 05 02:38:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270887085' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec 05 02:38:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec 05 02:38:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3663368569' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 05 02:38:46 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2270887085' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 05 02:38:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 05 02:38:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305394941' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 02:38:47 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16049 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:47 compute-0 systemd[1]: Starting Hostname Service...
Dec 05 02:38:47 compute-0 systemd[1]: Started Hostname Service.
Dec 05 02:38:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 05 02:38:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444920739' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 02:38:47 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3305394941' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 05 02:38:47 compute-0 ceph-mon[192914]: from='client.16049 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:47 compute-0 ceph-mon[192914]: pgmap v2697: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:47 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3444920739' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 05 02:38:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 05 02:38:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345895976' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 02:38:48 compute-0 nova_compute[349548]: 2025-12-05 02:38:48.242 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:48 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16055 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:48 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2345895976' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 05 02:38:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 05 02:38:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985369170' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 02:38:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:49 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16059 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:49 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16061 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:49 compute-0 ceph-mon[192914]: from='client.16055 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2985369170' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 05 02:38:49 compute-0 ceph-mon[192914]: pgmap v2698: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:49 compute-0 ceph-mon[192914]: from='client.16059 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 05 02:38:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628167649' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 05 02:38:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734316373' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mon[192914]: from='client.16061 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2628167649' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3734316373' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 05 02:38:50 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16067 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:51 compute-0 nova_compute[349548]: 2025-12-05 02:38:51.131 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16069 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 05 02:38:51 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 05 02:38:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 05 02:38:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185462623' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 02:38:51 compute-0 ceph-mon[192914]: from='client.16067 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:51 compute-0 ceph-mon[192914]: pgmap v2699: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/185462623' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 05 02:38:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 05 02:38:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974504716' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 02:38:52 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16075 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:52 compute-0 ceph-mon[192914]: from='client.16069 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:52 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/974504716' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 05 02:38:53 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16077 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:53 compute-0 nova_compute[349548]: 2025-12-05 02:38:53.244 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 05 02:38:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569533527' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:38:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 05 02:38:53 compute-0 ceph-mon[192914]: from='client.16075 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:53 compute-0 ceph-mon[192914]: from='client.16077 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 05 02:38:53 compute-0 ceph-mon[192914]: pgmap v2700: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:53 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3569533527' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 05 02:38:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 05 02:38:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/403404220' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 02:38:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 05 02:38:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870291982' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/403404220' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 05 02:38:54 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2870291982' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:54 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16085 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 05 02:38:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14031506' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:38:55 compute-0 podman[495864]: 2025-12-05 02:38:55.729076392 +0000 UTC m=+0.126722052 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible)
Dec 05 02:38:55 compute-0 podman[495862]: 2025-12-05 02:38:55.738806667 +0000 UTC m=+0.148940149 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 05 02:38:55 compute-0 podman[495859]: 2025-12-05 02:38:55.753099469 +0000 UTC m=+0.159263449 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Dec 05 02:38:55 compute-0 podman[495863]: 2025-12-05 02:38:55.786873211 +0000 UTC m=+0.190376576 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 05 02:38:55 compute-0 ceph-mon[192914]: from='client.16085 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 05 02:38:55 compute-0 ceph-mon[192914]: pgmap v2701: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec 05 02:38:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/14031506' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 05 02:38:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 05 02:38:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940725773' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 02:38:56 compute-0 nova_compute[349548]: 2025-12-05 02:38:56.136 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 05 02:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:38:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 05 02:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:38:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 05 02:38:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:38:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 05 02:38:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 05 02:38:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173919783' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec 05 02:38:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343658664' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 02:38:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1940725773' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 05 02:38:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3173919783' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 05 02:38:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2343658664' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 05 02:38:57 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.16095 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch